Berkeley Earth Update

by Steve Mosher

It has been a while since we’ve done an update and there is much to report on: an update to the website, some new memos to discuss, and the status of the papers. Let’s start with the website.

Website update

  1. Updated data and code drop. The process for automatically updating our temperature record has progressed. Because we rely on 14 different datasets, each with its own update schedule, we cannot yet produce new numbers on a monthly basis, but that is the goal. The code and documentation have been improved so that dedicated end users can download everything and get it running without much outside help; it is still not a beginner project. Over the next few months we will be working with researchers who have expressed interest.
  2. Gridded data has been posted, on both a 1-degree grid and equal-area grids.
  3. State and province data. Since we create a temperature field, we can use shapefiles to extract the average temperature for irregular areas. Data for the states and provinces of the largest countries is provided. In principle, one could specify any arbitrary polygon and extract its average from the field, which may be useful for applications such as reconstructions.
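As a rough illustration of the polygon-extraction idea (this is not Berkeley Earth’s code; a real extraction would use proper shapefiles and exact cell-area weights), one can select the grid cells whose centers fall inside a polygon and average them with cos(latitude) weights:

```python
# Toy sketch: average a 1-degree gridded field over an arbitrary polygon.
# The field, polygon, and helper names here are all made up for illustration.
import math

def point_in_polygon(lon, lat, poly):
    """Ray-casting test; poly is a list of (lon, lat) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            if lon < (x2 - x1) * (lat - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def polygon_average(field, poly):
    """field maps (lon, lat) cell centers -> temperature."""
    num = den = 0.0
    for (lon, lat), t in field.items():
        if point_in_polygon(lon, lat, poly):
            w = math.cos(math.radians(lat))  # cell area shrinks toward the poles
            num += w * t
            den += w
    return num / den

# Toy field: constant 10 C inside a box, 0 C outside.
field = {(lon + 0.5, lat + 0.5): (10.0 if 0 <= lon < 10 and 40 <= lat < 50 else 0.0)
         for lon in range(-20, 20) for lat in range(30, 60)}
box = [(0, 40), (10, 40), (10, 50), (0, 50)]
avg = polygon_average(field, box)
```

The cos(latitude) weight matters: on a 1-degree grid, high-latitude cells cover less area, so an unweighted average would overweight the poles.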

Memos

There are four new memos that we are posting for comment. Two of them relate to Hansen’s PNAS paper (H2012) on extremes. Hansen’s paper was read by some as an argument for a more variable climate; here is an example of how some people understood it. Both of these memos have been reviewed and improved by Hansen and Ruedy, and we thank them for their contributions. The first memo was written by Sebastian Wickenburg and the second by Zeke Hausfather.

The PNAS paper does not establish that “if you put more energy into a system, variability increases.” This is shown in two ways. Wickenburg shows that the widening of the distribution can be a mere methodological artifact. Hausfather makes the same point and illustrates a different methodology that challenges assertions of increased variability. The primary insight of H2012 remains: in a warming climate we expect to see more warm extremes. However, H2012 did not establish, or aim to establish, that the distribution of temperatures has widened. Showing a change in distribution probably requires different statistical tests than those that were applied.

The third and fourth memos are extensions of our methods paper. The third memo is a simple exercise to help people visualize the difference between the Berkeley, CRU, and GISS methods. To illustrate the difference we use visual data rather than temperature data.

The fourth memo stems from reviews of the methods paper. A reviewer requested that we use GCM data to establish that our method was superior to CRU’s. As the methods paper was already rather long, we decided to write up a separate memo focused on this test. The approach is straightforward. A 1000-year GCM simulation is used as ground truth. Since this data exists for every place and time, we can calculate the “true” average at any given time. This ground truth is then sampled using the GHCN station locations as a filter, and the experiment is repeated using subsamples of the 1000-year run. The results show that with a limited spatial sample (GHCN locations) and temporal gaps (not every station is complete), the Berkeley method has the lowest prediction error. This should come as no surprise. As far as I know, this is the first time any rigorous, purely methodological test has been performed on CRU or GISS, and it is one of the benefits of having code posted for the various methods.
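The shape of that experiment can be sketched in a few lines. This is a toy stand-in, not the memo’s actual code: the “GCM” here is just a synthetic field (a shared global signal plus local noise), and the station count and gap rate are invented. But the structure is the same: truth known everywhere, a sparse and gappy sample, and a prediction-error score for whatever reconstruction method you plug in.

```python
# Toy ground-truth experiment: synthetic "GCM" field, sparse gappy sampling,
# and an RMS error score for a naive station-average reconstruction.
import math, random

random.seed(1)
n_cells, n_years = 100, 50
# Truth: a shared global signal plus independent local weather per cell.
global_signal = [0.02 * t + random.gauss(0, 0.1) for t in range(n_years)]
truth = [[global_signal[t] + random.gauss(0, 0.5) for c in range(n_cells)]
         for t in range(n_years)]
true_mean = [sum(row) / n_cells for row in truth]  # the "true" global average

stations = random.sample(range(n_cells), 20)       # sparse spatial sample

def naive_estimate(t):
    # Each station is randomly missing ~30% of years (temporal gaps).
    obs = [truth[t][c] for c in stations if random.random() > 0.3]
    return sum(obs) / len(obs) if obs else 0.0     # practically never empty

est = [naive_estimate(t) for t in range(n_years)]
rmse = math.sqrt(sum((e - m) ** 2 for e, m in zip(est, true_mean)) / n_years)
```

Swapping `naive_estimate` for different averaging methods and comparing their `rmse` values is, in miniature, the test the memo performs.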

Papers

The results paper has been published by Geostatistics and Geoinformatics and is available online. The methods paper and the UHI paper are undergoing final review prior to being submitted.

Poster

Hausfather, Rhodes, and Menne collaborated on an AGU poster comparing the Berkeley “scalpel” technique to NOAA homogenization.


JC moderation note:  Comments on this thread will be moderated for relevance

906 responses to “Berkeley Earth Update”

  1. The image you linked to provided by the Hare-Brained one is rather interesting.
    You note that it is a plot of Energy vs. reaction coordinate

    http://rabett.blogspot.com/2012/08/elis-three-laws-or-hansen-simply.html

    What would the diagram look like if we were to use a 24-hour heating/cooling cycle, observe the average coiled state of the protein, and then measure energy as (Tmax + Tmin)/2?
    The odd thing is that in thermodynamics we know that ‘energy’ is real, as is ‘temperature’; however, ‘average’ energy and ‘average’ temperature are unreal when describing dynamic systems.
    We can of course talk about a system being in local thermal equilibrium, and indeed these are the only locales we can describe using classical thermodynamics, but we cannot describe the thermodynamics of a thermally oscillating system using equilibrium thermodynamics.

    • Eli’s second law is wrong.

      • Well Steve, the link is here. Your first statement is correct, it is how Eli thought about the Hansen, Sato and Ruedy paper, BUT Eli never said that was what HSR were trying to show, merely what the Rabett saw in the data. HSR did look at different baseline periods (why Zeke used 0 – 0 baselines instead of 1 – 0 is a minor annoyance)

        As Eli recalls reading HSR there is a major issue that Zeke has not confronted. For this type of analysis, a baseline period which includes a significant change in the controlling variable (temperature extremes) may not be suitable. HSR chose 1951-1980 because it was a period when global temperatures were relatively constant. HSR state:
        ——————————-
        The question then becomes, what is the most appropriate base period. Our initial choice, 1951–1980, seems to be nearly optimum. It was a period of relatively stable global temperature and the earliest base period with good global coverage of meteorological stations, including Antarctica. . . . .

        The 30-y period 1951–1980 with relatively stable climate is sufficiently long to define a climatological temperature distribution, which is near normal (Fig. 9, Left), yet short enough that we can readily see how the distribution is changing in subsequent decades. This exposes the fact that the distribution is becoming broader and that there is a disproportionate increase of extreme hot outliers. In contrast the 60-y base period, 1951–2010, and the 1981–2010 base period, which include the years of rapidly changing climate within the base period, make it more difficult to discern the changes that are taking place.
        ——————————-

        Were Eli not a wonderful bunny he would call you and manacker out for telling untruths about him (Damn, just can’t do the McIntyre spit step). OTOH, Eli suggests that Steve RTFR (maybe also Zeke :)

      • I am sorry Eli, but I read your comment as your interpretation of the main points of Hansen’s paper, not your analysis of the data.

        You linked to this:

        “There are three simple points in the Hansen paper

        First if there is an increasing/decreasing linear trend on a noisy series, the probability of reaching new maxima or minima increases with time

        Second, if you put more energy into a system variability increases.

        Third, if you put more energy into a system variability increases asymmetrically towards the direction favored by higher energy

        Comment by Eli Rabett — 21 Aug 2012 @ 9:23 AM”

        #########################

        The way I read that, you are stating that Hansen’s paper made these points. It didn’t say that you thought the data showed this; you clearly stated that there were three points IN HIS PAPER. If you meant to write that Hansen missed something in his data that you saw, then I apologize. So, when I went back to your source where you quoted yourself, I took the principle of charity with me. I assumed that when you wrote that the paper made these points, the paper actually made these points and that your words meant what they plainly mean.

        Given that Hansen has reviewed our memos, made helpful suggestions, and sees them as complementary to his work, I’m at a loss to explain your displeasure. Of course you are free to say that when you wrote “There are three simple points in the Hansen paper” you meant something other than the plain meaning of those words, and I’m happy to accept your translation of rabbit. It might be easier just to say we didn’t fully understand each other. Where is Willard?

      • Eli

        ‘The question then becomes, what is the most appropriate base
        period.”

        you need to understand that choosing a base period, any base period, is the step that causes the methodological problem. See Sebastian’s memo.

      • > Where is Willard?

        Helping his youngest shoot Angry Birds.
        That’s the only physics I care for right now, Mosh.

        I appreciate your Zeke channel, though.
        Keep it open, you’ll gain time and allies.

        We should have a game where auditors shoot bunnies in a circle.
        It could be called Firing Squads!

        INTEGRITY ™ — We Play Firing Squads !

      • By all means willard spend time with the young ones.
        I suppose I can just say that I misunderstood Eli and leave it at that.
        Not that anything turns on it. Plus the Niners are in the Super Bowl, so charity abounds.

      • willard, shooting bunnies in a circle would look very circumspect.

      • willard, shooting Starlings? Wow weird is that, now?:o)

      • Sorry for the bad formatting Steve, but you did READ in the HSR paper:

        ————————–
        This exposes the fact that the distribution is becoming broader and that there is a disproportionate increase of extreme hot outliers.
        ————————

        and you did LOOK at Figure 9, which shows exactly what Zeke did with the change in base period? And you did FOLLOW their discussion about why the 1951-1980 base period was the appropriate one and not the others?

        Your argument is with Hansen, not Eli. The Rabett does see the disconnect between what Hansen wrote you and what is in the Hansen, Sato and Ruedy paper, but that is NOT Eli’s problem with respect to something he wrote before Hansen wrote to you.

      • Mosh,

        I had a minute to play Firing Squad! with our valiant Knight:

        http://rabett.blogspot.ca/2013/01/reader-rabett.html

        Please remind Bob, who would never notice this kind of thing at Eli’s.

      • Steven Mosher

        Nice willard. I suppose I should come defend my friend Ravetz. Funny, the old-school European leftists and Mosh got along pretty well in Lisbon.
        As for the rabbett, I think I’ve offered him the peace pipe and he is still off in his warren thumping his foot. Oh well, maybe Zeke can have some luck.

      • Consensus does seem difficult to get in regards to climate change doesn’t it….lol

      • INTEGRITY ™ — We Play Firing Squads !

        willard, you said squad, which is it?

      • (I tried posting this earlier. I apologize if this ends up a duplicated post.)

        Since some people have been taking umbrage with Eli’s comment:

        Second, if you put more energy into a system variability increases.

        I thought I’d weigh in on this while I’m over here. What Eli says here is actually true.

        A van der Pol oscillator is a good example (it’s a simple physical system that shares many characteristics with more complex systems).

        x”(t) + (-r1 + r2 x(t)^2) x'(t) + w0^2 x(t) = 0

        A few statements about the interpretation of this equation:

        Note that the linear damping term (-r1) is negative; this is a way of parametrizing an energy source that is external to the system being modeled. The Sun would act as that energy source for the Earth.

        The term r2 x(t)^2 is referred to as the “nonlinear damping term”. If r2 = 0, for initial x(0) and/or x'(0) ≠ 0, the system is unstable and x(t) -> infinity as t-> infinity, that is it is physically unrealizable.

        Increasing r1 is equivalent to increasing the amount of energy in the system.

        Given any initial nonzero values for x(0) and/or x'(0) it will tend to the limit cycle given approximately by x(t) = 2 sqrt(r1/r2) sin (w0 t + phi).

        (You can verify the general form of this by noting that r1/r2 has dimensions of length-squared.)

        Note that if you increase r1, the amplitude 2 sqrt(r1/r2) gets larger, meaning that as I increase the amount of energy available to the system, the amplitude increases.

        Recognizing that this is a very simple system and that multiphase systems like the Earth’s climate needn’t follow this general prescription, it should still be noted that if increased variability is a generic feature of simple models, it isn’t an immediate conclusion that more complex models should behave in the opposite manner. To me, the burden of proof here seems to be on people claiming that the variability should be diminished.

        That said, simply because the variability is increased does not mean you’d expect to resolve it with measurement at this point. (In fact, you’d probably not be able to resolve this effect at the moment, and if you could, things really would be “worse than we expected.”)
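The amplitude scaling claimed in that comment can be checked numerically. The sketch below is my own toy integrator, not anything from the thread or the memos: it integrates the van der Pol equation above with a classical RK4 step and confirms that the settled limit-cycle amplitude is about 2·sqrt(r1/r2), so doubling r1 (more energy in) scales the amplitude by about sqrt(2).

```python
# Toy RK4 integration of x'' + (-r1 + r2*x^2) x' + w0^2 x = 0,
# measuring the limit-cycle amplitude after transients decay.
import math

def van_der_pol_amplitude(r1, r2, w0=1.0, dt=0.01, t_end=200.0):
    def deriv(x, v):
        # x' = v ;  v' = (r1 - r2*x^2) v - w0^2 x
        return v, (r1 - r2 * x * x) * v - w0 * w0 * x
    x, v = 0.1, 0.0                    # any small nonzero start reaches the limit cycle
    n = int(t_end / dt)
    peak, settle = 0.0, int(0.8 * n)   # only measure over the final 20% of the run
    for i in range(n):
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if i > settle:
            peak = max(peak, abs(x))
    return peak

a1 = van_der_pol_amplitude(r1=0.1, r2=0.1)   # expect ~ 2*sqrt(0.1/0.1) = 2
a2 = van_der_pol_amplitude(r1=0.2, r2=0.1)   # expect ~ 2*sqrt(0.2/0.1) ~ 2.83
```

The 2·sqrt(r1/r2) form is the leading-order (weakly nonlinear) result, so the match is best when r1 is small, as it is here.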

      • Eli,

        My first approach (lowess-based detrending) was a more explicit attempt to remove the need for baseline dependence. It didn’t completely work (1951-1980 baseline still had slight broadening, while 1981-2010 did not), but it eliminated most of it. The main point is that the method chosen by Hansen is not particularly appropriate to discern changes in variance in temperatures over time. That doesn’t change his conclusions per se, as an increasing mean with constant variance would still lead to considerably more extreme heat events.

    • Doc Martyn

      Rabett’s laws #2 and #3:

      Second, if you put more energy into a system variability increases.

      Third, if you put more energy into a system variability increases asymmetrically towards the direction favored by higher energy

      Both have been proven wrong by Zeke Hausfather and Sebastian Wickenburg (see Steven Mosher’s post above).

      The “cwazy Wabbit” missed two out of three.

      In baseball a batting average of 0.333 is not bad (but it sucks otherwise).

      Max

      • Our preferred verbiage is to say that the “increased variability” claim has not been established, rather than saying it is wrong. This nuance, of course, could lead to prolonged fights.

      • Steven,
        The same amount of energy entering the system, coupled with less energy leaving the system, equates to less overall energy flux.
        If, otoh, you’re talking about the increase in potential energy, you have to compare that with the total amount in the system (the amount it took to increase the earth system from 0K)

      • Eli’s observations were a simple restatement of the Boltzmann distribution as shown in this typical figure. If you do the integrals, the variance in (width of) the distribution of likelihood that the system has energy E is sqrt(3/2) kT. The shape of the distribution becomes asymmetric in the direction of higher E with increasing T. That’s the easy part (actually trivial) and is exactly what Eli said.

        Second, if you put more energy into a system variability increases.

        Third, if you put more energy into a system variability increases
        asymmetrically towards the direction favored by higher energy

        The tougher one is seeing why this is true of temperatures. The temperatures that Hansen was talking about were local temperatures, each of which was simply (OK, fairly simply) proportional to the local average energy. Since these are measurements of different things (situations) the central limit theorem does not apply.

        FWIW, Zeke and Hansen are arguing about a data set, Eli is trying to find a basic principle underlying the data set.
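The sqrt(3/2) kT width quoted in that comment is easy to check numerically. The toy below is my own sketch, unrelated to any of the memos: it samples single-particle kinetic energies (unit mass, each velocity component Gaussian with variance kT) and compares the sample mean and spread against (3/2) kT and sqrt(3/2) kT. Doubling T doubles both the mean and the width, which is the "more energy in, wider distribution" statement for this particular system.

```python
# Monte Carlo check: for Maxwell-Boltzmann energies, mean E = (3/2) kT
# and the standard deviation of E = sqrt(3/2) kT.
import math, random, statistics

random.seed(2)

def energy_stats(kT, n=100_000):
    """Sample E = (1/2)(vx^2 + vy^2 + vz^2), each v ~ N(0, sqrt(kT)), m = 1."""
    s = math.sqrt(kT)
    E = [0.5 * (random.gauss(0, s) ** 2 +
                random.gauss(0, s) ** 2 +
                random.gauss(0, s) ** 2) for _ in range(n)]
    return statistics.fmean(E), statistics.stdev(E)

mean1, sd1 = energy_stats(kT=1.0)   # expect mean ~ 1.5, sd ~ sqrt(1.5) ~ 1.22
mean2, sd2 = energy_stats(kT=2.0)   # expect mean ~ 3.0, sd ~ 2 * sd1
```

Whether this single-particle picture transfers to local temperature records is, as the comment itself notes, the genuinely hard part.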

      • Steven Mosher

        ‘FWIW, Zeke and Hansen are arguing about a data set, Eli is trying to find a basic principle underlying the data set.”

        No Eli, you were misinterpreting what Hansen was trying to show. He was not trying to show increased variability (personal communication) and was thankful that we cleared that up.

        So, this is simple. People who thought he was trying to show that were wrong. It’s not a big mistake.

        Hansen and Zeke are not arguing. Hansen agrees with Zeke. The confusion came from people and rabbets who thought he was trying to show a widening distribution.

      • David Springer

        Da pwobwem wit da wabbit’s second and turd laws is enewgy distwibution. The extra energy is preferentially going to the higher northern latitudes. This decreases the temperature differential between tropics and polar zone. A decreased delta T means less work can be accomplished across the gradient. Carnot’s law not Boltzmann’s. The result is more calm not more chaos.

      • Steven Mosher

        Oh, and Eli, just to show you how its done.

        I, Steven Mosher, was wrong when I thought Hansen was trying to prove that the distribution had widened. His charts show that, but he was not trying to prove it. See, simple.
        In some ways people who thought Vaughan Pratt was trying to forecast temps in 2100 made the same kind of mistake. A simple mistake, perfectly understandable, not a huge deal; not a ‘science mistake’ but a failure to communicate.

      • To be fair, I did initially misread Hansen to be claiming that variability was increasing, and he was kind enough to clarify.

  2. What about the last 15 years (or 16). Why do your graphs keep going up?

    Big joke.

    • 1. The charts currently are land only.
      2. CRU’s methodology is inferior and has the largest errors especially in trend estimation.

      So, our graphs go up because A) it’s land only, and B) CRU underestimates the actual warming.

      • Hopefully the Russians will take some comfort in the slope of your graphs.

        http://rt.com/news/winter-snow-russia-weather-275/

      • GaryM, those Russians have gotten good at moving snow. By spring, I bet they have it all back to normal.

      • Yes Gary, the changes in the Arctic predictably lead to extreme snowfall in the NH. Warmer weather in one location can lead to increased winter snowfall in another location.

      • The Skeptical Warmist

        Gary M.,

        The Sudden Stratospheric Warming event that began in late December and is only now fading is the direct cause of much of the cold outbreaks in the NH so far this winter, including the extreme snowfall in Moscow and the current cold period in Britain. That SSW event had its roots in a heat wave and high pressure area over India and Pakistan back in mid-December.

        Amazing how this planet is so connected that heat in one area/region can lead to cold in another, isn’t it?

      • Gates
        Careful analysis of NOAA’s animation for the period at 30 hPa (which is a lower altitude than the 10 hPa on which you base your statement) shows no sign of anything over India and Pakistan, but conclusively shows a plume of hot air originating from the Kamchatka volcanic eruption, connected to Kamchatka for a period of about 10 days.
        http://www.vukcevic.talktalk.net/SSW2012-13.htm
        Simple logic says that an observation at a higher altitude cannot indicate the source accurately if it is not evident at the lower altitudes.

      • Mosher, Canada is cooling over the last 15 years if you use EC data and 1×1 grid squares, and warming if you use 5×5 grid squares.

        http://sunshinehours.wordpress.com/2012/12/09/canada-grid-square-choices-5×5-warming-and-1×1-cooling/

        Where is your 15 year data? Gridded? 1×1 and 5×5.

      • Steven Mosher

        Bruce, last time you used EC data you screwed up by not applying the quality flags, so I will take a pass on evaluating anything you have to say about that data. The gridded data is on the site; you need to understand ncdf to read it.

      • 1) I am using the EC monthly summaries.

        2) Why ncdf? It just makes BEST more of a waste of time.

      • Steven

        Warmer weather in one location can lead to increased winter snowfall in another location.

        How ’bout:

        Increased snowfall in one location can lead to warmer weather in another location.

        (Cause and effect?)

        Max

      • Steven Mosher

        Bruce, you still need to apply QC to the monthly summaries. [slaps forehead]

        why .nc?

        Well, because it’s a standard, and because end users request that a standard be used rather than inventing an ad hoc format that requires coding. You are basically asking a question like “why did you use pdf?”
        Trust me, asking why we use ncdf does not make you look entirely credible.

      • The Skeptical Warmist

        Vuk,

        Respectfully, your volcanic origin for this latest SSW just does not fit the observed facts. Remember the 10 hPa chart showed the warming starting in southern Asia in late December? Here is the vertical velocity averaged over the whole troposphere for that period, covering both southern Asia and your volcanic area in far northeast Asia:

        http://tiny.cc/qks8qw

        You can physically see the huge upward vertical velocity in the troposphere in S. Asia, far away from your volcanoes. This represents a massive vertically directed Rossby wave breaking on the tropopause, compressing air as it pushes into the stratosphere, which you clearly see having the same origin in S. Asia at the 10 hPa level. What you see much further north and east of this, near your volcanoes a few days later at the 50 hPa level, is the downward-falling warmed air recoiling from the initial upward push shown above. This downward compression-warming recoil is associated with the high pressure anomaly over the Arctic during SSWs.

      • The Skeptical Warmist

        Vuk,

        Just for fun, I plotted the vertical velocity across the whole troposphere, 1000 to 100 hPa, just prior to the big SSW in 2009. Interestingly, here’s the location on Earth that had a huge vertical velocity pushing right into the stratosphere in the days prior to that SSW. It turns out to be the same region as this year (which was a surprise to me):

        http://tiny.cc/6oz8qw

        This region is an area where sometimes the jet stream comes down across a high desert in Asia and then hits a range of 20,000 foot peaks, deflecting the stream to vertical. Probably more research is warranted…

      • Hi Gates
        You are a scientist and possibly an expert in these matters. If you are studying the SSW, or writing or contributing to a paper, then it is wise to consider all known factors.
        I have drawn your attention to one, which could be a trigger by opening a ‘funnel’ in the tropopause for the warm Pacific air to flow into the stratosphere.
        I suggest carefully studying NOAA’s animation and the jet-stream effect on the SSW flow, with its 10-day holding link to the Kamchatka area.
        http://www.vukcevic.talktalk.net/SSW2012-13.gif
        Your vertical velocity graph, I would say, is at the wrong latitude for the SSW and is more likely to be associated with the Hadley/Ferrel cell boundary around 30N.
        Also consider the tropopause (red line) height
        http://www.srh.noaa.gov/jetstream//global/images/jetstream3.jpg
        in relation to latitude. Kamchatka is at 55N, a latitude where the tropopause is much lower than at the 25-30N you are suggesting as the origin.
        Worth another look.

      • Mosher, ASCII is a common format for HadCRUT gridded data. All your other data is text. I have scripts to download and analyze HadCRUT4 ASCII.

        In case you don’t know where their data is:

        http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html

      • Bruce, in my packages you will find routines for reading CRU grids.
        It’s a pity they still use the format they do. Research .nc and join the rest of us in the 21st century.

      • The standard is to offer both, Mosher.

        BEST’s obsession with warming since 1960 is so 20th century. The important question today is the 16-year pause, but your group is still obsessed with ignoring the 1930s/40s and proving 1998 was warmer than 1960.

        You deserve to be in an obscure unread journal.

      • Steven Mosher

        Bruce, the standard is to offer both? Sorry, now you are just making things up.

  3. First time I realize that Zeke’s name is Hausfather.

    No wonder he’s a class act.

  4. Whether or not a globe that is warming by fractions of a degree will show significantly higher extremes is the subject of Hansen et al. (2012) “Perception of Climate Change”.

    If the increase of the “extremes” is also only measured in fractions of a degree, there is not much to be concerned about. If, however, there is also an increase in the variance with warming, this could be of concern.

    As you point out, some people erroneously interpreted Hansen 2012 to conclude that the variance in extremes would increase with warming.

    In his analysis of the Hansen et al. paper, Zeke Hausfather addresses the question: “is variance increasing as the globe warms?”

    Using the 1951-1980 baseline, he concludes, “anomalies plot shows slightly less variance in the current decade than in prior decades”.

    Using the 1991-2010 baseline, Hausfather writes, “the variance was greater in the past and is smaller now!”

    So the “variance” is actually decreasing with warming, which means there should be fewer extreme hot days.

    Sebastian Wickenburg identifies a “pinch effect”, which creates a spurious wider variance at later times:

    Therefore, the number of ‘hot’ events, i.e. much higher local anomalies than during the base period, will always increase in such a system, especially if there also is a global average increase in T. The results of comparing the number of 3 sigma events at later times to the number of 3 sigma events during the base period, therefore are difficult to interpret, because the ‘pinch’ effect will increase the number of such events at later times.

    So much for statistical analyses of Hansen’s paper.

    But let’s look at the actual record.

    The actual record of extreme hot temperatures in the USA (by state) shows that:
    http://usatoday30.usatoday.com/weather/wheat7.htm

    Only 6 were after 1990.
    Only 1 was after 2000.

    23 were in the 1930s.

    7 were before 1930.

    The remaining 13 were between 1940 and 1990.

    Conclusion: It looks like there is no visible increase in extreme hot temperatures as a result of global warming.

    Max

    • > But let’s look at the actual record.

      Why the Dice Gods invented statistics when we can look at actual records with our God’s Eyes.

      • Willard, me boy

        “Actual records” are always a nice thing to look at.

        Hausfather and Wickenburg did a statistical analysis using “actual records” and concluded that there is no increase in variance with higher temperature (as Rabett claimed).

        I simply took the “actual record” of “record” high temperatures by US state (a very limited data base, to be sure) and also came to the same conclusion as the statistically much more relevant memos of Hausfather and Wickenburg.

        Got it?

        It’s not really that complicated.

        Max

      • MiniMax,

        Repeating your comment is not enough to dodge my point:

        If looking at data had any validity, there would be no reason to analyze them.

        What you’re doing is called eye-balling.

        Got it?

        Thanks!

    • “Last updated 2006” would be an important observation that would affect your conclusion.

      • bob droege

        How many statewide record highs have there been since 2006?

        Thanks for info (if you have it).

        Max

      • Bob Droege

        Bob Koss has answered your question below (1 record high since 2006), so it has not “affected my conclusion” (as you thought it would).

        Most were still in the 1930s.

        Max

    • manacker, as I pointed out yesterday when you mentioned this, it’s apples and oranges. Hansen’s paper was looking at 3-month average temperatures, not daily records. These would show a much smoother variation over time and would show climate change more clearly. I wish someone would do an analysis of summer records like this to compare properly with Hansen’s statistics.

      • Jim D

        The memos by Hausfather and Wickenburg answered the key question, which was left open by Hansen (but apparently misinterpreted by some):

        The variance has NOT increased with warming (in fact, it looks like it may have decreased).

        The statewide daily peak values I cited are obviously measuring top values differently, but suggest that there has been no real increase in record hot days recently.

        It looks like the 1930s were the decade in the USA with the most statewide record hot days. I realize this is simply one set of data points in a puzzle that is much more complicated, but every bit of data tells us something.

        Max

      • manacker, yes, a lot of people misinterpreted the 3-sigma statements as being about a change in variance, when actually they are about the properties of a shifted bell curve, which makes what was once an extreme summer much more common.

      • It’s still fundamentally wrong and highly misleading to define 3-sigma events in relation to some past average. That approach contains the same fundamental error as does the interpretation that the widening would be proof of increased variability. When the latter is dismissed the former should also be dropped.

        Perhaps Steven might ask what Hausfather and Wickenburg think about this issue.

      • Hansen is using the mean from 1950-1980, the ‘aerosol age’, a period when T was lower than in the previous regime. It would be remarkable if there was no drift in the data at present.

      • Pekka, Hansen made it very clear how he defined his terms. He showed that events that used to occur half a percent of the time now occur nearly ten percent of the time, or equivalently that the affected area increased by this factor on average in a given year. These are useful things to point out, and they arise from the definitions he used as a way of visualizing climate change.
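The size of that jump follows directly from shifting a fixed-width bell curve, with no widening at all. A minimal check (a generic normal-distribution calculation, not Hansen’s actual data or thresholds): shift the mean by one standard deviation and see how the tail beyond the old +3-sigma mark grows.

```python
# Shifted-bell-curve arithmetic: a 1-sigma mean shift with unchanged variance
# multiplies the frequency of "old +3 sigma" events by more than a factor of 15.
import math

def normal_tail(z):
    """P(X > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

before = normal_tail(3.0)        # ~0.13% of values beyond +3 sigma originally
after = normal_tail(3.0 - 1.0)   # same fixed threshold after a 1-sigma mean shift
ratio = after / before
```

This is why "extremes relative to the old baseline become far more common" and "the distribution has widened" are logically separate claims: the first follows from a pure shift.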

      • Steven Mosher

        Pekka I will ask. I should see them both on tuesday

      • David Springer

        Probably not if they see you first. Not if they’re as smart as you say they are.

      • Steven Mosher

        Do, please ask. I too am very keen to get to the bottom of this. If, as seems possible, there is rather less ‘wrong’ with HSR12 than has been claimed, it would be right and proper to clear the air.

    • Here is an up to date graphic of the time distribution of the 50 state high/low records from NCDC data. http://i46.tinypic.com/9gy2hx.gif

      Five states are different from that USA list. The old record and year are shown in brackets.
      Colorado 114 1954 [118 1888]
      Minnesota 115 1915 [114 1936]
      S. Carolina 113 2012 [111 1954]
      Vermont 107 1912 [105 1911]
      Wyoming 115 1988 [116 1983]
      Since 1951 there were 18 highs and 24 lows set or tied with a previous record.

    • 4 record maximum temperatures for Jan 19 were set yesterday, plus one additional tie, against 0 record colds set for Jan 19, so I conclude that the world is continuing to warm.

      http://www.ncdc.noaa.gov/extremes/records/daily/mint/2013/01/19?sts%5B%5D=MO#records_look_up

      My argument is just as good as yours, ie not worth a plug nickel.

    • Max,

      The shrinking variance example was an illustration of how the results are highly dependent on the baseline period used. All we establish is that this method is inappropriate to determine changes in variance over time, so I wouldn’t claim that variance is increasing OR decreasing at this point.

  5. Steven Mosher

    Thanks for posting this very interesting information. I’ve already commented on the memos by Sebastian Wickenburg and Zeke Hausfather regarding the Hansen 2012 study.

    The memo by Robert Rohde explains why and how the BEST land anomaly gives a better picture of the land temperature record than either HadCRUT or GISS.

    Good stuff.

    Max

    • The Hansen memos, I hope, settle the question about whether or not H2012 established that the distribution has widened. H2012 didn’t establish that, and Hansen communicated that it was not his aim to establish that. In short, many folks misinterpreted that aspect of his paper. So he was happy to see us clarify that. Robert’s memos, I hope, should make people pause before they hang serious claims about “the pause” on the CRU record.

      • Steven Mosher

        It is a shame that the statistically superior BEST record only covers land area rather than global temperature. Let’s hope that gets extended one of these days, so we’ll know if BEST also sees a “pause” or not.

        Max

      • Gee Max, what are you going to do if they are also underestimating surface warming of the oceans, which is also likely.

        I had one word for it: HadCrappy.

      • SST has been added in. The release of that has been gated by various other projects. I hesitate to give a date on the release because there are a couple of other balls in the air.
        However, now that we’ve posted a gridded land product, folks can add any SST they want. The trick is combining the two, and there are a limited number of ways to do that.

      • Steven Mosher – will Wood for Trees ever update their Best data?

      • I’m not sure if he does regular updates of our product. Should be straightforward.

      • I don’t think the WfT data has changed at all since it first went up on the site. Still has the dip at the end.

      • JCH

        If the BEST sea data show warming and the info is as solidly based as the land only data appears to be, I will accept these data.

        What else?

        Max

  6. Steven, the diurnal temperature trend shift is still screaming look here :)

    • Ya, I know. Personally, I think that area of the results is way more interesting than the AMO stuff or the CO2 fit stuff.

      • I think that the “global” Tmax and Tmin comparing SST with MAT will help resolve the diurnal trend shift stuff, AMO stuff and narrow the CO2 range stuff :)

        Especially if y’all can work the satellite data into the mix.

      • At one point there were a couple of us who thought we should do a MAT plus SAT product, since ICOADS has as many MAT records as SST records.

      • Steven, “At one point there were a couple of us who thought we should do a MAT plus SAT product, since ICOADS has as many MAT records as SST records.” The MAT is a PITA, but from what little I have looked at, it would indicate considerably more warming as Tave, but the Tmin would reflect the 1985 shift in diurnal temperature and more closely follow the ocean oscillations.

        I think it would be worth the effort, but then I am just a fishing guy :)

      • David Springer

        captdallas2 0.8 +/- 0.2 | January 20, 2013 at 11:29 am |

        “Especially if y’all can work the satellite data into the mix.”

        Sure he can. These boys wrote the book on stitching disparate data together. It’s called Mannipulation or, alternatively, hiding the decline using Mike’s Nature trick.

    • I agree that is an interesting and robust-looking result. I think the diurnal range increases as the land area dries. Drier areas have larger diurnal cycles. We should expect land areas to dry because the ocean is not warming as fast as the land. It may have been decreasing before 1980 due to increasing aerosol effects decreasing surface solar radiation. My two cents.

      • JimD, marine air temperature appears to be warming faster than surface temperature. What would cause that?

      • Steven Mosher

        Captain, it’s SHI: ship heat islands.
        sarc off.

      • If you mean marine surface air temperatures globally are warming faster than the global sea-surface temperature, I would be skeptical of that statement until I see the data. Upper air, possibly, because of the land warming effect spreading over the oceans.

      • The Skeptical Warmist

        Jim D. said:

        “We should expect land areas to dry because the ocean is not warming as fast as the land.”
        ______

        Please define your terms. The use of SSTs or even near-surface temperatures over the oceans is of course not a good metric for how much the ocean itself is warming. The best metric for that would be ocean heat content down to the deepest levels you can consistently measure. Second to ocean heat content for measuring how much the oceans are warming would be measuring the effects of ocean warming from heat that is being advected away from the warmer tropical waters toward the polar regions via ocean currents, and these effects would be expected to be seen primarily in the reduction of sea ice. Given that substantially more heat is naturally advected toward the N. Pole than the S. Pole on this planet, and the fact that ocean currents can actually reach the N. Pole, it is reasonable to expect that the N. Pole would warm faster and more than the S. polar region.

      • TSW, OK, I mean ocean surface temperature, because that determines the atmospheric water vapor. This part is not warming due to a multi-decadal cooling phase and has impacts on the global water vapor which impacts the relative humidities over the warming land, i.e. they go down. This may also impact cloud cover over land (clouds have been decreasing) further amplifying the differential land heating and drying out the soil more. Drier soils have larger diurnal cycles.

      • steven

        Is this the dreaded Ship Heat Island Temperature (S.H.I.T.) effect?

        Max

      • Steven Mosher

        haha manaker.

        I was thinking at some point to address the “boundaries” on UHI contamination by looking at trends in MAT and tropospheric trends. Hmm, lots of work left undone. Most folks think that this is uninteresting.

      • You joke about Ship Heat Islands, but marine air temperatures measured during the day are affected by solar heating of the ship. For long term studies, most analyses have used only night marine air temperatures. The National Oceanography Centre at Southampton have done some work on daytime heating biases. NOCS also worked on a new NMAT data set and paper, which has some preliminary comparisons with land temperatures.
        http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50152/abstract

      • Thanks John,
        I really appreciate the work you have done in this area. I’ll have a look at your references.

  7. Congratulations on getting this out there. I know it was a lot of work.

  8. Now we know where Dr. Pratt can get his millikelvin stuff published.

  9. Dr. Curry,
    Might the reviews requested by JGR illuminate this discussion?
    JGR did not publish this for a reason.

    • Having seen the GIGS paper, I can see how it might not be considered an atmospheric science paper, as its results are mostly land statistics. The things they do to relate the results to the atmosphere are interesting but not ground-breaking. They subtract a volcano signal and a log(CO2) expectation (as Vaughan Pratt did), and find that what is left correlates with AMO but leads it, indicating that both AMO and land temperature change as a result of another forcing that can only be speculated on. Interestingly, from the perspective of Vaughan’s work, the BEST paper finds that a sensitivity of 3.1 C per doubling fits the log(CO2) part best. This is with no delay, as would be consistent with land where none would be expected, but they have ignored variations in sulphates and other GHGs, effectively assuming they are part of the CO2 signal, so it is not truly isolating CO2, just some aggregate proportional forcing.
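      The fit described in this comment — a temperature series regressed on log(CO2) plus a volcanic term — can be sketched in a few lines. This is an illustration of the generic technique with synthetic numbers, not the BEST code; the recovered coefficient is, by construction, the sensitivity planted in the fake data.

      ```python
      # Illustrative sketch (not the BEST code) of fitting a temperature series
      # to an intercept, a log2(CO2) term, and a volcanic (aerosol) index by
      # ordinary least squares. All data below are synthetic; the planted
      # sensitivity of 3.1 C/doubling mirrors the value quoted from the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(1850, 2011)
      co2 = 280.0 + 110.0 * ((years - 1850) / 160.0) ** 2   # plausible ppmv path
      volcanic = np.zeros(len(years))
      volcanic[rng.choice(len(years), 8, replace=False)] = rng.uniform(0.5, 1.0, 8)

      true_lambda = 3.1   # C per doubling (planted in the synthetic data)
      temp = (true_lambda * np.log2(co2 / 280.0)
              - 0.4 * volcanic
              + rng.normal(0.0, 0.1, len(years)))

      # Design matrix: intercept, log2(CO2/280), volcanic index
      X = np.column_stack([np.ones(len(years)), np.log2(co2 / 280.0), volcanic])
      coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
      print(f"fitted sensitivity: {coef[1]:.2f} C/doubling, "
            f"volcanic coefficient: {coef[2]:.2f}")
      ```

      Because CO2 here stands proxy for the aggregate forcing, the fitted coefficient is a sensitivity to that aggregate, not to CO2 alone — the point argued further down-thread.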

      • Although BEST offers no sea data, WoodForTrees has collected several sources for land and sea separately. This chart fits a trend line to each of CRUTEM3 (land, the red plot) and HADSST2 (sea, the blue plot).

        Click on the “Raw data” link at the bottom and search for “slope” to see that the trend lines have respective slopes of 0.251 C/decade and 0.152 C/decade. That is, during that period, the sea is warming at 152/251 = 61% of the rate at which the land is warming.

        Calculated naively, 61% of a land climate sensitivity of 3.1 C/doubling could therefore be expected to correspond to a sea climate sensitivity of 0.61*3.1 = 1.9 C/doubling.

        Assuming zero Hansen delay in my spreadsheet (i.e. setting HanDelay (my new name for GWDelay) to 0 and refitting accordingly), climate sensitivity should be 2.1 C/doubling. (My revised spreadsheet can be seen here, many thanks for the geophysical if not astronomical volume of feedback at Climate Etc. over the past six weeks or so.)

        Since the sea is 70% of the Earth’s surface, one might naively compute the global climate sensitivity as 0.7 * 1.9 + 0.3 * 3.1 = 2.26 C/doubling as the zero-Hansen-delay climate sensitivity.

        This is a tad higher than my 2.1 C above.

        As Yogi Berra said, when you come to a fork in the road, take it.

        One branch of this fork says BEST is right about the 2.26 figure and I’m wrong.

        The other says that I’m right.

        If so it should be possible to calculate what BEST should have obtained for the land climate sensitivity. Call this x.

        Then 0.61 * x would be the sea CS (climate sensitivity). Weighting these by the 70/30 ratio of sea to land in area, we would expect 0.7 * (0.61 * x) + 0.3 * x for the global climate sensitivity, or 0.727 * x.

        Since I claim 2.1 C/doubling globally, BEST should have obtained 2.1/0.727 = 2.89 C/doubling.

        That would make their 3.1 C/doubling figure high by 0.2 C/doubling.

        Given that Richard Lindzen once seriously suggested 0.5 C/doubling, and some calculations have come up with 8 C/doubling, what’s a mere 0.2 C/doubling between friends? (Berkeley is only 60 miles by car from Stanford.)
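        The arithmetic in this comment, transcribed as a sanity check — every input (the trend slopes, the 3.1 C/doubling land fit, the 70/30 split, the claimed 2.1 C/doubling global figure) comes from the comment itself, with the same intermediate rounding:

        ```python
        # Back-of-envelope arithmetic from the comment above, transcribed.
        land_trend, sea_trend = 0.251, 0.152      # C/decade: CRUTEM3 vs HADSST2
        ratio = round(sea_trend / land_trend, 2)  # 0.61, as rounded in the comment
        best_land_cs = 3.1                        # C/doubling, BEST land fit
        sea_cs = round(ratio * best_land_cs, 1)   # ~1.9 C/doubling naive sea CS
        global_cs = 0.7 * sea_cs + 0.3 * best_land_cs      # ~2.26 C/doubling

        # Working backwards from the claimed 2.1 C/doubling global sensitivity:
        implied_land_cs = 2.1 / (0.7 * ratio + 0.3)        # 2.1 / 0.727 ~ 2.89
        print(round(global_cs, 2), round(implied_land_cs, 2))
        ```

        The 0.2 C/doubling gap between 2.89 and 3.1 is the discrepancy the comment shrugs off at the end.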

      • Brandon Shollenberger

        Vaughan Pratt, your entire comment is based on BEST’s value of 3.1 degrees being for a doubling of CO2. It isn’t. BEST used CO2 as a proxy for all greenhouse gases.* That means you’re comparing a doubling of CO2 to a doubling of all greenhouse gases. That means those results are meaningless.

        *Actually, it’s serving as a proxy for all anthropogenic forcings. That makes things even more complicated.

      • @Brandon Shollenberger: BEST used CO2 as a proxy for all greenhouse gases.*

        While I’m flattered that you think I could be doing something else, I have no idea how I could be so clever as to do such a thing.

        Like BEST, and like everyone else estimating climate sensitivity observationally, I’ve been using CO2 as a proxy for all anthropogenic forcings. If there’s an alternative I have no idea what it could possibly be.

      • Brandon Shollenberger

        Vaughan Pratt:

        While I’m flattered that you think I could be doing something else…

        My apologies. You did use CO2 as a proxy. I guess your results and BEST’s are apples and apples. Neither is a measure of sensitivity to a doubling of CO2, but they are both a measure of the same thing.

      • Nice, Vaughan. When time permits drop me a note and I’ll see what I can do to get you an early release of the ocean stuff. Also funny how people don’t get the use of CO2 as a proxy. Meh.

      • Vaughan Pratt, “Then 0.61 * x would be the sea CS (climate sensitivity), Weighting these by the 70/30 ratio of sea to land in area, we would expect 0.7 * (0.61 * x) + 0.3 * x for the global climate sensitivity, or 0.727 * x.”

        Because of the Antarctic and Greenland, the “land” percentage would be closer to .25 than .3 I would think. I was playing with 0.68 ocean, 0.26 land and 0.06 icebox. That puts me in the range of 0.8 C for CO2eq with ~0.6 to 1.6 amplification. The 0.8 is based on the impact of ~4 Wm-2 on a 334 Wm-2 near-ideal radiant surface that happens to cover only about 68 to 70 percent of the total surface, which makes a good thermal reservoir but not so great a heat sink. :)

      • @BS: Neither is a measure of sensitivity to a doubling of CO2, but they are both a measure of the same thing.

        How do you define “sensitivity to a doubling of CO2?” Presumably holding aerosols constant (tricky in the case of aerosols from jet engines, which increase in proportion to increasing CO2 since it’s very inconvenient to have to scrub jet exhausts). But are you also holding water vapor constant? If so that would be a lot closer to the notion of no-feedback sensitivity, generally accepted to be considerably less.

        I’ve been interpreting “business as usual” to mean not just that CO2 stays on the curve it’s been on but that its concomitants such as other GHGs (especially water vapor) and aerosols do so as well in full detail (same light-to-dark color ratio, low-to-high altitude ratio, etc.). As soon as you relax any of those assumptions you open a whole can of worms as to the meaning of “climate sensitivity.” While it’s perfectly reasonable to open that can of worms, one must be aware of when one is doing so and not speak naively of the climate sensitivity but instead state all your assumptions.

        Furthermore the precise definition of “curve CO2 has been on” is up for grabs. Max Manacker has been insisting for a long time now, recently joined by Greg Goodman, that the (seasonally corrected) Keeling curve follows an exponential. That assumption would give atmospheric CO2 a CAGR of somewhere between 0.4% and 0.5% depending on how you fit it to its evidently Procrustean bed. The mauve curve in these plots is the function 1.00411^(y − 556.9) (i.e. a CAGR of 0.41%) when naively fitted to the endpoints and 1.00452^(y − 588.8) (CAGR 0.45%) when fitted to 1975 and 2005, with R2 respectively 99.3% and 99.0%. The graphs on the right show that this models CO2 as increasing at around 1.7 ppmv/yr lately, about 0.5 ppmv/yr lower than its actual 2.2 ppmv/yr (taken from the green curve), but in 1960 higher by an equal amount.

        My poster used the whole of HadCRUT3 to infer a preindustrial level of 287.4. Taken in conjunction with the Hofmann-Butler-Tans exponential model of anthropogenic CO2 and fitting to 1975 and 2005 gives 287.4 + 1.02518^(y − 1823.6) for an R2 of 99.56%, the red curve at upper left in the above graph. With an annual increase today of 2.5 ppmv (green curve in top right chart), this errs on the high side, being about 0.3 ppmv/yr higher than actual.

        If the goal is just to model the Keeling curve it is hard to see how to improve on the Hofmann et al function based on a preindustrial level of 270, the green curve in these graphs. This gives a CAGR of about 1.89% for the anthropogenic part of atmospheric CO2, with an R2 of about 99.83% (for the seasonally corrected Keeling curve) however fitted. (260 has a very slightly better R2 but a considerably less plausible preindustrial level. Hofmann et al used 280 which is midway between the green and red curves.)

        My take on the Keeling curve is that Hofmann’s formula (which I believe he developed as an AGU poster before Butler and Tans joined him as coauthors on the journal version) is too naive as a model of how anthropogenic CO2 influences the atmosphere. The natural component should not be held constant but should decrease following a suitable law reflecting the details of the whole carbon cycle when our contribution to it is taken into account. This would improve both our understanding of the carbon cycle in general and (from my standpoint) my SAW+AGW model of multidecadal climate as well.
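        Two of the CO2 curves quoted above can be checked numerically. The parameters are copied verbatim from this comment (the naive endpoint-fitted exponential and the Hofmann-style fit with a 287.4 ppmv preindustrial background), so the outputs are only as meaningful as those fits:

        ```python
        # Evaluating two of the CO2 curves quoted in the comment above.
        def naive_exp(y):
            """1.00411^(y - 556.9): pure exponential (CAGR ~0.41%)."""
            return 1.00411 ** (y - 556.9)

        def hofmann_like(y):
            """287.4 + 1.02518^(y - 1823.6): constant natural background plus
            an exponentially growing anthropogenic component."""
            return 287.4 + 1.02518 ** (y - 1823.6)

        for y in (1960, 2010):
            print(y, round(naive_exp(y), 1), round(hofmann_like(y), 1))

        # Annual increase around 2010 by central differencing (the comment
        # quotes roughly 2.5 ppmv/yr for the Hofmann-style curve and a rate
        # about 0.5 ppmv/yr too low for the naive exponential):
        print(round(hofmann_like(2010.5) - hofmann_like(2009.5), 2))
        print(round(naive_exp(2010.5) - naive_exp(2009.5), 2))
        ```

        Both formulas land within a few ppmv of the observed ~390 ppmv in 2010, but their implied growth rates differ in the direction the comment describes.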

      • @cd: Because of the Antarctic and Greenland, the “land” percentage would be closer to .25 than .3 I would think.

        Very interesting point, captd. I would think that would depend on whether your analysis was static or dynamic. Certainly the icecaps cool the atmosphere, presumably even more effectively than the oceans. Dynamically however the question is whether they warm faster, which is what matters when it comes to global warming.

        When well below freezing, as in winter, they warm to at least the same extent as land since the icepack is a great insulator (whence igloos).

        But at 0 C, as in summer, the latent heat of fusion of water kicks in and suddenly they hold the line on temperature even more effectively than the oceanic mixed layer. Getting above 0 C, which involves first melting the ice, requires an amount of heat equivalent to raising water from 10 C to 90 C! (Ice is such a great regulator that people use a mix of ice and distilled water to calibrate thermometers.)

        Perhaps we should split the difference. ;)

      • Brandon Shollenberger

        Steven Mosher:

        Also funny how people don’t get the use of CO2 as a proxy. Meh.

        Seeing as I’m the only one who commented on that issue, you’re presumably referring to me in this backhanded insult. If so, that’s silly as I do get using CO2 as a proxy. In fact, I’ve discussed the matter in some detail to correct people’s misuse of it as a proxy on multiple occasions. Even on this blog. Look at my exchanges with manacker for an example.

        I’ve been interpreting “business as usual” to mean not just that CO2 stays on the curve it’s been on but that its concomitants such as other GHGs (especially water vapor) and aerosols do so as well in full detail

        Seeing as other GHGs and aerosols haven’t followed the same curve as CO2, I don’t know why you’d interpret it that way. The overall anthropogenic forcing may have largely followed the CO2 curve (the uncertainty in aerosol forcings is so large any number of curves could be “right”), but the individual components are known to have diverged.

        While it’s perfectly reasonable to open that can of worms, one must be aware of when one is doing so and not speak naively of the climate sensitivity but instead state all your assumptions.

        Which is why it is important to be clear you’re using CO2 as a proxy. Talk of sensitivity to a doubling usually refers to CO2 itself, and not CO2 as a proxy. Both values can be calculated, but they are not (directly) comparable, and they should not be confused. In the case of the comment I responded to, you should have made it clear you were not referring to sensitivity as it is normally used.

        How do you define “sensitivity to a doubling of CO2?”

        I’d hope you already know my answer. I’d define it the same way the IPCC defines it. That is, a doubling of CO2 will cause a certain change in radiative forcing. That change in forcing will cause a certain amount of warming. There will then additionally be some feedbacks that affect that amount.

      • Vaughan Pratt, “Very interesting point, captd. I would think that would depend on whether your analysis was static or dynamic.” I am still in static though I have attempted a little dynamic.

        In the Arctic, latent heat of fusion does hold the line; in the Antarctic, not so much. So there is at least 10 to 17 Wm-2 of slop that I see no way around at the true surface. The 4 C (334.5 Wm-2 per S-B) and the 334 Joules per gram latent heat of fusion do tend to create a stable point, though, in both the fancy radiant transfer and my old school HVAC stuff :)

        Looking down south, the Antarctic is anti-phase in Tmin as I expect due to radiant forcing changes, with Tmax drifting up as I expect due to OHC, but explaining the shift in diurnal temperature range to everyone’s satisfaction is pretty elusive.

      • @VP: I’ve been interpreting “business as usual” to mean not just that CO2 stays on the curve it’s been on but that its concomitants such as other GHGs (especially water vapor) and aerosols do so as well in full detail

        @BS: Seeing as other GHGs and aerosols haven’t followed the same curve as CO2, I don’t know why you’d interpret it that way.

        Sorry, I wasn’t clear as to my meaning. By “do so as well” I meant that each contributor to AGW stays on the curve it has been on, not the curve CO2 has been on.

        If you’re arguing that forecasting based on a complex situation like past behavior of multiple curves is an imprecise art, I’m with you 100% on that. While it’s easy to extrapolate any given analytic model (merely knowing the exact values of its derivatives at a single instant in time is enough for an exact extrapolation arbitrarily far into the modeled future), the real future is fickle and frequently unfaithful to analytic models of the past.

        @BS: In the case of the comment I responded to, you should have made it clear you were not referring to sensitivity as it is normally used.

        My comment concerned the value of climate sensitivity obtained by the BEST team. I’m sorry if it wasn’t clear I was using the same understanding of the notion as theirs. As you say it would be nonsense to use some other meaning than theirs. I do spout nonsense sometimes, but I hope I managed to avoid doing so on this occasion.

        Incidentally my poster eschews both “equilibrium climate sensitivity” and “transient climate response” in favor of “prevailing climate sensitivity” (in the panel below Figure 3). The panel’s footnote defines this new term “circumstantially” as follows.

        “* Climate sensitivity depends on the prevailing circumstances, which we take here to be what obtained “on average” during 1850-2010. The profound disequilibrium of modern climate makes its circumstances very different from those of the deglaciations of the past million years, in which CO2 changed two orders of magnitude more slowly.”

        There is no guarantee that the prevailing climate sensitivity, PCS, for the 21st century will turn out the same as that for the 20th century. On the other hand I have so far seen no suggestion from any quarter that climate sensitivity however defined is either increasing or decreasing, making past PCS as good a predictor of future PCS as any (which should not be taken to mean that it actually is a good predictor, merely the best we have for the moment).

        Many people seem to be viewing climate sensitivity as a constant of nature like the speed c of light or Newton’s universal gravitation constant G instead of the ill-defined construct it really is. Giving it a value is like saying that Florence Colgate’s beauty is 987 millihelens, or that Helen of Troy’s beauty has been estimated at 1013 milliflorences, bearing in mind that, unlike the standard kilogram in Paris, the former beauty metric is no longer available for comparison.

  10. A fan of *MORE* discourse

    Judith Curry posts: “The primary insight of the [Hansen’s] H2012 remains, in a warming climate we expect to see more warm extremes. However, H2012 did not establish or aim at establishing that the distribution of temperatures has widened. Showing a change in distribution probably requires different statistical tests than those that were applied.”

    Neil Plummer and colleagues have posted a very nice analysis What’s Causing Australia’s Heat Wave? that makes this same point: it’s the (definite!) global-warming trend that’s driving the increasing incidence of extreme temperatures, not the (plausible but still-unverified) increasing variance of weather-related fluctuations.

    Mathematically minded Climate Etc readers may enjoy the following reflection on why increasing variance is thermodynamically plausible. The following reasoning is due originally to Boltzmann, was made explicit by Onsager, and is accessibly discussed in many textbooks (see for example Charles Kittel’s Elementary Statistical Physics (1958), chapter 33, “Thermodynamics of Irreversible Processes and Onsager Reciprocity Relations”). A terrific auxiliary reference is Zia, Redish, and McKay, “Making Sense of the Legendre Transform” (2009).

    Let u(t) be the time-dependent spatial energy density of a system in thermodynamic equilibrium, let s(u) be the entropy-density state function of that system, and assume u is the sole conserved quantity of that system. Then the variance \sigma^2_u of the time-dependent fluctuations in u(t) is related to the entropy function by a simple Boltzmann/Onsager relation:

    \displaystyle \sigma^2_u = \big\langle \big(u(t) - \langle u(t)\rangle_t\big)^2 \big\rangle_t = -\left(\frac{\partial^2 s(u)}{\partial u^2}\right)^{-1}

    For typical fluid-dynamical systems s \propto \ln u, and so for fluid-dynamical systems the energy variance does generically increase with temperature, increasing specifically as \sigma^2_u \propto u^2. Because the global heating observed so far is rather small on an absolute temperature scale (however significant it may be to us living organisms!) this simple model predicts an increase in climate variance that is so small (of order one percent) as to be very difficult to observe.
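    For concreteness, here is the differentiation behind the \sigma^2_u \propto u^2 claim, taking s(u) = c \ln u with c a positive constant and reading the fluctuation relation up to sign (a back-of-envelope check, not a rigorous derivation):

    ```latex
    s(u) = c\ln u
    \quad\Longrightarrow\quad
    \frac{\partial^2 s}{\partial u^2} = -\frac{c}{u^2}
    \quad\Longrightarrow\quad
    \sigma^2_u \;\propto\; \left|\frac{\partial^2 s}{\partial u^2}\right|^{-1} = \frac{u^2}{c},
    \qquad
    \frac{\delta\sigma^2_u}{\sigma^2_u} = 2\,\frac{\delta u}{u}.
    ```

    With u roughly proportional to absolute temperature, a warming of order 1 K against roughly 288 K gives a fractional variance increase of order 2/288, i.e. somewhat under one percent — the figure quoted above.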

    This derivation is non-rigorous for two reasons. The first is the common-sense reason that the earth’s climate-system is not a dynamical system in thermodynamical equilibrium (mainly because the earth is continually radiating the sun’s heat into cold space). Nonetheless, to the extent that entropic ideas apply even approximately to the earth’s climate system, the above Boltzmann/Onsager relations provide reasonable grounds to anticipate that long-term analysis may show increasing climate variance.

    The second reason is more sobering. The magnitude of this thermodynamical effect depends on how close we are (or aren’t) to the lethally hot temperatures of a planet-killing climate-phase transition. If we were to observe substantially increasing climate variance, that would be a very concerning thermodynamical warning flag that our planet is approaching a devastating climate-change tipping point!

    That is why variance is well-worth careful study, and the entire Berkeley Earth community is to be congratulated for their foresighted and careful attention to this issue. Thank you, Berkeley Earth Surface Temperature (BEST) Group!

    So who says thermodynamical reasoning has to be complicated, eh?

    • Thanks for that reference Fan.

    • What part of (Tmax+Tmin)/2 isn’t temperature and that the temperature of the system isn’t a measure of the heat in that system?
      Why do you insist that (Tmax+Tmin)/2 is directly proportional to energy density? It isn’t.
      Moreover, you cannot take a description of equilibrium thermodynamics and apply it to an open thermodynamic system; the complete descriptions of an electrically heated boiler and of the contents of a thermos flask are quite different.
      Stop talking crap; your points are very, very stupid. Reflect on Judy’s post on etiquette and stop trolling.

      • @DocMartyn: Moreover, you cannot take a description of equilibrium thermodynamics and apply it to an open thermodynamic system; the complete descriptions of an electrically heated boiler and of the contents of a thermos flask are quite different. Stop talking crap; your points are very, very stupid. Reflect on Judy’s post on etiquette and stop trolling.

        By this criterion, had it turned out Newton was in competition with Methuselah for longevity and was posting here, his corpuscular theory of light would be deemed “crap” given Thomas Young’s compelling double slit experiment in 1802. Newton would be judged a troll in violation of netiquette and Einstein (whose only Nobel prize was for the photoelectric effect he discovered in 1905) would have had to come to his rescue. Heisenberg and Schroedinger, whose work was done in 1925-6, would in turn have shot Einstein down for not believing in pure probability.

        On my to-do list is a defense of Einstein against Heisenberg and Schroedinger and a proof that quantum computing is not what it’s cracked up to be. (Quantum security is fine however.) That in turn will be shot down by someone else, though not immediately. God only knows (by definition) where this is all headed.

    • Fan

      Increased variance with warming may well have been “plausible”, but as the two memos showed, it just didn’t work out that way in real life (in fact, it seems like variance may have decreased with warming).

      Many “plausible” ideas turn out not to be real.

      It is very good that Steven Mosher has posted this interesting new information, which clears up some earlier misconceptions.

      Max

      • Plausible isn’t a very high standard. A second gunman in Dallas is plausible. Plausible gets a lot of people into trouble.

      • Steven Mosher

        When I looked at the temporal evolution of the standard deviation of Tmax and Tmin, the answer was decreased variability.
        The problem there is perhaps related to changing spatial coverage, that is, changes in the number of coastal stations. A station by the coast has a markedly different SD than one 50 km inland. So I scratched my head and went to look at other stuff. I should probably go back and have another go at the data.
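        The diagnostic described here — tracking the standard deviation of Tmax/Tmin through time — can be sketched as a windowed statistic. This is a toy illustration on synthetic anomalies; the coastal-versus-inland coverage confound mentioned above is deliberately not modeled.

        ```python
        # Sketch of the diagnostic described above: standard deviation of a
        # temperature-anomaly series computed in moving windows, to see whether
        # variability trends up or down over time. Data are synthetic, with a
        # true spread that shrinks slightly over the record.
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1900, 2011)
        sd = np.linspace(1.2, 1.0, len(years))   # planted, slowly shrinking spread
        anoms = rng.normal(0.0, sd)

        def windowed_std(x, width=30):
            """Sample standard deviation in trailing windows of `width` points."""
            return np.array([x[i - width:i].std(ddof=1)
                             for i in range(width, len(x) + 1)])

        ws = windowed_std(anoms)
        print(f"first window sd: {ws[0]:.2f}, last window sd: {ws[-1]:.2f}")
        ```

        With 30-year windows, sampling noise in each window’s standard deviation is of the same order as the planted trend, which is exactly why the head-scratching above is warranted.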

    • Plummer et al state, accurately enough, that the higher temperatures in early January were the result of a moderately delayed (two-week) monsoon season, allowing the huge but quite stationary air bubble over the central Australian desert region to continue heating until shape-shifted by normal monsoonal influences, albeit a little late in the season.

      After that statement, there is no further discussion of any possible or likely causes of the two-week monsoonal delay, just the usual “planet is slowly frying” attributions.

      Until a reasonable explanation of that moderate monsoonal delay is connected to CO2 levels, I’ll remain sceptical.

      Winter months in Australia can and have had a mirror image. A huge cold air bubble sits over the Australian central desert, essentially unmoving and feeding cold air off its edges, on occasion (e.g. late 1968-69) for over three months, giving over 90 successive days of cool, cloudless sunshine. Australian winters can be absolutely marvellous times to be alive. I have not seen anyone blaming CO2 levels for them, even though they do seem a mirror image of summer.

  11. From the results paper ‘Discussion.’ cited in the article.

    ‘Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present.’

    Well, that’s nice to know; we can cancel all the research and save a bit of money by laying off all those climate scientists.
    Trouble is, I was taught to question and, yes, be skeptical about such definite claims. So sorry, with the paucity of reliable data I’ll just give it a few more years of gathering decent data. Sorry if that might offend the modellers out there, but garbage in still means garbage out, no matter how much you polish it…

    • This is the crux of the debate, isn’t it. They say that simple ideas on climate sensitivity due to GHGs that go back to Arrhenius a century ago and the Charney sensitivity of 1979 (just refined a little recently) explain the century-scale warming without modification. People disputing that have to show why the simple ideas are wrong and what else happened instead. Two tough barriers, and a case for Occam’s Razor if ever there was one.

      • Steven Mosher

        Yup. That particular part of the paper has irritated folks on all sides. Climate engineers (folks who want to understand every last detail) argue it’s too simple and overlooks all sorts of nice details. CO2 skeptics, well, they mostly sling mud and don’t address the fundamental claim. Gimme CO2 and volcanoes and I can explain the rise in temperature. Of course it could be the ln(leprechauns) that is really doing the work, but like unicorns, leprechauns are known to avoid direct observation.

      • Jim D

        I think the “crux of the argument” as beesaman has described it, is something else than you have concluded.

        The cited paper concludes:

        ‘Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present.’

        This is a classic “argument from ignorance” (i.e., “we can only explain X if we assume Y” and “no need to do more work on other explanations; the science is settled”), which of course is invalid in a situation where there are still many unknowns.

        If we truly knew everything there is to know about why our climate behaves the way it does, we could make such assumptions.

        But we obviously don’t, so we can’t.

        Just one example:

        The CLOUD experiment at CERN has confirmed that the cosmic ray cloud nucleation mechanism works under controlled conditions when certain natural aerosols are present. It has, however, not been able so far to confirm that this mechanism will work in our climate system, nor what the magnitude of its impact could be.

        Let’s assume further reproducible experimentation confirms that the mechanism does, indeed, work in a controlled environment simulating our climate system and that it is sufficiently strong to explain essentially all of the warming we have seen over the 20thC.

        This would be new empirical scientific evidence, which we do not yet have today, which would completely change our conclusions on the relative impact of natural versus anthropogenic (greenhouse) forcing.

        I am not predicting that this will happen in that way, of course, but it cannot be excluded.

        And since we do NOT have empirical evidence to support the model-derived magnitude of GH forcing from human GHGs, we cannot rule this out yet.

        Max

      • manacker, it is a case of when the simplest explanation explains things, why look further, unless you can first prove that the simple explanation doesn’t work which becomes increasingly difficult when datasets like BEST come along.

      • The problem with Manackar’s argument is actually deeper. If you consider a number of drivers, x,y and find that they can explain your observation z based on good science and someone comes along and says, well w is more important, they not only have to demonstrate how w affects the system, but also that you got x and y wrong.

        WRT cosmic rays, Scafetta cycles, Landscheidt astrology, whatever, at best you have correlations that are questionable. With CO2 and volcanoes you have things that are comparatively well understood.

      • Eli Rabett

        Yes. The cosmic ray / cloud connection has not yet been demonstrated experimentally in a controlled environment that would replicate our atmosphere. But the CLOUD folks at CERN are working on it.

        Let’s wait and see what they come up with before we write it off as a “correlation that is questionable”.

        At the same time, let’s continue trying to find empirical evidence to support the CAGW hypothesis (2xCO2 ECS ~ 3C).

        We’re not there yet on either hypothesis, as I am sure you will agree.

        Max

      • Steven Mosher

        Manacker, it’s not an argument from ignorance.
        The structure goes like so:

        1. Given: GHGs cause warming.
        2. Given: Volcanoes cause cooling.

        If you take these two givens and fit the data, your residual looks like the AMO, with natural variability being about 0.17 C per decade.

        Of course it could be something else. It could be that ln(unicorns) works better to explain the data.

        The point is you don’t need to appeal to anything else to explain the data. Take an example from evolution:
        Evolution explains what we see in terms of life forms on the planet.
        Of course it doesn’t rule out a sneaky deity that really controls everything, but explanatory parsimony suggests that adding entities that are not necessary is, well, not necessary.

        If you want to object, then deny #1 and be a skydragon.
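The two-givens fit described above can be caricatured in a few lines. This is a sketch with purely synthetic, illustrative numbers (not the actual BEST data, forcings, or code): build a toy temperature series from ln(CO2), an impulsive volcanic index, and an AMO-like oscillation, fit only the first two by least squares, and check that the multidecadal wiggle survives in the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2011)

# Synthetic, illustrative inputs (not real forcing data):
co2 = 285.0 * np.exp(0.004 * (years - 1850))               # smooth CO2 ramp, ppm
volc = np.zeros(years.size)
volc[[33, 52, 113, 141]] = -1.0                            # a few impulsive eruptions
amo_like = 0.15 * np.sin(2 * np.pi * (years - 1850) / 65)  # ~65-year oscillation

# "Observed" temperature = response to ln(CO2) + volcanoes + AMO-like wiggle + noise
temp = (3.0 * np.log(co2 / 285.0) + 0.3 * volc + amo_like
        + 0.05 * rng.standard_normal(years.size))

# Fit only the two "givens": ln(CO2) and the volcanic index
X = np.column_stack([np.ones(years.size), np.log(co2 / 285.0), volc])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
residual = temp - X @ coef

# The residual retains the multidecadal (AMO-like) component
print("intercept, ln(CO2) coeff, volcanic coeff:", coef)
print("residual/AMO-like correlation:", np.corrcoef(residual, amo_like)[0, 1])
```

Nothing here proves attribution, of course; it only illustrates the structure of the argument: two simple regressors absorb the trend and the spikes, and what is left over looks like the oscillation that was never fitted.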

      • Volcanics are still poorly understood in terms of climate, i.e., there is no uniqueness theorem; the test is in the data, e.g., Krakatau and Tarawera seem to have made little difference in the BEST data, yet the former had a forcing twice the size of Pinatubo.

        http://www.woodfortrees.org/plot/best/from:1870/to:1900

        It seems BEST is worst, the test being the singularities.

      • Steven Mosher

        Sure GHGs cause warming (how much is not clear) and volcanoes (usually) cause cooling.

        If you take these two givens and fit the data your residual looks like AMO with natural variability being about .17C per decade.

        That’s all very nice Steven, but what is not nice is the statement:

        there is no need to assume other sources of long-term variation are present

        because this sentence could just as well be written (with “need” replaced by “reason”)

        there is no REASON to assume other sources of long-term variation are NOT present

        Point: the science is NOT settled.

        Has nothing to do with “unicorns”, Steven – just “science”, that’s all.

        Max

      • Steven Mosher

        Don’t come with the (overworked) “evolution” analogy.

        Evolution has been validated by empirical data from actual observations.

        The CAGW premise (=high 2xCO2 ECS) has not.

        The difference is very simple.

        One is a corroborated hypothesis, which has become an accepted theory and reliable scientific knowledge; the other isn’t there yet – it is still an uncorroborated hypothesis.

        Max

      • Steven Mosher

        Not to belabor a point, but speaking of “argument from ignorance”:

        …volcanism, combined with a simple proxy for anthropogenic effects (logarithm of the CO2 concentration), reproduces much of the variation in the land surface temperature record; the fit is not improved by the addition of a solar forcing term. Thus, for this very simple model, solar forcing does not appear to contribute to the observed global warming of the past 250 years; the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy. The residual variations include interannual and multi-decadal variability very similar to that of the Atlantic Multidecadal Oscillation (AMO).

        Yikes!

        I can “make it fit” when I ASS-U-ME there are only two forcing factors (volcanoes and CO2) plus variability from AMO.

        Duh!

        I could also “make it fit” by reducing CO2 impact and increasing solar forcing.

        What the hell, I could “make it fit” by simply “making it fit” (check Vaughan Pratt’s “millikelvin” model).

        This is what modeling is all about?

        Heaven help us!

        Max

      • manacker, so do you also think the AMO-like signal left over is bogus?

      • Jim D

        “Making it fit” is “bogus”.

        That there was a residual that looked like AMO after “making it fit” was a nice coincidence.

        Max

      • You may apply Ockham’s Razor to attribution, but the chaos and the multiplicity of causes are carborundum.
        ===========

      • manacker, so are you accepting AMO but not volcanoes as valid contributions, or both AMO and volcanoes but not the ramping-up part that is left over, which is their third component?

      • Steven Mosher

        Manacker, I make the analogy to evolution merely to draw attention to the structure of the argument and the case for parsimony. Parsimony is not an epistemic ground but a pragmatic one. Of course you need not be swayed by appeals to parsimony.

      • @manacker: Evolution has been validated by empirical data from actual observations. The CAGW premise (=high 2xCO2 ECS) has not. The difference is very simple. One is a corroborated hypothesis, which has become an accepted theory and reliable scientific knowledge; the other isn’t there yet – it is still an uncorroborated hypothesis.

        Only in your mind, Max, only in your mind. Switzerland would seem to have translated you into a vacuum.

        Speaking as a scientist accosted from time to time at cocktail parties on various controversial topics like second-hand smoke, which races or species were made in God’s image (maybe God most closely resembles a chimp), evolution, global warming, etc., I have a much harder time with evolution than global warming. How could a velociraptor evolve into a bird, for example? What “empirical data from actual observations” are you talking about? We simply don’t have such a thing! If you believe otherwise, I have a very valuable bridge I can let you have for pennies on the dollar.

        Compared to global warming, evolution as a theory is totally nuts. If you can’t see that then you’re just blindly accepting what other people tell you about evolution.

      • Don’t tell Pratt about the fossils. He will have nightmares.

      • @manacker: At the same time, let’s continue trying to find empirical evidence to support the CAGW hypothesis (2xCO2 ECS ~ 3C).

        Hansen et al’s 1985 paper made this whole climate sensitivity thing complete rubbish. Here we are 28 years later and people still talk about climate sensitivity as though it were a real thing.

        If talking about something as undefined as climate sensitivity is not a violation of netiquette, what is?

      • Matthew R Marler

        Steven Mosher: Parsimony is not an epistemic ground but a pragmatic one.

        Well said.

        “Occam’s razor” has also been called “Occam’s lobotomy” and “Occam’s Guillotine.” A sticking point has been the definition of “beyond necessity” — for example, is it already “beyond necessity” if numerous correlations and past oscillations have still not been accounted for? Another sticking point has been “entities” — is it an “entity” to assume that something not known well can not possibly matter, something like variations in solar output?

        As others have said, thank you for the work on BEST. I really liked this: The approach is straightforward. A 1000 year GCM simulation is used as ground truth. Since this data exists for every place and time we can calculate the “true” average at any given time. This “ground truth” is then sampled by using the GHCN locations as a filter. The experiment is repeated using sub samples of the 1000 year run. The results show that if you use a limited spatial sample ( GHCN locations ) with temporal gaps ( not every station is complete ) that the Berkeley method has the lowest prediction error.

      • Thanks, Matthew. Yes, the GCM data for testing the methods is elegant and simple. I’ll probably add Nick Stokes’s method to the pile, which I think should get very close to Berkeley accuracy with computation times that are sub one minute. I use it for massive sensitivity testing. Basically a least-squares approach.
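The validation experiment quoted above (a GCM run as ground truth, sampled at station locations) can be sketched as a toy stand-in. Everything below is synthetic and illustrative: a made-up gridded field instead of a GCM run, a random subset of cells instead of the GHCN locations, and a naive unweighted station mean instead of the Berkeley method, purely to show how prediction error against known ground truth is measured.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "ground truth": a field on a 36x72 lat-lon grid over 100 time steps,
# with a smooth spatial pattern, a slow trend, and weather noise.
nt, nlat, nlon = 100, 36, 72
lat = np.linspace(-87.5, 87.5, nlat)
pattern = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
field = (pattern[None, :, :]
         + 0.01 * np.arange(nt)[:, None, None]          # slow trend
         + 0.5 * rng.standard_normal((nt, nlat, nlon)))  # weather noise

# Area weights (cells shrink toward the poles)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
w /= w.sum()

true_mean = (field * w).sum(axis=(1, 2))  # ground truth, known everywhere

# "Stations": a sparse random subset of grid cells, standing in for GHCN sites
n_stations = 200
idx = rng.choice(nlat * nlon, size=n_stations, replace=False)
station_series = field.reshape(nt, -1)[:, idx]
est_mean = station_series.mean(axis=1)    # naive unweighted estimate

rmse = np.sqrt(np.mean((est_mean - true_mean) ** 2))
print(f"prediction RMSE vs ground truth: {rmse:.3f}")
```

In this setup the naive estimator carries a coverage bias (it over-samples high-latitude pattern values relative to the area-weighted truth), which is exactly the kind of error the BEST comparison quantifies for each averaging method.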

      • Max,

        You do know that we can measure cosmic rays, and those measurements tell us how many charged particles they can form in the atmosphere?
        And we can measure the amount of charged particles in the atmosphere and compare the two.
        And the theory is that a drop in the amount of cosmic rays causes less clouds which causes warming, right?
        We measure them by the amount of charged particles they form.

        How much warming would they cause by decreasing to 0?

        Or come by one of the labs I work at in the US, I can make some cosmic rays for you!
        And antimatter.

        none.

      • Jim D | January 20, 2013 at 1:20 pm | Reply
        This is the crux of the debate, isn’t it. They say that simple ideas on climate sensitivity due to GHGs that go back to Arrhenius a century ago..

        It’s an answer from gobbledegook, the ignorance not of the data but of the scientists who continue to use unproven premises. Arrhenius didn’t have the faintest idea of what Fourier really said and imagined the atmosphere as being what Fourier said it wasn’t:

        The “Greenhouse Effect” was originally defined around the hypothesis that visible light penetrating the atmosphere is converted to heat on absorption and emitted as infrared, which is subsequently trapped by the opacity of the atmosphere to infrared. In Arrhenius (1896, p. 237) we read:

        “Fourier maintained that the atmosphere acts like the glass of a hothouse, because it lets through the light rays of the sun but retains the dark rays from the ground.”

        This quote from Arrhenius establishes the fact that the “Greenhouse Effect”, far from being a misnomer, is so called because it was originally based on the assumption that an atmosphere and the glass of a greenhouse are the same in their workings. Interestingly, Fourier doesn’t even mention hothouses or greenhouses, and actually stated that in order for the atmosphere to be anything like the glass of a hotbox, such as the experimental apparatus of de Saussure (1779), the air would have to solidify while conserving its optical properties (Fourier, 1827, p. 586; Fourier, 1824, translated by Burgess, 1837, pp. 11-12).

        So your arguments from authority, Arrhenius, are worthless, you have not established that there is any such thing as this “Greenhouse Effect” concept you claim exists.

        And, this is regardless whether you go with the “classic CAGW Greenhouse Effect” which has clouds and carbon dioxide blocking longwave infrared from the Sun then magically absorbing longwave infrared from the Earth, or the variation that the Sun doesn’t produce any longwave infrared, heat radiation, which further idiocy supposedly answers the “classic” gobbledegook.

        Your, generic, other argument from authority waving your hands in Fourier’s direction is also gobbledegook – Fourier said nothing about radiated heat apart from heat flow – do read further about this on Timothy Casey’s “The Shattered Greenhouse” from which I’ve quoted: http://greenhouse.geologist-1011.net/

      • Steven Mosher | January 20, 2013 at 5:22 pm |
        manaker its not an argument from ignorance.
        The structure goes like so.

        1. Given: Gghs cause warming
        2. Given: Volcanoes cause cooling.

        If you take these two givens and fit the data your residual looks like AMO with natural variability being about .17C per decade.

        Of course it could be something else. it could be ln(unicorns) works better to explain the data.

        The point is you dont need to appeal to anything else to explain the the data. Take an example from evolution.
        Evolution explains what we see in terms of life forms on the planet.
        Of course it doesnt rule out a sneaky deity that really controls everything, but explanatory parsimony suggests that adding entities that are not necessary, is well, not necessary.

        If you want to object then deny #1 and be a skydragon

        Now, you can’t even get that right; you’re the ones believing in skydragons…

        “1. Given: Gghs cause warming”

        The greenhouse gases “causing warming” are predominately nitrogen and oxygen, the heavy voluminous ocean of real greenhouse gases, Air, which is our atmosphere. It is these which act as a blanket around the Earth slowing the rate of cooling and so avoiding the extreme lows of heat loss from the surface as seen for example on the Moon without any atmosphere.

        AGW/CAGW have misattributed this effect to their own fiction, “The Greenhouse Effect greenhouse gases”, and have done so on the science fraud that the minus 18°C figure applies only without them, when a) this temperature figure is for the Earth without any atmosphere at all, without nitrogen and oxygen, and b) the temperature without the “Greenhouse Effect greenhouse gases” would be around 67°C.

        Taking out the Water Cycle but with the rest of the atmosphere of mainly nitrogen and oxygen in place the temp of the Earth would be 67°C.

        So, let’s get this straight, your “AGW/CAGW Greenhouse Effect greenhouse gases and their effects” are based on an out and out lie.

        This garbage in explains the garbage out we get from you, generic AGW/CAGW “climate scientists”. You’ve created a fictional sky dragon, and some of us in the real world can see how you’ve manipulated physics to come up with your fictional claims.

        It is simply no longer possible for me to take any of you seriously as scientists who believe in such a silly fictional skydragon at complete odds, and fraudulently so, with the real physical world around us – less of the sarcasm from you would be in order..

        “2. Given: Volcanoes cause cooling.”

        Of interest here:
        “‘Scientists are convinced these plumes contain so many cooling sulfate particles that they may be masking half of the effect of global warming,’ noted the July 20 Wall Street Journal.

        Assumption Was Wrong

        However, a team of researchers from NASA and the University of California at San Diego reported in the August 2 issue of the British science journal Nature that they sent instruments into “brown clouds” of aerosols over Asia to measure their effect on temperature. To their surprise, the researchers discovered the common assumption that aerosols lower temperatures was wrong.

        Instead, aerosols were found to substantially amplify the Earth’s greenhouse effect.”
        http://news.heartland.org/newspaper-article/new-aerosol-study-refutes-global-warming-theory

    • The discussion in this thread has been particularly valuable. I am sorry to say this, but it seems clear (and very little is clear in the GW area) that the warmists are in a bind. They must make the case that the probability that there are other relevant forcings beside CO2 and methane (meaning, forcings that would contribute enough) is so low that the science is “settled”. But that case is far from strong enough, and the holes are too big. Cosmic radiation cannot be dismissed right now. It is scientifically reasonable that variations in radiation can affect the earth’s temperature, and there is no credible argument to the contrary. Further work may or may not show that the effect of radiation is negligible. But to discount it because it is not well enough understood is a form of scientific head-in-the-sand. And this is only one of many areas in which the warmists must overstate the degree of confidence one can have based on known science.

      One of the things that bothers me the most is the violation of the dictum that “correlation does not imply causation.” As good as the BEST work may be from a statistical and data analysis perspective, it completely ignores any arguments pro or con the mechanisms that may affect the earth’s temperature. For me, it is profoundly unscientific to declare that because their models correlate well with the concentration of greenhouse gasses in the atmosphere, anything is “proven”.

      Occam’s razor is not relevant here. It is a heuristic, meaning that it is sometimes right and sometimes not. You can tell from my handle that it is one of my favorite metaphors, but only in the right context. It has to do with complexity, and with when a less complex answer might be better than a more complex one. But arguing complexity in the GW case is a red herring. Is cosmic flux a more complex explanation than CO2 and other man-generated greenhouse gas injections into the atmosphere? I think not, nor have I seen any good argument to that effect. It is rather the case that greenhouse gasses were deemed to be the culprit three decades ago, and very little effort has gone into evaluating proposed alternatives, or looking for other ones. Don’t blame Occam’s razor for three decades of betting on one horse to the relative exclusion of others. In this regard, I am particularly concerned that we have no credible ideas about the large variations in the earth’s temperature over, say, the last 100,000 years that would suggest why what we now observe is not driven by the same forces that existed before we started burning large amounts of coal and oil a century and a half ago.

      • Occam37,
        Very neat summary: “In this regard, I am particularly concerned that we have no credible ideas about the large variations in the earth’s temperature over, say, the last 100,000 years that would suggest why what we now observe is not driven by the same forces that existed before we started burning large amounts of coal and oil a century and a half ago.” This latter missing point of explanation remains a starting point in understanding the climate puzzle.

        Steve, thank you for the work on BEST and the fairness of your explanations. You are mellowing! BEST is a step in the right direction.

        Humility is always appropriate.
        GarryD

      • The thing is, the skeptics who look to cosmic rays to explain climate, but who don’t understand cosmic rays, look very foolish to those of us who do understand them.

        Simple enough, really: just count the cosmic rays, calculate the total number of ions they could possibly produce, and compare that number to the number of charged particles already present in the atmosphere. If the two numbers are close, you may have a point; but since they are not, it is time to reject the cosmic-rays-affect-climate theory.
        If you think I am hand waving, I suggest swinging a meter and doing some math.

  12. The Skeptical Warmist

    Regarding the Berkeley temperature series, I would love to see added to this “simple fit” graph from Berkeley:

    http://berkeleyearth.org/pdf/decadal-with-forcing.pdf

    An overlay of PDO + AMO fluctuations since 1850 or so. The result would be quite interesting. Anyone up to the task?

  13. A fan of *MORE* discourse

    DocMartyn posts: “Stop talking crap your points [scientific references] are very, very stupid.”

    DocMartyn, it is my pleasure to further illuminate for Climate Etc readers in general, and for you in particular, the crucial role of fluctuations in thermodynamics, particularly in regard to the crucial question of whether fluctuations might provide advance warning of CO2-driven bistable phase-changes in earth’s climate system.

    A good starting point is the standard literature on the thermodynamic theory of phase transitions, and in particular the phenomenon of critical opalescence, which provides vivid experimental demonstrations of increasing fluctuations near phase transitions.

    This work further illuminates BEST’s vital role in studying fluctuations … there’s much more to climate-change science than just statistical analysis of data! Because if we start to see climatological “critical opalescence”, in the form of increasing fluctuations in climate measures, then that is very substantial scientific cause for concern, eh DocMartyn?


    • Keep trying to inform them, Fan. “Skeptics” tend to dismiss actual published papers with a wave of the hand, but they are intent on reading blog articles for their information because those are the tabloids of science information. I will add another thought here regarding this and the connection to policy-making. Surely when things like paleoclimate show evidence for GHG effects being important, it is just due diligence for skeptics not to ignore it when making their arguments to the politicians and public, otherwise they may give the appearance of being either (a) selective with their facts, (b) ignorant of relevant science, or (c) just lacking scientific curiosity. When paleoclimate studies are converging on the fact that 50 million years ago CO2 levels exceeded 1000 ppm and ocean temperature were more than 10 C warmer, they should think and wonder why, read those papers, just out of scientific curiosity, not close their minds off.

      • Jim,

        You appear not to like sceptics, perhaps understandably if you’ve gotten yourself into a blue funk about the world coming to an end. But 50 million years ago the world’s climate was, according to Stanley, extremely mild, with temperatures at the equator being about the same as they are today and the poles being at the temperature of the Pacific Northwest. Having said that, it was a different world, with India still to collide with Asia and the continents in general being in different places than today, so the resultant climates cannot be compared. It was a different world.

        Learn to love your opponents, if you’re wrong then it’s easier to apologise. And you’re wrong.

        g

      • A fan of *MORE* discourse

        Geronimo replies with: “[a blend of cherry-picking, willful ignorance, and Panglossian optimism]”

        Geronimo, please be aware that rational climate-change concerns are not assuaged by arguments based upon cherry-picking, willful ignorance, and Panglossian optimism … no matter how confidently skeptics assert these arguments, or how often they are repeated, or what cherished ideological authorities are cited!


      • geronimo, who is Stanley, and why do you quote this as an authority? Do you know, or does he know, that 50 million years ago was the Eocene epoch, which, if it is known for one thing, is known for being very warm?

      • Jim, Stanley is the textbook on the Holocene.

      • Is there a textbook on the Eocene?

      • Stanley would appear to have been superseded by the Zachos curve.

        The Eocene Optimum was hot.

      • On all the paleotemp reconstructions on Wikipedia I see 6 C at the peak, but I expect the error bars are rather large. And how do you know the CO2 concentration? The last source I checked gave a ±50% error bar. Hargreaves and Annan just published a paper about the Last Glacial Maximum that estimated temperatures to be substantially less cold than Hansen and others found, and claimed at least that theirs was more reliable. This new estimate of course lowered the climate sensitivity estimate by a factor of 2. Surely it’s best to look at work like Nic Lewis’s recent rework using IPCC AR5 aerosol forcings, or its predecessors.

      • Beerling and Royer (2011, Nature Geoscience commentary), available from Beerling’s website, is an up-to-date look at CO2 and temperature for the Eocene. Hansen’s “Target atmospheric CO2” 2008 paper is also a multi-expert review, easily found with a Google search.

    • Matthew R Marler

      A fan of *MORE* discourse: This work further illuminates BEST’s vital role in studying fluctuations … there’s much more to climate-change science than just statistical analysis of data!

      I personally would like to see a much more thorough account of all the actual heat and energy transfers through the climate system than what is available now, and a clear account of how CO2 changes those transfer processes. I share your earlier skepticism regarding the thermodynamic equilibrium approximation to the Earth climate system, and I am skeptical that a computed change in the “equilibrium temperature” has a relation to any change that might actually occur in the climate. It would make a great deal of difference whether there was more of a change in the temperature of the upper troposphere than at the surface and lower 100 feet; all kinds of changes are within the margin of error of the equilibrium approximation.

      But what do you mean by “just statistical analysis of data”? That is plenty hard, especially when “statistical analysis” includes vector autoregressive modeling and fitting of high dimensional dynamical models to the data.

  14. Steve, I plotted BEST vs. HadCRUT4 global average and it looks like BEST is noisier as a time series. Is this related to land-only vs. global?

    • Steven Mosher

      Yes, the variability of air temps is greater than the variability of SST.
      The range is wider as well, and diurnal variation for SST is lower.

  15. Brandon Shollenberger

    From the paper:

    This empirical approach implicitly assumes that the spatial relationships between different climate regions remain largely unchanged, and it underestimates the uncertainties if that assumption is false.

    As I’ve pointed out multiple times in the last year or so, BEST’s results don’t fit this assumption. The correlation structure of land temperature changes in BEST’s data over time. Either the assumption must be wrong or the BEST methodology is introducing biases with its spatial weighting.

    Has anyone ever addressed this concern?

    • Brandon Shollenberger-

      The correlation structure of land temperature changes in BEST’s data over time.

      Your assertion* above has been a nagging thought here too. To say the correlation structure is unchanging over time would seem to be inconsistent with expectations of how the climate (and weather) might change with warming, e.g., more extremes, shifting of precipitation patterns, etc. If the climate is changing and has changed over the last few decades, with demonstrable effects at the scale of the representative correlation length (100s to 1000s of kilometers), would it not be fortuitous if the correlation structure (a statistical concept) were unchanged? Certainly there are suggestive regions (the American Midwest?) where sufficient data exist and the idea can be tested. This can be examined with the data available.

      The impact of the assumption, however, may not be too great*** in the BEST calculations, as local kriging estimates can be relatively insensitive to the correlation structure (or semi-variogram) in comparison to kriging’s local error estimates, and BEST does not report the latter.** (Commercial contouring packages, e.g., SURFER™, often just employ a fitted linear variogram as the default, probably assuming most people will not be interested in the local error estimates.)

      * I don’t recall your specific comments. Did you already go back and computationally look at the time dependence, or is your thought, like mine, more intuitive?
      ** In the context of kriging this almost seems like throwing the baby out with the bathwater, since BEST then turns around and tackles uncertainty in an ad hoc manner. Very curious… at a minimum one would like to see the kriging error (maps) vis-à-vis additional data needs.
      *** I have no idea. That is why such loose threads can be frustrating, compounded by premature PR efforts.

      Oh,well.

      [JC-It’s like getting out of Afghanistan.]

      • Brandon Shollenberger

        mwgrant:

        To say the correlation structure is unchanging of time would seem to be inconsistent with expectations of how the climate (and weather) might change with warming, e.g., more extremes, shifting of precipitation patterns, etc.

        Indeed. At the very least, some areas are expected to warm more quickly than others. One would expect that to change the correlation structure BEST’s assumption says is unchanging.

        I don’t recall your specific comments. Did you already go back and computationally look at the time dependence or is your thought present more like mine–intuitive.

        I can find my earlier comments on the topic if you’d like, but what I did was a very simple test. I checked how well different regions correlate to each other (and to the globe) in different periods of time. It’s easy to do since BEST’s website has data for different regions available directly from it.
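[The windowed-correlation check described here can be sketched in a few lines. This is a hypothetical illustration: the two series below are synthetic stand-ins for a pair of BEST regional anomaly records, built from a shared signal plus independent noise; only the windowing-and-correlation procedure is taken from the comment above.]

```python
import numpy as np

def windowed_correlation(a, b, window):
    """Pearson correlation of two aligned monthly series over
    consecutive non-overlapping windows of `window` points."""
    n = min(len(a), len(b)) // window
    return [float(np.corrcoef(a[i*window:(i+1)*window],
                              b[i*window:(i+1)*window])[0, 1])
            for i in range(n)]

# Synthetic stand-ins for two regional anomaly series (deg C);
# real inputs would come from the BEST regional data files.
rng = np.random.default_rng(0)
common = rng.normal(0, 0.5, 1200)          # shared large-scale signal
eur = common + rng.normal(0, 0.3, 1200)    # "Europe"-like series
nam = common + rng.normal(0, 0.3, 1200)    # "North America"-like series

# 120 months = the ten-year windows mentioned above.
corrs = windowed_correlation(eur, nam, 120)
print([round(c, 2) for c in corrs])
```

If the correlation structure were truly time-independent, the window-to-window variation in `corrs` should look like pure sampling noise; systematic drift would be the red flag.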

        I have no idea. That is why such loose threads can be frustrating, and compounded by premature PR efforts.

        Likewise. I’ve never formed an opinion on how important this issue is. I’ve just been bothered by the fact it hasn’t been addressed. Some time back Mosher responded to me by basically hand-waving it away. Zeke responded to me in a more meaningful manner, but he didn’t get back to me like he said he would (I assume he forgot).

        I’d think people would be worried if a fundamental assumption in their error calculations was called into question. I certainly wouldn’t think they’d continue to rely on it without doing any checking.

    • Brandon Shollenberger-

      …I’ve just been bothered by the fact it hasn’t been addressed. Some time back ….

      I’d think people would be worried if a fundamental assumption in their error calculations was called into question. I certainly wouldn’t think they’d continue to rely on it without doing any checking.

      That really just about sums it up.

      I appreciate your answering in detail–likely time dependence of the correlation function has bothered me for a while and I have been surprised that the topic had not been broached. (I obviously had missed your comments–but missing comments is easy to do.) I was surprised when I first read Rohde’s papers (2011 then 2012).

      Before looking at the BEST papers, based on experience with geostatistics I anticipated a likely effort with kriging for each year, or for each year from a representative set of years, e.g., 1940, 1950, etc.–a lot of computation. I further anticipated that kriging error maps would be part and parcel of the routine output. Thus armed one could make comments on the time dependence of the correlation function/semi-variogram, changes in error magnitude and distribution over space and time, etc. All of this is well established ‘science’ and using parts of it would put it in a familiar form, easing review/acceptance. The paper and an approach of estimating error outside of the kriging thus strikes me as idiosyncratic, but that reflects my bias, i.e., is not necessarily a flaw. Still I am disappointed that those topics to date have not been developed in BEST documentation and that no expert reviews have yet surfaced in public or been posted at the BEST site on the BEST kriging.

      Thanks for the briefing on how you arrived at your assertion. (I assume they are here or at Lucia’s and can track them down.) Your back-of-the-envelope approach seems quite reasonable (and informative). I had toyed with a stripped down version of what I describe in the preceding paragraph, but got side-tracked with a wrinkle with the data as posted at BEST–not all of the posted material is QAed. (BEST has neither used nor QAed all of the data that they posted as processed, and so it is a modest caveat emptor when you use their data.) Murphy the All Powerful assured a conflict between that and my approach and I decided it was not worth the effort to retrench as BEST has the big Mo and bad ears anyway.

      Finally, to me establishing the time behavior of the correlation would be paramount. It is essential to identify that which is demonstrated by the data and that which must be assumed in the calculations, if the calculations are to be meaningful in context.

      • Brandon Shollenberger

        mwgrant:

        I appreciate your answering in detail–likely time dependence of the correlation function has bothered me for a while and I have been surprised that the topic had not been broached. (I obviously had missed your comments–but missing comments is easy to do.) I was surprised when I first read Rohde’s papers (2011 then 2012).

        I’ve been disappointed by the lack of response on a number of points. Even when I raised basic issues like station counts (there were mismatches in papers), I was seemingly ignored. I’ve tried figuring out how seasonality was handled, and again, nothing.* Then when I saw a key assumption underlying their calculations of errors is apparently wrong…

        Thanks for the briefing on how you arrived at your assertion. (I assume they are here or at Lucia’s and can track them down.) Your back-of-the-envelope approach seems quite reasonable (and informative).

        I think those are the only two sites I discussed it at. I don’t remember just which posts it was on, but I do remember when I brought up the concerns again later. Zeke said he’d ask one of the authors about it (while acknowledging its importance), but I think he forgot. And then Mosher gave a non-responsive response. I never heard back when I pointed out he hadn’t addressed my concern. Ultimately, the only useful thing to come from that exchange was Mosher saying:

        The ASSUMPTION is that this structure stays the same going back in time. under that assumption you can calculate an uncertainty due to spatial coverage. without that assumption you have bupkiss.

        If someone raises a concern that could mean “you have bupkiss,” would you really go five months without doing anything to address it?

        *It’s possible I just missed it in the code BEST released, and if not, that it is included in the new code dump. I can’t tell because the new code is packaged with data in a 500+ MB file I can’t download on my current connection.

      • Brandon,

        Thats really a question for Robert Rohde. I’d suggest emailing him. I’ve been a tad less involved with the Berkeley project over the last 6 months due to work, unfortunately (unlike those lucky academics, I don’t get paid to do climate science :-p).

      • Brandon Shollenberger

        Zeke Hausfather:

        Thats really a question for Robert Rohde. I’d suggest emailing him. I’ve been a tad less involved with the Berkeley project over the last 6 months due to work, unfortunately (unlike those lucky academics, I don’t get paid to do climate science :-p).

        I’ll probably try getting in touch with him once I’ve examined things more. I always thought this would be something that got cleared up (a bit) before any papers got published. Now I guess the only way I’ll get an answer is to do some work myself.

        But for the moment I’m a bit sidetracked. I need to download the newest code/data dump and see if I can figure out why BEST seems to have a more notable seasonal cycle than any other major temperature series.

        (By the way, is Rohde’s e-mail address posted somewhere?)

      • Brandon Shollenberger

        Which reminds me, is there an updated version of the BEST global record? When I go to the Results Summary page, I get directed to this file which only goes to November of 2011. All the regional files go to July 2012. I’d like to use the most up-to-date data, but it’s hard when I can’t find it for one series.

      • Brandon,

        The most up-to-date data is available here: http://berkeleyearth.lbl.gov/regions/global-land

        You can find contact info for Robert here: http://www.globalwarmingart.com/wiki/Global_Warming_Art:Contact

      • Brandon Shollenberger

        Thanks Zeke. Do you know why the two files have notably different values? I don’t recall seeing anything posted about a change in methodology, but there is a significant non-random difference in the two. To show what I mean, here is the difference from 1900 on. The two files do say they use different amounts of time series (36,853 vs. 37,145), but an increase in series of less than one percent shouldn’t cause a difference like that.

        And it’s not just the temperature values which have changed. The uncertainty ranges have changed quite a bit as well. The combined effect of this is that a full 13% of the current values since 1960 fall outside the 95% uncertainty ranges of the old series (as opposed to the 0.1% before 1960). I’d consider that a pretty big change, and it’s disturbing to see both files on that website at the same time with no apparent explanation for the difference.

        Incidentally, those uncertainty ranges are screwy. I challenge anyone to explain to me why there should be a huge step change around 1955. Or why the new data file has huge uncertainty excursions shortly after that which weren’t present in the earlier version. Seriously, these are the uncertainty values of 1968 in the most up-to-date file:

        0.066
        0.156
        0.162
        0.107
        0.230
        0.158
        0.187
        0.280
        0.530
        0.094
        0.092
        0.054

        That doesn’t make sense.
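[The within-year spread of those quoted 1968 values can be quantified with a quick check. The numbers below are copied from the comment above; the max/min ratio is just one simple way to flag the oddity, not anything from BEST's own methodology.]

```python
# The twelve monthly uncertainty values for 1968 quoted above.
unc_1968 = [0.066, 0.156, 0.162, 0.107, 0.230, 0.158,
            0.187, 0.280, 0.530, 0.094, 0.092, 0.054]

# Months of a smoothly varying record should not differ by an
# order of magnitude within one year; the max/min ratio flags it.
spread = max(unc_1968) / min(unc_1968)
print(f"max/min spread within 1968: {spread:.1f}x")
```

A nearly tenfold spread within a single year is the "screwy" feature being pointed at.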

    • Brandon Shollenberger

      Since I was talking to mwgrant about the subject, I decided to do a quick examination of the correlation of some BEST data. I picked North America, South America and Europe as my regions to examine. I decided to focus on data post 1900 because of coverage, and I chose continents since they should have the most consistent correlations. Here are graphs showing correlation for ten year periods:

      Europe vs. North America
      Europe vs. South America
      North America vs. South America

      Similar variations are seen if you use different period lengths or examine all periods (increasing period length with time). Larger ones will be found if you compare smaller regions (or include earlier years), including ones that are distinctly non-random.

      Speaking of which, something caught my eye. A while back I noticed some series still had clear seasonal cycles after BEST “removed” them. That made me wonder how BEST handles seasonal variations. I tried but failed to find out (and got no response from Mosher when I asked him). I now wish I had pursued the matter further as this is an ACF graph for BEST’s North America temperature series. It looks like there is an obvious seasonal cycle.

      (None of this is conclusive. I put it all together in about 20 minutes. Still, I think it’s enough to merit concern.)
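[The ACF check mentioned above is easy to reproduce. This sketch uses a synthetic "anomaly" series with a deliberately incomplete seasonal removal; a residual annual cycle shows up as positive ACF peaks at lags 12 and 24 months, which is the signature being described for the North America series. The series and amplitudes are hypothetical.]

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function out to max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return [float(np.dot(x[:-k], x[k:]) / denom) if k else 1.0
            for k in range(max_lag + 1)]

# Synthetic monthly series with an incompletely removed
# seasonal cycle: a small annual component remains.
rng = np.random.default_rng(1)
months = np.arange(600)
residual_cycle = 0.4 * np.sin(2 * np.pi * months / 12)
series = residual_cycle + rng.normal(0, 0.3, 600)

r = acf(series, 24)
# A residual annual cycle appears as peaks at lags 12 and 24
# and a trough at lag 6.
print(round(r[6], 2), round(r[12], 2), round(r[24], 2))
```

A fully deseasonalized series would instead show an ACF near zero at all of those lags.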

      • Brandon

        This is just a quick response to the graphs. At first blush, yes, I would think they register on a Bupkiss Sensitivity scale. That said, a couple of observations from this quarter.

        First, the range and correlation length in Figure 1 of the methodology supplement are on the order of 3000 kilometers and a little more than 1000 kilometers, respectively. I expect most of the pair distances in the intercontinental correlations to exceed that or lie in the upper part of the range, i.e., they do not shed a lot of light directly on the interesting part of the spatial structure. However, the figures do show the intercontinental correlations changing in time, and this is enough to merit attention in the paper.

        Working with states data in the US, e.g., the Midwest, one could examine the time behavior of the correlation function/semi-variogram on a regional scale. In addition one could look at effects of decreased sample density in time and clustering on the correlation. If local kriging errors were a part of the effort one also could start to get a read on the spatial distribution of the error over time–an insightful handle on just what we can say when we go back in time, at least from one established, documented perspective. I would hope that there are things like this to varying degrees of completion in project files. BTW note the following text lifted from p. 9 of the May 2012 ‘Averaging Process Paper’:

        NOAA also requires the covariance matrix for their optimal interpolation method; they estimate it by first constructing a variogram during a time interval with dense temperature sampling and then decomposing it into empirical spatial modes that are used to model the spatial structure of the data (Smith and Reynolds 2005). Their approach is nearly ideal for capturing the spatial structure of the data during the modern era, but has several weaknesses. Specifically this method assumes that the spatial structures are adequately constrained during a brief calibration period and that such relationships remain stable even over an extended period of climate change.

        I’m not sure what I’d make of that. On the surface it suggests considerable thought was spent on time behavior, but there is no clear stand-alone development of the topic.

        Regarding attention to time dependence of correlation in the papers: 1.) As Mathew R Marler says below, “alternative models for the spatial autocorrelation could be the topic of someone’s PhD dissertation”, and I am reasonably confident that has also been evident for a while to BEST [e.g., see above]; 2.) Mosher acknowledged bupkiss thoughts regarding the importance of the time independence assumption; 3.) you raised the issue as an outsider to members of the BEST team earlier; and 4.) Marler shares some doubt on the topic below. These alone suggest that as a matter of due diligence at least some additional qualitative discussion should appear in the final public BEST documentation record; at some point before that, the question needs airing before the scientific community. Hints are there, but development is incomplete.

        The canned response of course is ‘well, it is a work in progress.’ Unfortunately the chosen approach to publishing has been the modern paradigm: PR first, throw the incomplete preliminary work out to elicit responses second, then third, clean up. To me it doesn’t seem to work very well: ‘it is a work in progress’ is too easy an out and encourages creeping rationalization. Practically, step 3, cleanup, is a killer for all parties involved because by that point PR issues have sufficiently roiled the waters. IMO this approach really tends to undermine healthy skepticism.
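[The regional semi-variogram study sketched in this comment (e.g., for the US Midwest) would start from the classical Matheron estimator. Below is a minimal illustration on synthetic station data; the coordinates, field, and units are hypothetical, and a real analysis would repeat this per decade to test the time-independence assumption discussed above.]

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """Classical (Matheron) semi-variogram estimator:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs in each distance bin."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    n = len(values)
    dists, sqdiffs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            sqdiffs.append(0.5 * (values[i] - values[j]) ** 2)
    dists, sqdiffs = np.array(dists), np.array(sqdiffs)
    idx = np.digitize(dists, bins)
    return [float(sqdiffs[idx == k].mean()) if np.any(idx == k) else np.nan
            for k in range(1, len(bins))]

# Hypothetical stations on a 200 km x 200 km region: a smoothly
# varying field plus observation noise (km and deg C illustrative).
rng = np.random.default_rng(2)
xy = rng.uniform(0, 200, size=(300, 2))
field = np.sin(xy[:, 0] / 60.0) + np.cos(xy[:, 1] / 60.0)
z = field + rng.normal(0, 0.1, 300)

gamma = empirical_semivariogram(xy, z, bins=np.arange(0, 220, 20))
print([round(g, 2) for g in gamma])
```

Computing `gamma` for station data from different decades and overplotting the curves is one direct way to see whether the spatial structure is actually stable in time.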

      • Brandon Shollenberger

        mwgrant:

        First, the range and correlation length in Figure 1 of the methodology supplement are on the order of 3000 kilometers and a little more than 1000 kilometers, respectively. I expect most of the pair distances in the intercontinental correlations to exceed that or lie in the upper part of the range, i.e., they do not shed a lot of light directly on the interesting part of the spatial structure.

        Indeed. The last time I did a similar test I used countries instead of continents. In some cases, the results were notably more disturbing. Even tests between countries and the continents they’re part of showed similar results. I don’t think there’ll be a change at finer resolutions that negates the problem. I’d test myself but without Matlab, I’m not sure how much work it’d be to do.

        I would hope that there are things like this to varying degrees of completion in project files.

        I’m cynical, but I don’t think there are.

        Practically, step 3, cleanup, is a killer for all parties involved because by that point PR issues have sufficiently roiled the waters. IMO this approach really tends to undermine healthy skepticism.

        That step is especially difficult here given BEST’s bad PR decisions. I’ve highlighted exaggerations in BEST’s website and at least two of Muller’s op-ed pieces. If you consistently exaggerate your work, how can you be expected to address issues that diminish your work?

        Anyway, BEST has published their results paper with a stated assumption underlying their uncertainty calculations that is known not to work. Either it isn’t true or their results aren’t in line with it. I don’t get that.

      • Brandon-

        Indeed. The last time I did a similar test I used countries instead of continents. In some cases, the results were notably more disturbing. …

        Yes, one would expect that given the longer-range time variation. It is just nice to exercise things at the same scale as the correlation length and range–and you had done so.

        …Even tests between countries and the continents they’re part of showed similar results. I don’t think there’ll be a change at finer resolutions that negates the problem.

        Tests with multiple states or countries (Europe) give you information where pair separations fall in the range and should be most telling. In the simple case go out much further and you are essentially in a distance regime where there is no spatial correlation and drift might be evident. The game for kriging is in closer. That’s where you chase detail.

        I’d test myself but without Matlab, I’m not sure how much work it’d be to do.

        Well, again, I think what you have done is sufficient for raising a flag at this time. Zeke suggests you contact Rohde. Well that might clear up* things in a hurry, or at least remove some chaff from the mix. Otherwise it is slog out the calcs–with no guarantees on whether it goes anywhere. With Zeke checking in on it today, it does seem like Rohde or calcs. One can empathize with “don’t get paid to…”, huh?

        * I am not an optimist; I win more bets that way.

      • Brandon Shollenberger

        mwgrant:

        Well, again, I think what you have done is sufficient for raising a flag at this time. Zeke suggests you contact Rohde. Well that might clear up* things in a hurry, or at least remove some chaff from the mix. Otherwise it is slog out the calcs–with no guarantees on whether it goes anywhere. With Zeke checking in on it today, it does seem like Rohde or calcs. One can empathize with “don’t get paid to…”, huh?

        That’s what I’m worried about. This isn’t my job. It’s not even in my field of expertise. The idea of working through the calculations to figure out how this issue impacts their uncertainty estimates is… daunting. The fact I’d have to rewrite nearly everything they’ve done in another programming language just makes it worse.

        I don’t hold much hope for what could come from contacting Rohde. I can’t imagine anything he could say that would explain why he hasn’t tested this before. And if he has tested it before, I can’t see why it would never have been discussed. That, plus the fact I’m really not the right person to pursue this matter, makes me hesitant to e-mail him.

        I probably will anyway. I just want to figure out what’s with the seasonal cycle in BEST before I do. The fact there’s a clear seasonal cycle at continental and global scales shocks me. Kriging requires correlation calculations. A failure to remove seasonal cycles will necessarily impact those correlations. I doubt it would change the overall results, but it could change area/regional trends, and it would certainly impact the uncertainty calculations.

        (To be honest, I’m focusing on that issue because it’s simpler. I don’t feel quite as overwhelmed by it.)

      • Brandon Shollenberger-

        I probably will anyway.

        Hmmm, I assumed that would be the case.

        I just want to figure out what’s with the seasonal cycle in BEST before I do. The fact there’s a clear seasonal cycle at continental and global scales shocks me.

        Good choice. For now an assumption on the time independence of the correlation function just may be one of the things that people have to accommodate–ideally BEST eventually will get upfront with a discussion about that aspect. When tackling time dependence we’re talking about a lot of calculations and postprocessing–even when clever. Also, as you are undoubtedly aware, there are a number of practical issues that have to be ‘resolved’ if one has to go by a route other than using the BEST code. A number of folks will be interested in what you have to say on the seasonality question.

        [As an aside, I still hope to get back to my geostatistics regional exercise with the US data or a subset. However, that part-time effort can’t be considered for about 3 months–other plans and motivation.]

        Kriging requires correlation calculations.

        This is not a problem when using canned geostatistics software or even in R–but it should not be treated as plug and chug. There is some art in it, and one should have a certain amount of comfort with geostatistical concepts before diving in. I recommend R or an off-the-shelf code such as GSLIB (a suite of command-line codes). To me there is no need to duplicate the BEST approach in detail–the objective is to further study the spatial correlation and related issues with BEST data, and not to verify the BEST results. BEST will produce verification in time as a part of their QA or they won’t. That’s their problem/joy.

        For me a study of spatial correlation is much more easily handled using documented non-custom codes, e.g., GSLIB. [There could be dataset size limitations, but if one is looking at things on regional scale and having modest goals in the study that is likely not a problem.]
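[For readers unfamiliar with the machinery under discussion: ordinary kriging produces both a local estimate and a local error (the kriging variance) from the same linear system, which is the point made earlier about BEST not reporting the latter. This sketch assumes an exponential covariance with an illustrative 1000 km range parameter; the stations and values are hypothetical, not BEST's actual configuration.]

```python
import numpy as np

def ordinary_kriging(coords, z, target, cov):
    """Ordinary kriging estimate and kriging variance at one target
    point, given a covariance function cov(h)."""
    coords = np.asarray(coords, float)
    n = len(z)
    # Covariance matrix between data points, augmented with the
    # unbiasedness (weights sum to 1) constraint row/column.
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = cov(np.linalg.norm(coords[i] - coords[j]))
    b = np.ones(n + 1)
    b[:n] = [cov(np.linalg.norm(coords[i] - target)) for i in range(n)]
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = float(w @ z)
    # Kriging variance: the "local error" a full geostatistical
    # treatment would map alongside the estimate.
    variance = float(cov(0.0) - w @ b[:n] - mu)
    return estimate, variance

# Illustrative exponential covariance, 1000 km range parameter.
cov = lambda h: np.exp(-h / 1000.0)

# Four hypothetical stations (km coordinates) and anomalies (deg C).
coords = [(0, 0), (500, 0), (0, 500), (500, 500)]
z = np.array([0.2, 0.4, 0.1, 0.5])

est, var = ordinary_kriging(coords, z, np.array([250.0, 250.0]), cov)
print(round(est, 3), round(var, 3))
```

Mapping `var` over a grid for different decades is exactly the "kriging error maps over time" exercise described in the comments above.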

        A failure to remove seasonal cycles will necessarily impact those correlations.

        [Shudder.]

        I doubt it would change the overall results, but it could change area/regional trends, and it would certainly impact the uncertainty calculations.

        [Shudder, again.] I don’t think results from a ‘routine’ kriging exercise would be worth the cost of the electrons used in the calculations. Maybe clever minds can extract something from it but I suspect that quite a bit of conceptual underpinnings would have to be developed and/or laid out–much more than seen to date.

        Again, thanks for sharing your thoughts on the time dependence. Bon chance.

      • Brandon Shollenberger

        mwgrant:

        Hmmm, I assumed that would be the case.

        I’m starting to wish I hadn’t even started looking into BEST. It seems I keep running into obvious problems. Even ignoring the difference in files I raise there, there’s a huge step change in BEST’s uncertainty ranges. How in the world could those shrink by more than half in the space of a few years? It’s not like there’s a notable change in station density around that point or anything.

        A number of folks will be interested in what you have to say on the seasonality question.

        I’m still dumbfounded at the idea BEST has a stronger seasonal cycle than any other record I look at. If it’s new and improved methodology, how can it do a worse job on such a (relatively) simple part? And how is it nobody ever noticed?

        To me there is no need to duplicate the BEST approach in detail–the objective is to further study the spatial correlation and related issues with BEST data, and not to verify the BEST results.

        I don’t want to duplicate the BEST approach. The problem is I can point out problem after problem, but eventually somebody will have to look and see how much they matter. And if nobody on the BEST team cares about the points I make…

        Bon chance.

        Thanks!

      • Brandon-

        I’m starting to wish I hadn’t even started looking into BEST. It seems I keep running into obvious problems.

        I have felt that way a number of times, but keep coming back. I happen to think that the application of geostatistical methods could really bring into focus and facilitate the consideration of uncertainty. But BEST does not actually apply any geostatistics to the uncertainty, instead taking more of an add-on approach (outside of the kriging). I suspect flaws there may well lead back to the issues you are poking at. Or maybe it goes back to the ‘real’ data, i.e., the QC ‘nonseasonal’ station numbers input into the BEST averaging process or the raw data underneath those ‘data’. It certainly seems some substantive changes have occurred between March and December of 2012.

        One thing that comes to mind for me is that you are looking at the average of the gridded values in the case of the anomaly, and post-kriging processing for the estimated uncertainties. Unwinding the cause of what you have picked up may require a little determination and luck. (Hence, your comment?!)

        For kicks and giggles, I pulled the numbers you looked into R and played around a little–old habits die hard. Here are some observations from that:

        1.) For the record I reproduced your figures.

        2.) The number of series and observations have increased from March to December. However, this could well be a ‘net’ increase with a number of observations in the March compilation removed. Also maybe some ‘outliers’ have now been culled from the additions. Who knows, just got to assume it is documented.

        3.) I plotted the new anomalies (December) versus the old (March) anomalies. Scatter was very low and the plot is very linear as one would hope. [The adjusted r^2 of the regression is 0.9888 with 1341 degrees of freedom.]

        4.) I also plotted the new uncertainties versus the old uncertainties. This has much more scatter which is also a little lumpy in appearance. [The adjusted r^2 of the regression is 0.7495 with 1341 degrees of freedom.]

        5.) Scatterplots of uncertainty versus anomaly for the two data sets look similar. One interesting feature in both is that the data appear to be clustered into two diffuse but distinct regions. High uncertainty has some tendency to be associated with lower anomalies. Also there is much more variation in the uncertainty for the higher uncertainty cases. Upon reflection these plots provide essentially the same information as your time plots–where one sees two clusters in contrast to the step change. A color scheme can enhance the contrast.
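[The regression comparisons in items 3.) and 4.) can be reproduced schematically. The series below are synthetic stand-ins (the real inputs were the March and December BEST files); they are tuned only to show the qualitative contrast between a well-preserved anomaly series and noisier uncertainties, not the exact adjusted r^2 values quoted. Note n = 1343 gives the 1341 degrees of freedom mentioned above.]

```python
import numpy as np

def vintage_adj_r2(old, new):
    """Adjusted R^2 of a simple linear regression of the new data
    vintage on the old, mirroring the March-vs-December comparison."""
    old, new = np.asarray(old, float), np.asarray(new, float)
    slope, intercept = np.polyfit(old, new, 1)
    resid = new - (slope * old + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((new - new.mean()) ** 2)
    n, p = len(old), 1
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Synthetic stand-ins: anomalies nearly preserved between vintages,
# uncertainties perturbed much more strongly.
rng = np.random.default_rng(3)
old_anom = rng.normal(0, 0.5, 1343)
new_anom = old_anom + rng.normal(0, 0.05, 1343)
old_unc = rng.uniform(0.05, 0.5, 1343)
new_unc = old_unc + rng.normal(0, 0.08, 1343)

r2_anom = vintage_adj_r2(old_anom, new_anom)
r2_unc = vintage_adj_r2(old_unc, new_unc)
print(round(r2_anom, 3), round(r2_unc, 3))
```

The large gap between the two adjusted r^2 values is the pattern observed: the kriged anomalies barely moved between vintages while the uncertainty calculations changed substantially.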

        So in my own meandering way I arrived at the same point as you and grok what you are pointing out. Every time I look at something there is a glitch or problem. Instinctively or through boneheadedness I cannot shake the idea that the best thing about BEST is an emphasis on geostatistical methods and the worst thing is not following through with them regarding the treatment of uncertainty–instead patching a composite scheme on top of gridded data.

      • Brandon Shollenberger

        mwgrant:

        Unwinding the cause of what you have picked up may require a little determination and luck.

        I suspect it will take more than a little of both. A step change in the uncertainty levels suggests a design issue or bug, and finding those in a system you’re not familiar with is difficult. The seasonality issue will require going through BEST’s procedure for removing seasonal cycles as BEST doesn’t describe how it’s done. As for the files changing… I have no idea. I might find an explanation on my own, but if I do, it will be purely because of luck (or because it was documented).

        1.) For the record I reproduced your figures.

        That’s always good to hear. I’d hate to find out an “issue” I discovered was really just a screw up on my part.

        Who knows, just got to assume it is documented.

        I remember seeing tons of comments by Steven Mosher about documenting changes. I hope that means there’s documentation on this.

        One interesting feature in both is that the data appear to be clustered into two diffuse but distinct regions. High uncertainty has some tendency to be associated with lower anomalies.

        That’s an interesting observation. I hadn’t noticed that before. I’ll have to give some thought as to why that happens. I know I’ve seen that sort of thing arise from improper uncertainty calculations, but there are other possibilities.

        Instinctively or thru boneheadedness I can not shake the idea that the best thing about BEST is an emphasis on geostatistical methods

        I think they definitely have a good idea here. I’m just starting to become more and more convinced they implemented that idea poorly.

      • Brandon Shollenberger-

        Just FYI, and to try something new, I stuck some of the images (redos) in a Photobucket folder:

        The Uncertainty vs. anomaly (December) scatterplot — http://s1285.beta.photobucket.com/user/mwgrant1/media/miscClimate/Tavg-UncvsAnom_zps8567ccab.png.html?sort=3&o=2

        An uncertainty vs. anomaly (December) smoothed scatterplot for the same data–it gives a little better handle on point density —
        http://s1285.beta.photobucket.com/user/mwgrant1/media/miscClimate/AUsmoothed_zps2be97b58.png.html?sort=3&o=0

        Again a smoothed scatterplot this time for December uncertainty vs. March uncertainty —
        http://s1285.beta.photobucket.com/user/mwgrant1/media/miscClimate/UUsmoothed_zps74eded43.png.html?sort=3&o=2

        and one for the December anomalies versus March anomalies —
        http://s1285.beta.photobucket.com/user/mwgrant1/media/miscClimate/AAsmoothed_zps4fec735c.png.html?sort=3&o=1

        The last figure for anomalies shows pretty good consistency in the two sets of anomalies–ultimately calculated from the kriging part of the exercise, and the next to last figure (uncertainties) suggests that the uncertainty calculations changed more between March and December. Certainly the use of the smoothed density nicely shows where clumpiness is lurking. But it is risky to speculate.

        Oh well, end of exercise. [I hope the links work after I hit the post reply button.]

        Finally,

        I think they definitely have a good idea here. I’m just starting to become more and more convinced they implemented that idea poorly.

        That is where I am at and going more in that direction as time passes. The PR stuff is just so unfortunate, as it may eventually preclude or hinder shaking out the bugs.

      • The title was mislabeled in the 2nd figure: “An uncertainty vs. anomaly (December) smoothed scatterplot for the same data–it gives a little better handle on point density ” –

        so try this:

        http://s1285.beta.photobucket.com/user/mwgrant1/media/miscClimate/AUsmoothed_zps4a2f4949.png.html?sort=3&o=0

      • Brandon Shollenberger

        mwgrant, those are nice graphs. I’ve always been terrible with making things look good, and sometimes I hate it because a good graph can make things very easy to see.

        The PR stuff is just so unfortunate, as it may eventually preclude or hinder shaking out the bugs.

        For all the PR stuff, I have to wonder how much feedback BEST has gotten. The entire purpose of the pre-releases was to get more eyes to examine the work. I don’t see how things like strong seasonal cycles could slip by any real examination. Heck, Steve McIntyre made a post (or two?) questioning BEST’s handling of seasonal cycles when the first pre-releases were made.

        I guess the question is, did they not get good feedback, or did they not listen to what they got?

  16. Dear Mosher –> You seem to have inside information, as in :

    ” Muller has nothing to do with the journal. Muller didn’t even know about the journal until it was presented as an option. “

    So tell us all please — Why was this paper published in this shockingly obscure, brand-new journal? Was it actually Peer-Reviewed (notice the initial caps please) — was it really sent out in its entirety to at least three world-class respected experts in the necessary fields, let’s say climate and statistics and computer modelling for instance, and thoroughly vetted, revised, etc before publication? Or was it reviewed by a single editor? and if so, whom?

    Please only reply with what you know for certain for yourself from your actual personal experience. If you are going to relate “what you’ve been told” — please tell us what you have been told and by whom….supply quotes, please.

    • Steven Mosher

      So tell us all please — Why was this paper published in this shockingly obscure, brand-new journal? Was it actually Peer-Reviewed (notice the initial caps please) — was it really sent out in its entirety to at least three world-class respected experts in the necessary fields, let’s say climate and statistics and computer modelling for instance, and thoroughly vetted, revised, etc before publication? Or was it reviewed by a single editor? and if so, whom?

      1. Why was it published? The editor and the reviewers thought it was important work and good work.
      2. Was it Peer Reviewed. Yes. There were three reviewers. I read the reviews and then checked our final draft to make sure that we addressed the points that we thought needed to be addressed.
      3. Was it sent out in its entirety? Yes. I prepared the final draft.
      4. Was it sent to 3 world class experts in climate and stats? The reviewers’ identities are not revealed, so I can only infer from their comments. They understood what we were doing and made helpful suggestions. This was in contrast to previous reviewer comments at other journals who seemed to struggle with kriging, so a geostats journal seemed the better fit.

      I can assure you from personal first hand knowledge that “Muller didn’t even know about the journal until it was presented as an option.”

      • A geostats journal that had yet to publish, anything? Do you know how long they were sitting around waiting for someone, anyone, to submit a paper?

      • Mosher — Just to clarify for us — you state “geostats journal ” — yet there was nothing ‘geostats’ about this journal at the time of your submission except the name – which is the only thing that existed. It had no prior publication – your paper is the ONLY paper it has EVER published. Your paper is published in Volume 1 Issue 1.

        Why in the world would a group such as BEST want to publish this paper of which they are so proud in a journal with ZERO prestige, ZERO history, ZERO recognition, gee….ZERO anything?

        Who does BEST think is going to just accept that this until-this-moment-nonexistent journal can be trusted to have peer-reviewed this paper to the full satisfaction of the greater field of Climate Science?

        How does this move, publishing in a brand-new infant journal, forward BEST’s purpose of putting all the doubts and controversy about the temperature record finally to rest.

      • “2. Was it Peer Reviewed. Yes. There were three reviewers. I read the reviews and then checked our final draft to make sure that we addressed the points that we thought needed to be addressed.”

        It would be interesting to know why the reviewers from journals that had actually published scientific papers in the past rejected BEST. It seems that it was submitted to multiple journals and received multiple thumbs down, before going to G&G.

      • Steven Mosher

        Hi Kip.

        Good questions.

        Mosher — Just to clarify for us — you state “geostats journal ” — yet there was nothing ‘geostats’ about this journal at the time of your submission except the name – which is the only thing that existed. It had no prior publication – your paper is the ONLY paper it has EVER published. Your paper is published in Volume 1 Issue 1.
        ############
        Somebody has to be the first. There was a choice between 2 journals where we could be assured that the reviewers did not require tutorials in kriging.

        #################

        Why in the world would a group such as BEST want to publish this paper of which they are so proud in a journal with ZERO prestige, ZERO history, ZERO recognition, gee….ZERO anything?
        ##################
        1. Prestige didn’t matter to guys who have Nobel Prizes already.
        2. History? We enjoyed the idea of setting a standard. Being first was an honor.
        3. Recognition? That only seems to matter to skeptics who argued that peer review wasn’t important anyway.

        Basically, we liked the idea of being judged on the quality of the science by people not tainted by the kind of nonsense we have seen in other places.
        ####################

        “who does BEST think is going to just accept that this until-this-moment-nonexistent journal can be trusted to have peer-reviewed this paper to the full satisfaction of the greater field of Climate Science?”

        The researchers already using the data are happy to cite the journal. They didn’t need peer review to tell them the result was sound. They needed peer review for the check box that it is. The only folks who object are people who don’t like the answer. Actual folks who want to use the data and have a citation are happy. Go figure.
        #####################
        “How does this move, publishing in a brand-new infant journal, forward BEST’s purpose of putting all the doubts and controversy about the temperature record finally to rest”

        The doubts were put to rest when the data and code were published. That is the acid test. From my perspective (see what I’ve said elsewhere) peer review is a check box. Now that we have the check box, of course skeptics change their tune and suddenly think that being the 1000th paper is somehow most important. In short, when skeptics had no peer reviewed papers, they said peer review didn’t matter. Now that we have peer review, the same folks want to move goal posts they thought were unimportant to begin with.

        In fact, you might think that a new journal was selected merely to point out to people that skeptics will change their tune at the drop of a hat.

      • Rather amusing to watch “skeptics” who can’t say enough how much they discredit peer review, the way the system is based on impact factor, the way that the CAGW-cabal controls the science by only recognizing studies published in “elite” journals, run themselves in circles to discredit this publication.

        The whole BEST initiative is a Rorschach test – as are opinions about those involved: Muller, Mosher, etc. are viewed as completely different people on the basis of the product of their science. It’s a good thing that so many “skeptics” are engineers – it gives them the skills to “reverse engineer” their opinions based on how they feel about the outcomes of science.

      • Everyone here — now is your chance to weigh in.

        Are you satisfied with Mr. Mosher’s answers?

        Do they make sense to you?

        Does he appear to you to be dissembling? or being upfront?

        Those of you who are publishing CliSci researchers –> do his answers seem at all reasonable to you on a professional basis?

        Dr. Curry — care to weigh in on this point about the decision of BEST to publish this particular paper in this particular journal?

        Does anyone else have any more questions to place before Mr. Mosher on this issue?

        PS: I wish to thank Mr. Mosher for taking the time to answer these questions here. The greater CliSci community, of course, will be the judge of whether or not this paper as published in this journal has been actually and factually properly peer-reviewed and whether or not it will be considered part of the accepted literature.

      • The failure to get the BEST paper published in a climate science journal goes against the skeptical view that pro-AGW papers always get published because of inside deals. It is not that simple. This speaks well of the filtering process in those journals and against bias. This paper was about as pro-AGW as they get, and wasn’t published in a climate science journal. Why it wasn’t published there, we can only speculate, but I think the climate science journals are not so interested in new statistical approaches, and the signal extraction was quite basic not adding any novel scientific insights.

      • We believe that thing about being honored to be first, Steven. G&G probably rejected a lot of Nobel Prize winners, before BEST came along. Is the moon still made of green cheese, Steven?

      • Kip Hansen @ 7.19: even the most august and ancient of journals had a first issue and a first published paper. The issue is surely the quality of the published work, which I am not qualified to judge, but which from the discussion here seems to have merited publication. In years to come, the new journal might be highly valued and revered for its first published paper. Or not, let’s deal with the paper on its merits.

      • Steven Mosher

        It would appear that Kip Hansen wants to engage in “redefining the peer reviewed literature.” Gosh, you actually get to see skeptics doing the very thing that Jones was criticized for.

        I wonder, Joshua, do you suppose that somebody could have been bright enough to foresee that skeptics would take this slant? Hehe, I love the smell of irony in the morning.

      • Steve, you say that you prepared the final draft. Why is it that you were not an author? Also, do you care to mention how many journals rejected Best before settling on the present journal?

      • I will help you Jim D. It did not get published in a Team climate science journal, because the Team does not like the publicity seeking Muller, trying to hog their spotlight. Wouldn’t you have a good laugh if Spencer, or Lindzen published something in the G&G, aka journal of last resort? It’s amazing how you people can spot hypocrisy everywhere, but in your own house.

      • JimD, “This paper was about as pro-AGW as they get, and wasn’t published in a climate science journal. ”

        I don’t think it is pro AGW or anti, I think it is pro science. Face it, they hacked off just about everybody, what is more pro science than that :)

      • Kip , I for one am satisfied with Mosher’s answers, though I should remind him the ln(prechauns) are pretty common.

        Heck, even this brand new peer reviewed data seems to indicate an ln(prechaun) is having some fun.
        http://berkeleyearth.lbl.gov/auto/Local/TMIN/Figures/76.35S-33.96E-TMIN-Trend.pdf

      • Steven

        Not being one of the inner sanctum, I couldn’t give a fiddler’s f*** in which journal the paper was published.

        Also pal peer review doesn’t really give me goose pimples, either.

        It’s all about “content”.

        And (assuming we’re still discussing the “making it fit” paper) this seems to me to be a model-driven exercise in futility.

        Maybe you (or someone else) can convince me otherwise. I’m open to almost anything (except, pardon the expression, “arguments from ignorance”).

        Max

      • Steven Mosher

        “Steve, you say that you prepared the final draft. Why is it that you were not an author? Also, do you care to mention how many journals rejected Best before settling on the present journal?”

        I am not an author because I didn’t do any of the writing. The paper was written by the time I joined the team. Zeke and I basically sat in meetings and made suggestions. One of us (or maybe it was Robert) suggested the CO2 fit to temperature. In my book just making a suggestion or wordsmithing here and there doesn’t mean you are an author. For the final draft my role was pedestrian. First I looked at all the reviewer comments and made sure that the valid questions got answered and that errors got corrected. Then I made sure that the paper met the guidelines: citations, abstract, etc. Secretaries are not authors in my mind.

        Journals. It was submitted to one journal as I recall. That journal wanted to have the methods paper published first before they would consider the results paper. Since no other surface temp results paper even has a methods paper, that seemed a bit odd. It was also odd since the results paper confirmed what was already known. Go figure: it confirms known science and extends the record back beyond 1850.

      • Steven Mosher

        manacker, you are welcome to disregard the bits about the CO2 fit. For me that is a minor part of the paper. As I recall it came about from a discussion we had about the data prior to 1850. One of us (me, Zeke, or Robert) suggested looking at the relationship between volcanoes and that data; in short, is that early data confirmed by anything else? That grew into the CO2 fit exercise. Obviously some people think it’s too simple; others dismiss it as curve fitting. In Steve’s world the results paper would just be the results. The DTR stuff was way more interesting; the CO2 and AMO stuff, not that interesting. But this is not Burger King and I don’t get things my way.

      • moshe, it’s good to hear you are not totally touting the attribution part. Ockham is fine except when the simplest ain’t. Nice fit, though, but not to the millikelvin, dagnabit.
        =======

      • Steven Mosher

        Sure, kim. I just look at the structure of the arguments. Attribution is a thorny, messy epistemic briarpatch. It’s the area where you are most likely to see fundamental epistemic issues raised. Circumstantial evidence at best, but it’s evidence… kinda like the glove in OJ.

      • Twisted logic, Don. Publicity seeking won’t have disqualified this paper. Hansen and Trenberth have no trouble getting published. No, it is that the content probably didn’t advance the science, only confirmed it.
        capt.d. doesn’t see it as pro-AGW, but we just had a debate here about the log(CO2) fit, and they say there is no need to assume other sources of long-term natural variation, which looks like a pro-AGW comment that JC would never approve (hence not being an author).

      • JimD, “capt.d. doesn’t see it as pro-AGW, but we just had a debate here about the log(CO2) fit and that they say there is no need to assume other sources of long-term natural variation which looks like a pro-AGW comment that JC would never approve (hence not being an author).”

        I don’t think they said CO2 and aerosols done it, game over. They did a fit. There are other things that can cause that fit. Remember, since that fit was made, the magnitude of the aerosol forcing has been questioned, new stratospheric impacts discovered and the odd diurnal trend thingy has appeared. There is plenty of puzzle left to finish.

      • i wonder, joshua, do you suppose that somebody could have been bright enough to foresee that skeptics would take this slant? hehe, I love the smell of irony in the morning

        Like clockwork, dude. Like clockwork.

      • capt. d., they also had no qualms about the 3.1 C sensitivity that came out of the fit, which would have been a red flag to a non-AGWer. Regarding aerosols, yes, they probably assumed that those and the other non-CO2 GHGs roughly cancel, which agrees on the whole with the IPCC AR4 forcing.

      • Your logic is absent, Jim D. Hansen and Trenberth have no trouble getting published, because they are leaders of the Team. The Team do not like Muller. They don’t like publicity seeking dabblers in their climate science coming along and claiming to have done it better than they have been doing it all their lives. Hansen, Trenberth, Mann, Jones et al could have easily gotten this paper published. They would not have had to go to the journal of last resort to get their box checked. And references to the G&G paper in future climate science papers will be very few and far between, Jim D.

        All of the above is exactly what the skeptics mean when they say it’s pal review, not peer review. Mosher and his BEST team have a beef with the big Team, not the skeptics. It wasn’t the skeptics who rejected their paper and caused them the humiliation of shopping their paper around, until they landed in the journal of last resort.

      • Don, I will agree with you on the fact that the authors are not climate scientists, which makes it hard for them to publish in a climate science journal. They haven’t really broken new ground in that area with this paper. Reviewers who are in the field would probably have seen some naivety in those aspects of the paper. Sometimes you may give the lack of background or breadth a pass if it is a new Ph. D. student, but here they probably figured that the material would have a good chance elsewhere.

      • Jim D “but here they probably figured that the material would have a good chance elsewhere.” Elsewhere, turned out to be the journal of last resort. The big Team must be chortling in their beers.

      • Don, dataset-introducing papers tend to get very high citation counts. I am sure they won’t have any problem whatever the journal. The journal’s impact factor will become disproportionately large if this is one of their only papers.

      • That’s some halfway quasi-plausible spin Jim, but the reality is that G&G has zero credibility. And Mosher’s story that the BEST team is proud that their rejected paper landed there is ludicrous. And everybody knows it.

      • @manacker: Not being one of the inner sanctum, I couldn’t give a fiddler’s f*** in which journal the paper was published.

        The technical content of Max’s posts is at a similar level.

      • Matthew R Marler

        fwiw, probably not much, I like the decision to be the first to publish in a new journal. This paper will be highly cited, and represents really good work on a massive scale. Now everyone will want to publish in that journal.

        One paper can not possibly address every technical criticism, and I agree with Brandon Shollenberger and mwgrant that the assumption of constant spatial correlation across years and seasons is dubious. I expect that alternative models for the spatial autocorrelation could be the topic of someone’s PhD dissertation. Based on my limited experience, trying to estimate some seasonal variations in the spatial autocorrelations of a large set of spatial domains would increase the computation time by a factor of about 100, and produce coefficients in the model with large standard deviations — if the algorithm converged to a solution at all.

      • thisisnotgoodtogo

        Kip Hansen said

        “…ZERO prestige, ZERO history, ZERO recognition, gee….ZERO anything? ”
        Those were their glory days.

      • Kip and others

        Rather than trying to find dirt in the publication process look at the paper and give us your assessment of the science. It would be far more useful.

      • @HR: Kip and others: Rather than trying to find dirt in the publication process look at the paper and give us your assessment of the science. It would be far more useful.

        Unless “Kip and others” are “peers” in the sense of “peer review,” one could argue that it is far less useful.

        The Internet has made it possible for those who are clueless about how climate works to generate reams of unscientific garbage about climate. The point of peer review is to weed out all that garbage so that what remains doesn’t drown those who are trying to “assess the science” as you put it.

        This comes with the risk of monopolization of ideologies. With that in mind I’m all for allowing competing ideologies to express their views. But unless they provide their own competent peer reviewers these alternative perspectives are going to be lost in the vast unrefereed space of alternative theories. Who has time to evaluate every one of those when most of them are junk science?

      • @manacker: Also pal peer review doesn’t really give me goose pimples, either. It’s all about “content”.

        1. Content that hints at the coming cold gives Max goose pimples. It’s a natural reaction to cold.

        2. Talk of global warming will give Max goose pimples when hell freezes over.

        3. Appointments and Promotions committees have found peer review a more arms-length indicator of future productivity than goose pimples, which gets a bit personal.

        4. Something about vestigial fur or feathers, I can never remember which.

    • I believe Mosher’s point. Kriging may be a relatively unknown method in the geosciences whereas it’s becoming a lot more common in other fields and is better known to statisticians. Kriging does seem rather good at providing reliable surrogates in the presence of noise in our applications anyway.

        I believe Mosher’s point. Kriging may be a relatively unknown method in the geosciences whereas it’s becoming a lot more common in other fields and is better known to statisticians.

        That view is a patently absurd ‘history’ comment. The opening statement for the Wikipedia ‘Kriging’ entry provides a hint:

        Kriging is a group of geostatistical techniques to interpolate the value of a random field …. ( http://en.wikipedia.org/wiki/Kriging )

        Geostatistics has been applied in the geosciences for decades, and this includes many peer-reviewed publications in AGU journals and elsewhere. My first exposure, a Battelle determination of areal radon fluxes, was in 1980, and the practice was established by then. Much of the foundational work was in the geosciences (mining). One thinks of Krige, Matheron, Journel, Delhomme, Clark… and Soviet meteorological work (Gandin) in the 50’s and 60’s. Working from both theoretical and practical perspectives, Journel developed FORTRAN code for wider use in the 70’s. BTW, I also believe the Soviets are now credited with some lead-up work in the 1940’s, but that may be bad memory on my part.

        Geostatistics, while a well established and mature discipline, is still a specialized niche, and it may be that appropriate reviewers just were not selected. [This is how I read Mosher’s comment.] Another possibility is that Rohde’s treatment of it, I suspect, is not what those who use it (meaning competent users, but not researchers in the discipline) would expect. Also, the writing style is dense and a little ambiguous in places, compounding the effort, say if the reviewer was a user/applier as opposed to a more mathematical researcher. In any case you really should dismiss the idea that kriging is new in the geosciences. That just isn’t the case.
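        [Ed.: since kriging is central to the dispute above, here is a toy ordinary-kriging sketch to ground the discussion. It is illustrative only, not BEST’s implementation; the exponential covariance model and all of its parameters are assumptions.]

        ```python
        import numpy as np

        def exp_cov(d, sill=1.0, length=2.0):
            """Assumed exponential covariance model (parameters illustrative)."""
            return sill * np.exp(-d / length)

        def ordinary_krige(xy, values, target):
            """Estimate the field at `target` as a weighted sum of station values.

            The weights solve the ordinary-kriging system: covariances among
            stations, plus one Lagrange multiplier forcing the weights to sum
            to 1 (the unbiasedness condition).
            """
            n = len(values)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # station-station
            d0 = np.linalg.norm(xy - target, axis=-1)                     # station-target
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = exp_cov(d)
            A[n, n] = 0.0
            b = np.append(exp_cov(d0), 1.0)
            w = np.linalg.solve(A, b)[:n]
            return float(w @ values)

        # Four "stations" on a unit square; estimate at the centre.
        stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        temps = np.array([10.0, 12.0, 11.0, 13.0])
        print(ordinary_krige(stations, temps, np.array([0.5, 0.5])))  # 11.5, by symmetry
        ```

        Note that at a station location the estimate reproduces the station value exactly (no nugget term here), so the interpolated “field” still honours the data points.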

      • sorry about the typos–it’s getting cold…

      • So kriging is unknown to JGR? And if Mosh is correct, is it bad etiquette for a journal to ask for methods?
        As I asked near the beginning of this thread: why did JGR (or anyone else) reject this paper?

      • John another and by extension David Young-

        So Kriging is unknown to JGR?

        No; i.e., kriging has actually been quite well known to those folks and many others for decades. To suggest otherwise in a gentleman’s discussion is to invite pyrrhic defeat sans Phoenix options. It is better not to go there. More important, evaluation of BEST’s merit/contributions should be judged on what it brings to the table and not what it is perceived* to bring to the table. This includes where it is and where it is not innovative.

        For example, below in italics is a snippet from an online search: a random 70’s-vintage JGR article (Volume 80, 1975) that I turned up in less than a minute. The literature is laden** with articles of that vintage and earlier.

        For the record, all I am trying to get straight in peoples’ minds here is the simple fact that kriging is not new. Don’t be distracted by the bauble: kick the geostatistical tires, and remember that BEST’s value drops once you drive it off of the Berkeley lot. (We have to live with it.) A deeper point is that if you want to assess the work, then one perspective is to obtain reviews by qualified practitioners who are comfortable in this established niche. Don’t speculate and/or buy in to speculation. [Maybe such reviews will be part of an entire BEST public record some day. This would include JGR and G&G reviewer comments.]

        The BEST work [choose one answer, a-h]:

        a.) does have merit
        b.) has been and will continue to be hyped some
        c.) has some loose threads/odd ends in regard to ‘published’ data, theoretical development, implementation, and documentation
        d.) has promise to improve estimation of global temperature(s)
        e.) has promise to improve understanding of errors in the estimation of temperature
        f.) is a work in progress, so despite the hype some slack* is warranted
        g.) is not as good as sliced bread
        h.) all of the above

        Note that my comment above mentions even earlier work in the field. HTH

        [Note to JC – can’t leave the poppy fields, right now.]
        _____
        * Perceptions are shaped both by the work/papers and by comment/review. Not casting aspersions in any one direction, just noting that perception is important and has many contributors.
        ** Well, has quite a few.

        – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
        Now, the 1975 JGR snippet:

        Comments on ‘Optimal contour mapping using universal kriging’ by Ricardo A. Olea
        Journal of Geophysical Research
        Volume 80, Issue 5, January 1975, Pages: 832–834, Hiroshi Akima

    • Mr. Mosher:

      Your bit –> “It would appear that Kip hansen wants to engage in ‘re defining the peer reviewed literature’ Gosh, You actually get to see skeptics doing the very thing that Jones was criticized for.” is not professional.

      You and the rest of the crew at BEST must have known that the decision to publish your paper in SciTechnol’s as-yet-unheard-of journal, Geoinformatics and Geostatistics, would raise questions, and it has…..There is no sense pretending to be upset by something you expected all along, or to try and blame the questioners for having those questions. It was your (BEST’s) decision as you have explained. Striking out at those who question the decision is uncalled for.

      I asked the questions plainly and outright so that you (here representing BEST) would have a chance to publicly answer them. I asked politely, and for the most part, you have answered politely (with the exception of this lapse).

      It is nothing new to have a paper questioned based on where it is published….this happens every day, and it is sometimes perfectly valid. Quite frankly, not all journals are equally trusted to perform rigorous peer-review or trusted to utilize peer-reviewers of the same quality. They obviously don’t, and they can’t; it is not anyone’s opinion — simply an acknowledged fact in the scientific world.

      Time will tell if this decision of yours works for you or not. Certainly not up to me.

      • Steven Mosher

        Kip. I’m not upset. I’m amused. Amused because skeptics have responded as predicted.

      • Steven, what will the climate be on Feb. 2nd, 2013?

      • Steven Mosher

        Tom.
        the climate, technically speaking, does not exist. The results paper gives you an estimate of the past. It isn’t a forecast. If you want to use it to make a forecast or a prediction then you will have to make various assumptions. A naive prediction would be to say that Feb 2013 would be like the average Feb plus a boatload of error. Feb 2033? Add .3C, give or take. Feb 2103? Add 1-3C.

      • Yes, I see now.

      • Steve, you say, “Kip. I’m not upset. im amused. amused because skeptics have responded as predicted.” Steve, given your history in climate data issues, your level of certainty is only partially justified. Do you understand that CAWG skeptics are equally amused because you responded exactly as predicted?

      • Bob.

        ‘Do you understand that CAWG skeptics are equally amused because you responded exactly as predicted.”

        yes, I bet that each and every one of them said “wait! if we move the goal posts on peer review, Mosher will throw the ‘redefining peer reviewed literature’ quote at us.” Yup, I believe each and every one of them saw that coming and stepped on that rake.

      • Skeptics are treating you better than the Team has. They like you less than they like Muller. They stuffed your paper and you ended up with the least reputable journal that you could have found. Aren’t there other geostats journals, Steven? Do I have to Google it for you? Why don’t you take this up with the boys on realclimate? I bet they would find you amusing and have quite a bit of fun, at your expense.

      • @Tom: Steven, what will the climate be on Feb. 2nd, 2013?

        At latitude 86 N, very cold. More so than in front of an average fireplace in Darwin. Or on Venus at any latitude.

        Strange question.

      • Mr. Mosher — Will BEST agree to share the anonymous reviewers comments with the world? Will you put them up on the BEST site along with the final published version of your paper?

        Such an action might help support your claims of three independent truly peer-reviewers and that “.. I looked at all the reviewer comments and made sure that the valid questions got answered and that errors got corrected. “

      • Kip, again, why does that matter? If they were to do that, what’s to stop you labelling it Pal Review and then demanding names, and so on and so on? We get it: you don’t trust climate scientists with a different opinion to yours. Let’s move on to something more broadly enlightening.

  17. WUWT has a field day with the BEST publication. Couldn’t miss the opportunity to comment on the BEST ( c ) omics publication:
    http://www.vukcevic.talktalk.net/Bestomics.htm

    • Milivoje, you have clearly missed your calling. There is way more money to be made in good cartooning (emphasis on “good”) than in supporting the Koch brothers for free.

  18. David Springer

    “The primary insight of the H2012 remains, in a warming climate we expect to see more warm extremes.”

    Wow. Who’d a thunk something so counter-intuitive? You guys are tits.

    • See Eli’s first point

      First, if there is an increasing/decreasing linear trend on a noisy series, the probability of reaching new maxima/minima increases with time.

      Not so obvious to all
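      [Ed.: Eli’s first point is easy to check by simulation. A hypothetical sketch — the trend size, series length, and noise model are all arbitrary choices, not anything from H2012 — counting how often a new record maximum is set in the later half of a noisy series, with and without a linear trend:]

      ```python
      import random

      def late_record_rate(trend, n=100, sims=2000, seed=1):
          """Average number of record maxima set in the second half of a series
          x_t = trend * t + noise, over many simulated series."""
          rng = random.Random(seed)
          late = 0
          for _ in range(sims):
              best = float("-inf")
              for t in range(n):
                  x = trend * t + rng.gauss(0.0, 1.0)
                  if x > best:
                      best = x
                      if t >= n // 2:  # a record set in the later half
                          late += 1
          return late / sims

      print(late_record_rate(0.0))   # no trend: late records are rare
      print(late_record_rate(0.05))  # with a trend: markedly more late records
      ```

      Without a trend, late records become ever rarer (for an i.i.d. series the chance of a record at step t is 1/(t+1)); with a trend, records keep arriving at a roughly steady rate.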

    • Steven Mosher

      Yes, it’s trivially true. The important bit is that H2012 doesn’t attempt to establish that the distribution has widened, and that using an anomaly step in your method can mislead you and give a result that is a methodological artifact. The K-S test on daily data would probably be better suited. Just a hunch.

      • I’m not sure H2012 did that trivial thing. What it showed was that if you decadally average the past half dozen decades, then the last three decades have more warm extremes. Those three decades happen to be the three warmest of the six, but there are other features of those decades that might explain the difference in heat extremes. Hansen’s simple decadal averaging has actually hidden all opportunity to identify what’s causing those extremes and allows the simple acceptance of assumptions.

    • Yet the skeptics were howling when that came out. It is so simple and obvious, it must be a trick of some kind. It can’t be true. It was just a reflex reaction to a Hansen paper.

      • Steven Mosher

        My reflex reaction was “he didn’t prove that the distribution widened.” So, like, duh, Mosher, he wasn’t trying to prove that. Once that light bulb went on, the whole discussion became much more fruitful.

      • They only said it, and had a figure showing that it was dependent on the choice of base period, and then explained why the base period they chose, 1951-80, was the appropriate one. So no, they didn’t “prove” it. No way, but they did demonstrate that this happened given the surface temperature records from 1951 to 2011.

      • IMO, Zeke’s analysis is the appropriate one to make here.

        Whatever Hansen may or may not have said, this appears to be his position now:

        One must be careful not to misinterpret the Hansen et al. [2012] paper as indicating a change (broadening) of the variance of the temperature distribution with time. According to the authors of that paper [private communication], it was not their intent to study that issue and their paper did not address it, although it has been misinterpreted to do so by some who did not closely examine their mathematical approach.

        While Eli can argue what the paper said, I don’t think he should try and argue with the author’s intent, nor with the author’s opinion about what their paper says in this matter in light of Zeke’s analysis.

        Rather than argue over issues of literary interpretation and authors’ intent, where the original author appears to disagree with Eli, I’d suggest Eli spend time looking at substantive issues.

        One of these IMO is whether the distributions in Fig. 7 are varying in a statistically significant fashion. Zeke says they don’t, but he doesn’t put a number to it. I would have been interested in seeing the Kolmogorov–Smirnov test applied here, for example, as a more quantitative way of looking at his results.

      • Carrick, when time permits I was going to look at it using K-S.
        My thoughts (per some of our discussions) would be to look at long series of daily data, tmin and tmax.
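The two-sample K-S statistic discussed here is simple to compute from scratch. A minimal pure-Python sketch on synthetic daily anomalies follows; the series, the 3 C spread, and the 0.5 C mean shift are invented for illustration, not drawn from any station record:

```python
import math
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two samples' empirical CDFs."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] == x:  # step both CDFs past value x
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Two synthetic "decades" of daily anomalies: same spread, mean shifted 0.5 C.
random.seed(0)
decade_a = [random.gauss(0.0, 3.0) for _ in range(3650)]
decade_b = [random.gauss(0.5, 3.0) for _ in range(3650)]

d = ks_statistic(decade_a, decade_b)
# Rough large-sample 5% critical value for two equal samples of size n.
d_crit = 1.36 * math.sqrt(2.0 / 3650)
print(d, d_crit)
```

Comparing the full daily distributions this way sidesteps the anomaly-baseline step that, per the memos, can manufacture apparent widening.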

      • Carrick,

        There might still be significant differences, but they were orders of magnitude smaller than the variance changes using the original method. I didn’t do a rigorous test, but merely stated that “If we take this approach for every decade, we get the frequency density plots shown below in Figure 7, which shows little change in variance over time.”

        To be perfectly honest, this was mostly just a fun exercise for me to learn how to properly create frequency density plots :-p

      • Carrick, Eli is not arguing those points (although he may take it up with HSR) but with those who said that he misstated what was in the HSR paper, including Mosher and manacker. The text speaks for itself. You are correct.

  19. David Springer

    The Skeptical Warmist | January 20, 2013 at 12:16 pm |

    “Amazing how this planet is so connected that heat in one area/region can lead to cold in another, isn’t it?”

    Not at all. If the energy entering and leaving the system is constant then you’d expect exactly that. Duh.

  20. David Springer

    Steven Mosher | January 20, 2013 at 11:49 am |

    ” I hesitate to give a date on the release cause there are a couple other balls in the air.”

    Muller doing handstands again?

    • Steven Mosher

      No. There are basically two people who can do the work and they are overloaded to the max. The work is done, a write-up takes time, and the other balls in the air are
      1. the data paper
      2. the monthly update process
      3. end user support
      4. Station data and charts showing scalpeled data.

      So, oceans is pretty much done. Folks who want to roll their own have everything they need. Gridded land, pick the gridded ocean you like and
      combine. Combining requires you to:

      A) align the datasets if you work in anomaly
      B) decide how to handle ‘coastal’ grid cells.

      Not a brain buster.
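A toy sketch of steps A and B, assuming anomalies stored per grid cell in plain dicts. The cell layout, values, baseline offset, and land fractions here are all made up for illustration, not Berkeley Earth’s actual formats:

```python
def align(anomalies, baseline_offset):
    """Step A: shift one dataset's anomalies by a constant so both
    datasets share the same base period."""
    return {cell: v - baseline_offset for cell, v in anomalies.items()}

def combine(land, ocean, land_frac):
    """Step B: blend per grid cell.  Coastal cells are weighted by land
    fraction; pure land or ocean cells pass through unchanged."""
    out = {}
    for cell in set(land) | set(ocean):
        f = land_frac.get(cell, 1.0 if cell in land else 0.0)
        out[cell] = f * land.get(cell, 0.0) + (1.0 - f) * ocean.get(cell, 0.0)
    return out

# Toy example: one land cell, one coastal cell (40% land), one ocean cell.
land = {(0, 0): 1.2, (0, 1): 0.8}
ocean = align({(0, 1): 0.3, (0, 2): 0.4}, 0.1)
merged = combine(land, ocean, land_frac={(0, 1): 0.4})
print(merged)
```

The land-fraction weighting is one common way to handle the coastal cells mentioned in step B; area weighting during the global average would be a separate step.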

      • As erstwhile Vice President Henry Wallace is reported to have said about his boss, FDR: “he could keep all the balls in the air without losing his own.”

        (And that in a wheelchair to boot.)

        Max

      • Steven Mosher

        David, there is a difference between getting a joke and refusing to acknowledge it. You’ve never been to tailhook.

      • @Steven Mosher: You’ve never been to tailhook.

        Hands up everyone here who’s been to tailhook.

        What floor of the Las Vegas Hilton was your room on, Steven?

  21. David Springer

    Steven Mosher | January 20, 2013 at 12:01 pm |

    “Yes gary, the changes in the arctic predictably lead to extreme snowfall in the NH. Warmer weather in one location can lead to increased winter snowfall in another location.”

    No chit? Amazing. Next NH glaciers start growing, albedo increases, and before you can say Woolly Mammoth Steak the interglacial period ends. Didn’t anybody here read Clan of the Cave Bear fercrisakes?

    • Steven Mosher

      Don’t forget the spring. The increased snow in the winter (moist air hitting a region cold enough to produce snow) melts earlier in the spring.
      It’s not that hard to understand. As a theory, AGW is easy to defeat if you ignore what it says.

      • David Springer

        Increased snow melts earlier in the spring.

        How does that work, exactly?

      • David Springer

        Clutching for memories of my childhood in upstate New York, I seem to recall that the deeper snowdrifts hung around longer in the spring than the shallower snowdrifts.

        I could be wrong. I must be wrong because Steven Mosher tells me that more snow melts sooner.

        I’d still like to know exactly how that works.

        This thread is such a target rich environment I’m gonna get myself on moderation if I don’t byte my tongue soon. It’ll have to be hard enough to draw blood I’m afraid.

      • That would be because the melting season starts earlier.

      • David Springer

        Hey Steve, if Sarah Palin can see Russia you can probably see Mt. Shasta from where you are, right?

        http://usatoday30.usatoday.com/tech/science/environment/2008-07-08-mt-shasta-growing-glaciers_N.htm

        Explain please. Maybe Jim D can help you out as he seems to have a firm grasp on glacial dynamics.

      • k scott denison

        Jim D | January 20, 2013 at 6:04 pm |
        That would be because the melting season starts earlier.
        =======

        And the data showing that the melting season starts earlier when the snow is heavier is where?

      • The whole article is about how exceptional Shasta is compared to the normal situation, which is perhaps why it makes news. I could point to this.
        http://techonomy.com/2012/12/climate-change-threatens-americas-ski-resorts/

      • Mt. Shasta?

        On a clear day (but there aren’t any).

        Try Mt. Diablo.

        And, yes, AGW is much harder to defeat than CAGW.

        (I wouldn’t try it.)

      • ksd, the melting season starts when daily temperatures rise above 0 C. In a warming climate this shifts earlier, not surprisingly.

      • David Springer

        Dear Jim D whoever you are,

        You must have been playing hooky from climatology class on the day they explained the asymmetrical nature of global warming. NH winter temperatures rise more than summer temperatures. Combined with moister air, that makes greater snowpack in winter and less melt in summer. Hence Mt. Shasta, the canary in the coal mine. Milankovitch cycles do exactly the same thing on the way to ending interglacials. Orbital mechanics line up such that more sunlight is received in NH winter and less in NH summer. Net annual insolation remains unchanged, yet the end result is glaciers a mile thick approaching Washington, DC, which, all things considered, is an improvement.

        This enigma gives people like BBD, and probably you too, a serious case of cognitive dissonance because ice ages begin and end with no change in forcing, producing an infinite climate sensitivity number.

        Pekka JC SNIP Pirila, remarkably enough, was able to think long enough and hard enough to figure out that climate sensitivity is non-linear. How much time it takes you to realize that is anyone’s guess but I’m going to guess ‘never’.

      • k scott denison

        The Rutgers data shows that NH snow cover has decreased slightly over the past several years for spring but stayed the same or grew slightly for winter.

        Don’t know if this is “evidence” for Mosher’s statement or not.

        Max

      • David Springer

        k scott denison | January 20, 2013 at 6:36 pm |

        “And the data showing that the melting season starts earlier when the snow is heavier is where?”

        Jim is dead, Scotty. Please don’t spoil the fun I’m having making the corpse twitch by asking the hardest questions right off the bat.

      • Jim D

        “Climate Change Threatens America’s Ski Resorts”

        Ho, ho. What garbage.

        We had similar articles here in Switzerland around 2005/2006.

        Since then we’ve had more snow than ever, with ski seasons starting earlier and ending later.

        Don’t believe everything you read in the papers, Jim (especially if it has anything to do with “climate change”).

        Max

      • David Springer

        Jim D | January 20, 2013 at 6:39 pm |

        “The whole article is about how exceptional Shasta is compared to the normal situation, which is perhaps why it makes news. I could point to this.
        http://techonomy.com/2012/12/climate-change-threatens-americas-ski-resorts/

        The exception proves the rule, Jim. Hooky again?

        So the article you quote is saying “Children won’t remember what skiing is”.

        That sounds vaguely familiar somehow. ;-)

        I’m afraid what children won’t remember is what science is when all they’re taught is dogma.

      • David Springer

        Man, you guys are brutal. People like you go fishing for brook trout with salt water gear and 300lb test. There’s no sport in it.

        manacker | January 20, 2013 at 6:58 pm |

        Jim D

        “Climate Change Threatens America’s Ski Resorts”

        Ho, ho. What garbage.

        We had similar articles here in Switzerland around 2005/2006.

        Since then we’ve had more snow than ever, with ski seasons starting earlier and ending later.

        Don’t believe everything you read in the papers, Jim (especially if it has anything to do with “climate change”).

        Max

      • I think some of the skeptics are pretending not to understand why warmer springs lead to earlier snow melts. Throwing in glaciers and ice ages is some kind of misdirection game.

      • Are you saying the Met Office ignored the AGW theory wrt 2012?

      • k scott denison

        Delayed reaction but:

        David Springer | January 20, 2013 at 6:53 pm |
        k scott denison | January 20, 2013 at 6:36 pm |

        “And the data showing that the melting season starts earlier when the snow is heavier is where?”

        Jim is dead, Scotty. Please don’t spoil the fun I’m having making the corpse twitch by asking the hardest questions right off the bat.
        ===================
        made me spit my coffee this morning! Thanks for the laugh David!

  22. David Springer

    Geostatistics and Geoinformatics

    I wasn’t aware a publication could have a negative impact factor.

    Just goes ta show ya learn something new every day!

    Congratulations boys.

  23. Paul matthews

    Yes to what David Springer says.

    I would like to see the reviewer comments from the climate journals that rejected the paper before it was submitted to this brand new commercial engineering journal. As far as I can see the journal has published no papers. There are two in press: this one and an introduction to the journal by one of the editors!

    • David Springer

      In the beginning, in the recesses of deep time when scitechnol.com was created (2011):

      http://web.archive.org/web/20110201001202/http://www.scitechnol.com/

      The home page looked like a three ring banner ad circus. Ringling Bros. would have been envious. The title of the page is:

      OMICS Publishing Group: An Open Access Publisher

      Evidently a typo. Should be COMICS.

      Seriously Mosher. A circus. You fell in with a bunch of clowns. I’m not sure if you belong with them or not at this point. My magic 8-ball says “All indications point to YES”

  24. k scott denison

    Noted that WUWT has a report about a NOAA experiment showing that siting affects nighttime temperatures… always warmer near buildings regardless of wind direction.

    Wonder how this affects the BEST data?

    http://wattsupwiththat.com/2013/01/20/noaa-establishes-a-fact-about-station-siting-nighttime-temperatures-are-indeed-higher-closer-to-the-laboratory/

    • Steven Mosher

      In the UHI paper we only used stations that were free of urban influence to a 10 km radius, so effectively we used no stations close to buildings.

      Trends don’t change.

      • k scott denison

        Gee, the experiment was conducted in what appears to be a rural location as well. In fact, the multiple points are sitting in the middle of a large open field. And no urban sprawl in sight.

        Have you looked at the site? I’m guessing not.

      • k scott denison

        Oh, and ps, did you adjust the nighttime (Tmin) for all sites near even a one story building? Again, I’ll guess not.

      • k scott denison

        From the abstract:

        “To provide some physical basis for the ongoing controversy focused on the U.S. surface temperature record, an experiment is being performed to evaluate the effects of artificial heat sources such as buildings and parking lots on air temperature. Air temperature measurements within a grassy field, located at varying distances from artificial heat sources at the edge of the field, are being recorded using both the NOAA US Climate Reference Network methodology and the National Weather Service Maximum Minimum Temperature Sensor system. The effects of the roadways and buildings are quantified by comparing the air temperature measured close to the artificial heat sources to the air temperature measured well-within the grassy field, over 200 m downwind of the artificial heat sources.”

        Yup, sounds positively urban.

      • Steven Mosher

        k scott, yes; in fact I was the person who alerted Anthony to the experiment long ago.

        It’s part of the work I am doing on CRN and MODIS LST data, looking at the effects of impervious surfaces close to stations.

        See my blog.

      • Steven Mosher

        scott, you don’t get it. The rural stations we used had no buildings and no people within 10 km.

        We also looked at stations that were 25 km away from any built structure.

        Next.

      • k scott denison

        No buildings and no people within 10 km… interesting. Thanks for the info. Can you point me to a list of these stations so I can take a look via Google? Thanks in advance.

      • Sure, K scott. Go download MODIS urban land cover; you will have to request access from the PI. Then get our station data, and from there it’s a piece of cake. To do it really right I would suggest using NLCD 30-meter data if you can’t get MODIS. That requires more work, but I show how to do it on my blog.
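The distance screen itself is the easy part. A rough sketch with a haversine great-circle distance and hypothetical station and urban-pixel coordinates; the real MOD500 or NLCD workflow involves far more data handling than this:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def very_rural(stations, urban_cells, min_km=10.0):
    """Keep only stations whose nearest urban cell is at least min_km away."""
    return [s for s in stations
            if all(haversine_km(s[0], s[1], u[0], u[1]) >= min_km
                   for u in urban_cells)]

# Hypothetical coordinates: one station sits on an urban pixel, one is
# roughly 22 km east of it along the equator.
stations = [(0.0, 0.0), (0.0, 0.2)]
urban = [(0.0, 0.0)]
kept = very_rural(stations, urban, min_km=10.0)
print(kept)
```

In practice the urban-cell list would come from the MOD500 raster (or NLCD impervious-surface data), so the brute-force nearest-neighbor scan above would need a spatial index to scale.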

      • Berényi Péter

        Which version of the UHI paper are we talking about? This one or the
        June 26 revised version?

        Depending on the source, a station is considered “very rural” if its distance from a MOD500 urban region is at least 0.1 degree (11.1 km) or 10/25 km.

        Anyway, presence or absence of buildings is not directly involved in the definition. Only if we assume there is no building farther from any MOD500 urban region than 10 km/11.1 km/25 km can we conclude no stations were used close to buildings.

        Is that assumption reasonable? What fraction of land surface is “very rural” under each definition? What percentage of the world population lives in that area? Are those people all homeless?

        In the year 2000 world population was 6.09 billion and 2.83 billion lived in areas considered “urban” by MOD500. That leaves us with 3.26 billion who should either live closer than 25 km to an urban pixel or be homeless. Is that the case?

        If not, how can we make sure there were no buildings in the vicinity of any “very rural” station?

  25. I’m reading ‘Clan of the Cave Bear’ Bk II, ‘The Valley of the Horses’
    by Jean Auel, as u write, David Springer. Ayla, cast out by the clan, travelling alone in a world of glacial cold.

    • David Springer

      Been there. Read that. I have this image in my right brain of Ayla as a real fur coat covered hottie but the left side informs the right that she must’ve been a lice ridden stinking scraggle toothed hoe bag. Sucks to be me sometimes.

    • Beth

      Jean Auel’s “Ayla” saga is a good read (the author obviously did some “homework” and added in a lot of imagination). The Neanderthal part (book #1) was the most interesting for me, but I enjoyed them all.

      Springer’s left brain has got it wrong: this was a cool chick (in her prime).

      What the hell, she domesticated both dogs and horses (plus a lion – but that one was just temporary).

      I’m waiting for the next book – maybe she’ll invent the wheel (or the precursor of the internal combustion engine) – who knows?

      Max

    • David Springer

      I’m SO gonna get moderated when momma gets home. LOL

  26. Fictional lice-nce, they call it.

  27. Max,
    The first book was waaaaay the best. Strong research and
    imaginative insights into Neanderthal culture. Later books got
    too indulgent, one woman discovering flint fire lighting, not ter
    mention domesticating animals … and Jondalar, spunk as he
    was, was a little too PC :: grin::
    Beth.

    • Beth

      Jondalar didn’t leave that much of an impression on me – Ayla was definitely the smarts (and looks) of the operation.

      How Springer’s left brain could imagine her as a lice-covered hag escapes me (must be something Freudian there – a childhood trauma?).

      Max

  28. I was hoping that this latest post would deal with the admissions by both Berkeley Earth and Hansen that temperature change had been at a ‘standstill’ or ‘pause’ for the last decade. However, the present pause would be entirely lost within a thousand-year simulation such as was used for the comparison.

    As for the comparison, what can one say when we have a paper from just one of the protagonists? Of course climate is all about averaging, because around the world at a particular place temperature can vary by 30C in a single day. The first thing to agree on is a definition of climate. Climate is simply smoothed weather, but there are many different smoothing formulas and periods. Because of this variability I suspect Judith has decided not to take a lead in this. It starts at the place of measurement: where integrating temperature over the day could provide an exact measure, they take the mean of the max. and min. and call that the average. This is the first stage of smoothing, but it also introduces sampling error. I personally favour 11-year central averaging, with the disadvantage that you are always 5 1/2 years behind today’s true average, but it is easy to correct for that.
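The 11-year central averaging described above amounts to a centered moving average. A minimal sketch, with a hypothetical annual series standing in for real data:

```python
def central_average(series, window=11):
    """Centered (symmetric) moving average: each point is the mean of
    `window` values around it.  The last window // 2 points cannot be
    computed yet, which is the '5 1/2 years behind' lag noted above."""
    half = window // 2
    out = [None] * len(series)
    for i in range(half, len(series) - half):
        out[i] = sum(series[i - half:i + half + 1]) / window
    return out

# A purely linear "trend" passes through unchanged wherever the
# average is defined; only the ends are lost.
annual = [0.01 * year for year in range(30)]
smoothed = central_average(annual)
```

An 11-point window on annual data spans one solar cycle, which is the stated rationale for the choice; on monthly data the window would be 132 points instead.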

    • David Springer

      I suspect Judith would just as soon place as much distance between herself and Richard Muller as possible. Speaking of lying down with dogs and getting up with fleas, she should have known better than to get in cahoots with a UC Berkeley clown. Anthony Watts should have known better too. The day the BEST project was announced some of us warned Watts, but he wouldn’t listen. Had to learn the hard way he did. The force is weak in that one.

    • @Alexander Biggs: I was hoping that this latest post would deal with the admissions by both Berkeley Earth and Hansen that temperature change had been at ‘standstill’ or ‘pause’ for the last decade.

      Has either Muller or Hansen evaluated the three decades prior to 2000-2010 for “standstill?”

      The BEST data shows that all four of those decades have individually been at a standstill, as can be seen on the left side of this graph. All four decades look the same.

      The right side shows the same data collected in one four-decade batch. Again all four decades look the same.

      Certainly the past decade has been “at a standstill,” as have each of the preceding decades. Zeno observed the same thing millennia ago when he pointed out that in an instant an arrow travels no distance.

      In climate terms Zeno would judge a decade to be an instant. How is this news?

      • David Springer

        Actually 1990-2000 wasn’t a standstill. Maybe it looks that way to the untrained eye?

        Did you like totally miss the late 1960’s and early 1970’s when global cooling was the big scare? It wasn’t as engaging with the public back then of course, as climate disaster had to compete with nuclear armageddon and the My Lai massacre and stuff for the hearts and minds of liberal nutcases like you. Tell us your story about how you dodged the draft, Vaughn? Did you bail out like Clinton in school or were you more creative like Bush in the National Guard?

      • Vaughan,

        That’s a little too cute (and I don’t mean the little trick of using 11 years for each decade to create a better-looking effect). There is no need to arm-wave and say a decade is an instant. A little exercise for you: assuming global temperatures are an AR process with a 0.2 degree C trend and a standard deviation based on real-world observations, what is the probability that we would observe 16 years without statistically significant warming?
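The exercise can be brute-forced by Monte Carlo. A sketch under assumed parameters: AR(1) with phi = 0.5, sigma = 0.1 C, and a 0.02 C/yr trend. All three values are placeholders of mine, not anything specified above, and the significance test here naively ignores autocorrelation:

```python
import random

def ols_trend_t(y):
    """OLS slope of y against time 0..n-1, and the slope's t-statistic."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (y[i] - ybar) for i in range(n)) / sxx
    resid = [y[i] - ybar - slope * (i - xbar) for i in range(n)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, slope / se if se > 0 else float("inf")

def simulate(n_years=16, trend=0.02, phi=0.5, sigma=0.1,
             trials=2000, seed=1):
    """Fraction of AR(1)-plus-trend realizations whose n-year OLS trend
    is insignificant at roughly the 5% level (|t| < 2)."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        x = 0.0
        y = []
        for t in range(n_years):
            x = phi * x + rng.gauss(0.0, sigma)  # AR(1) noise
            y.append(trend * t + x)
        _, tstat = ols_trend_t(y)
        if abs(tstat) < 2.0:
            misses += 1
    return misses / trials

p = simulate()
print(p)
```

A proper treatment would use an autocorrelation-corrected standard error (as in Santer et al., mentioned below in the thread), which makes 16-year "pauses" even less surprising.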

      • Thanks, Vaughan Pratt, but no, they don’t look the same. As Springer says, they look the same to the ‘untrained eye’. A problem we have is there is no agreed definition of climate. People tend to think of climate as a constant, but even in a constant climate temperature can change by 30C in a day. Climate has to be an average and the only question is: over what period? The period is not critical so long as it is long enough to eliminate the random fluctuations. I favour 11-year central averaging to cancel sunspot effects.

      • Hell Zeke did it. (0-0) nothing wrong with that.

      • @DS: Actually 1990-2000 wasn’t a standstill. Maybe it looks that way to the untrained eye?

        Excellent point. It was an odd decade (one whose years have a third digit which is odd). Why should that be relevant? Well, ever since 1870, when saloons were unlicensed and pantaloons therein licentious, the even decades of HadCRUT3 have invariably risen slower than the odd decades on each side, as judged by trend lines fitted to each of the 14 decades since 1870 (woodfortrees can supply those). This has been one of the most predictable things about the otherwise unpredictable decadal climate.

        @DS: Tell us your story about how you dodged the draft, Vaughn? Did you bail out like Clinton in school or were you more creative like Bush in the National Guard?

        Even more creative than that, David. Clairvoyantly forecasting the draft years in advance, I arranged to be born overseas (from your perspective) in a HUAC, a Horribly Unamerican Australian Community whose ancestors had never set foot in the Americas. That did the trick quite nicely, some might claim unintentionally.

        @cynp: (and I don’t mean the little trick of using 11 years for each decade to create a better looking effect).

        Huh? I used exactly 120 monthly datapoints in each decade, totaling 480 datapoints. The right hand side uses the same 480 datapoints. How does that come to 11 years per decade?

        Try plotting any dataset for the period From: 1980 To: 1981 at woodfortrees.org. According to you that should be 24 months. If you click on Raw Data at the bottom you’ll see that it’s actually 12 months. This is because 1980 and 1981 mean January 1980 and January 1981 respectively (for July 1980 use 1980.5), and while From: 1980 is inclusive of January 1980, To: 1981 is exclusive of the month January 1981. I’m using the exact same definitions.
        (WoodForTrees.org was up earlier today but seems to be down just now.)

        I ‘fess up to little tricks when I use them, such as my ingenious little trick above to avoid the draft. Sadly I have no 11-year trick to ‘fess up to in the first place. I wish it had been otherwise so I could have congratulated you on spotting it. (Congratulations David Springer.)

        @David Springer: Austria’s great. I’ve been to Austria.

        Congratulations again, David Springer. Had you said Austria’s the BEST you’d even have been on topic.

        @cynp: A little exercise for you. Assuming global temperatures are an AR process with a 0.2 degree C trend and a standard deviation based off real world observations, what is the probability that we would observe 16 years without statistically significant warming?

        If you accept the RSS MSU records since 1979 as “real world climate,” someone more ignorant about climate than I am worked this out on February 10, 2010. His result, more than a year before the Santer et al paper to essentially the same effect, was that whereas 10 years barely hits two sigma, 16 years reaches several sigma. (But perhaps that particular little calculation was what you were referring to.)

        That ignoramus knew nothing, nothing I tell you, about climate. Though he’d taken honours statistics in college he was even more ignorant about climate than I am today, however impossible that may seem.

        @Alexander Biggs: I favour 11 year central averaging to cancel sunspot effects.

        Having come out of my millikelvin post badly scathed, I’m with you 100% on that. 11-year averaging was the basis for my 2011 AGU poster, which I ill-advisedly cranked up to double that period for my 2012 AGU poster last month. The rewrite I’m now working on backs off to your recommended 11-year averaging, albeit with the Greg-Goodman-approved filter F3 in place of the more simple-minded central averaging filter F1 you’re recommending, which has bad side lobes in its frequency response, as can be seen in Figure 5 of my poster.
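For reference, Goodman’s F3 is, as I understand it, a cascade of three running means whose widths shrink by a factor of about 1.3371, chosen so each box’s first zero lands near the previous box’s first side lobe; both the ratio and the width rounding below are my reading, not a definitive implementation. A sketch on made-up data:

```python
def running_mean(x, w):
    """Single centered box filter of odd width w (the F1 above); the
    half-window at each end is dropped."""
    half = w // 2
    return [sum(x[i - half:i + half + 1]) / w
            for i in range(half, len(x) - half)]

def f3(x, w=11):
    """Triple running mean: three cascaded boxes with widths w, w/1.3371,
    and w/1.3371^2 (rounded to odd), to suppress the side lobes that a
    single box filter lets through."""
    w2 = max(3, round(w / 1.3371)) | 1       # force odd width
    w3 = max(3, round(w / 1.3371 ** 2)) | 1
    return running_mean(running_mean(running_mean(x, w), w2), w3)

# A constant series passes through unchanged; only the ends are lost.
smooth = f3([2.5] * 60)
```

Each extra pass shortens the usable record further, which is the price paid for the cleaner frequency response.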

      • The graphs are of 1970-79, etc., and oddly there is a jump in temp. from 79 to 80, 89 to 90, etc. So each decade does look fairly flat, but if the 11th point were plotted then each would show a larger change. Interesting. So the person who said you plotted 11 years possibly meant the opposite: that it would have shown the increase more clearly? Anyway, it does not change your point that what is important is the long-term trend. I still am not concerned about CAGW as I don’t think the evidence for the larger temp. increases is proven, and I think humans can cope with a few degrees in 100 years. I think the next 10-15 years will be very interesting.

      • @Bill: The graphs are of 1970-79, etc. and oddly there is a jump in temp. from 79 to 80, 89 to 90, etc. So each decade does look fairly flat but if the 11th point was plotted then each would show a larger change.

        What do you mean by the “11th point,” Bill? Each of the four 120-point decades on the left is smoothed with a 3-month moving average, so that for example 1970-1980 covers the 118 months February 1970 (which averages the 3 months January-March of 1970) to November 1979. The “11th point” would therefore be December 1970.

        The omission of December 1979 and (in the next decade) January 1980, and likewise for the other interdecade jumps, does create discontinuities, though the only sizable one is at 1990, which jumps sharply from 0.425 C in November 1989 to 0.833 C in February 1990.

        @Bill: I still am not concerned about CAGW as I don’t think the evidence for the larger temp. increases is proven and I think humans can cope with a few degrees in 100 years.

        I have no quarrel with that as it’s well above my pay grade. ;)

        I think the next 10-15 years will be very interesting.

        If 2010-2020 doesn’t rise sharply I’ll have to send my understanding back to the drawing board. I expect it to be similar to both 1990-2000 and 1970-1980 on account of the 20-year cycle in the upper plot in this chart.

  29. Jim D what do yer think Tony B’s doing in Austria? Why he’s
    on a skiing holiday lol… Using up some of that oil money they
    pay him.
    (Hmm, better add *sarc.*)

    • David Springer

      Austria’s great. I’ve been to Austria. In the summer. Never seen anywhere where brassieres are so universally eschewed by so many busty blondes. My head was still bobbing up and down along the Autobahn all the way back to Munich, and it wasn’t because the BMW 730i rental car I was driving had a poor suspension, if you get my drift.

      • David

        Your remark about the Austrian babes’ boobs shows definite right-brain activity* while your left-brain vision of Ayla as a lice-infested hag is in direct inner conflict with this.

        Max

        *Maybe that’s lower-brain activity instead (blood pressure only suffices to feed one at a time, they say)

      • David Springer

        I was in Salzburg the summer of 1991 AD. Ayla was in Salzburg the summer of 20,000 BC. The crystal shops probably weren’t there back then, and just as unlikely neither were the pampered pieces of feminine pulchritude which maketh male cups everywhere runneth over.

    • Beth

      Ever since UKMO told Tony B that there would be no more snow in England (because of AGW) he has succumbed to a snow fixation, a near-neurotic condition fueled by fear of changing climate. Makes him want to slip and slide on the stuff. Good news is it only tortures him in the winter months.

      Max

    • David Springer

      Who’s gonna be mean enough to ask Moshpup how much he got paid for writing this article?

      I can hear Muller now: “Put it on my tab, Steve. I love you, man. You know that, right?”

      • Steven Mosher

        Sorry. I don’t get paid for any of the work I do for Berkeley. I probably started working on their stuff months before I ever got invited to sit in on the staff meeting. For now it’s a labor of love. Weird hobby, I know.

  30. Berényi Péter

    “The Methods paper and UHI paper are undergoing final review prior to being submitted.”

    Dr. Curry, is this one the UHI paper you are referring to?

    • Berényi Péter

      I hope it’s not that one, which concluded about the UHI:

      The small size, and its negative sign, supports the key conclusion of prior groups that urban warming does not unduly bias estimates of recent global temperature change.

      “Negative sign?”

      Hmmm…If my study ended up with a negative UHI effect (after all the studies out there showing just the opposite on a local basis), I’d toss the study results, rather than drawing any conclusions from them.

      Wouldn’t you?

      Max

      • On a local basis it’s positive. But when you average it worldwide and squint at it properly, it becomes negative.

      • k scott denison

        Unfortunately, max, it is that selfsame paper. Negative sign, wow. Are there no truly inquisitive minds in *climate science*?

      • Naw, can’t be the same papers ’cause Mosher just tol’ us they dinna use no stinkin’ stations within 10K of a human.

      • Simple solution: Export the Urban Coolth to the Countryside. Don’t let any of it traipse over the ocean, though, or Katie bar the door.
        ===========

    • Don Monfort

      Guess that’s a variation on “two negatives make a positive” (turned around in climatology fashion).

      Max

  31. Max,

    I could do with some of that snow torture; only visited Austria in hot,
    hot summer. Think that Ayla was an Austrian chick, by the way.
    ‘Ayla from Austria’. :) Blonde ter get the most out of faint sun.

    Beth

    • David Springer

      You in Austria in the summer (ex. right-brain image):

      http://images.google.com/images?sourceid=navclient&ie=UTF-8&rlz=1T4LENN_enUS461US461&q=beth+cooper

      Please don’t spoil it!

    • Beth

      Being next door (as it were) to Austria we’ve gotten our share of the white stuff so far this season, so I’m sure Tony can satisfy his snow fixation while he’s there.

      In the past, I usually satisfied mine with the first shovel load (but Ah don’ do no shovel work ennymo’, boss).

      Max

    • Beth,
      Yep, just like the last big cold spell, the next is going to shake things up quite a bit. Especially since we are squandering ALL of our resources (physical and intellectual) for the hysteria du jour.
      Skin color bias, I believe, stems in part from a behavior bias that all life practices in the day-to-day course of survival (if you don’t bother me, I won’t bother you).
      The Sun that has helped provide a climate does not make that distinction. Life adapts or it does not. Ask 99% of the species that existed before us.

      • David Springer

        I never seen one dog hate or shun another because of different coat color. And they say we’re smarter than them. I wonder.

    • @BC: Blonde ter get the most out of faint sun.

      1. Shouldn’t blondes have a higher albedo than brunettes? In which case wouldn’t blondes get less out of a faint sun?

      In a hot climate I’d buy a white car, in a cold climate a black one. Why does pigment work the other way?

      2. Possible answer: unlike your car when all the shady spots are taken, you can always step into the shade, and then a white pigment will radiate less heat than a black one. In a desert with no other shade, the burnous can provide the shade.

      Pigment serves to regulate cooling, not to protect against the Sun, for which other remedies are available for people, if not for cars.

      Buy that? Then I have a bridge to sell you.

      3. Dark pigment blocks sunlight from reaching the part of your skin where it activates vitamin D production. Light pigment lets more of the limited sunlight through for that purpose.

      While 3 is the official answer in every accredited education program, I don’t recall anyone ever having even hinted at 2.

      Yet both seem perfectly plausible.

      Once both hypotheses are in play, how would you go about choosing between them?

      Maybe the vitamin D answer is 75% and the other 25%. Or vice versa. Beats me.

      • David Springer

        It’s for the same reason polar bears are white. Like duh.

      • Good point. I forgot

        4. Camouflage.

        In the case of polar bears I’d guess the competition would be mainly between 2 and 4 since for 3 polar bear hair has been shown to be essentially opaque to UV, which is the part of the spectrum relevant to vitamin D. Polar bear skin is black but the “white” (more precisely translucent like snowflakes) hair presumably acts simultaneously as an excellent insulator and as camouflage allowing them to get closer to their prey.

  32. When you consider that the effect of increases in human-caused carbon dioxide on global warming is virtually zero, we can only stare with wonder at the amazing sensitivity of the government’s global warming models. Keep up the good work.

    Let us assume an increase in the concentration of atmospheric CO2 as an unavoidable consequence of modernity. Let’s work with 300 going to 400 ppm because of industrialization. A 33% increase. Sounds like a big jump, right? But, in going from 0.03 to 0.04%, I guess we’d all have to be idiots to believe that extra 0.01% of CO2 in the air really makes any difference.

    • David Springer

      Wagathon | January 20, 2013 at 8:08 pm | Reply

      “I guess we’d all have to be idiots to believe that extra 0.01% of CO2 in the air really makes any difference.”

      Estimate, if you will, how many PhD man-hours alone it took over the past 30 years to make a half-assed case for it. A model, with poor and worsening skill as shorter term projections fail (cough cough pause cough cough), is as good as it gets. So much for Feynman famously saying if you can’t get a cocktail waitress to believe your theory you need a new theory. People like Richard Lindzen, Roy Spencer, and John Christy just to name a few are pretty far from cocktail waitresses too. I’m still deciding about Curry.

  33. David

    Could be a modern Ayla.

    (‘cept the fur-lined buckskins and the pet wolf are missing.)

    But we are drifting dangerously off-topic and are at risk of getting booted off the thread.

    Max

    • Yes, Dr Curry took her name off of this and is leaving Mosher, who does not consider himself to be one of its authors to defend it with the other usual sycophants.

  34. Mosher, I appreciate that your numerous replies accord well with Judith’s hopes for more courteous dialogue. Well done.

    • Steven Mosher

      you are welcome. really though willard reminded me what a fine young man zeke is so I’m trying to channel my inner zeke.

    • David Springer

      Epitome of politically correct young Steven is. Pass too this shall.

  35. DS 10/01 7.52pm

    Re image, same hair colour, (natural, mine’s wavier, same weight,
    but not same age :)

    • David Springer

      That’s all well and good but the deal breaker is centered around parasite load.

      • David Springer

        My bold

        Parasite load

        From Wikipedia, the free encyclopedia

        Parasite load is a measure of the number and virulence of the parasites that a host organism harbours. Quantitative parasitology deals with measures to quantify parasite loads in samples of hosts and to make statistical comparisons of parasitism across host samples.

        In evolutionary biology, parasite load has important implications for sexual selection and the evolution of sex, as well as Openness to experience.

        So you see, as a responsible member of my species concerned about the direction of evolution going forward, I can’t really help having a great deal of concern about parasite load. I’m a prisoner of my biological programming.

        I didn’t look to see why wickedpedia capitalized Openness, btw. Is there a movement of some sort by that name that has escaped my attention thus far?

  36. Steve, with due respect, I suggest you were more than a secretary, despite the reason you gave for not being an author. You were part of the curve-fitting CO2/Temp thought, you were part of the volcanism claim, and you checked data and prepared the final draft. C’mon, Steve. You had a bigger role than Muller. Your explanation doesn’t fit. I don’t know the journal that rejected BEST, but chances are it was a better known “climate” journal and it stretches credulity that they would have asked to have the methods published first. Important work gets attention by well known journals. Openness here is as important as open code, unless of course one believes in situational ethics. Not meant to sound harsh.

    • David Springer

      Well that was certainly direct and to the point, Bob.

      Are you any relation to THE Bob, by the way?

      http://en.wikipedia.org/wiki/J.R._%22Bob%22_Dobbs

      • David Springer, if I continue fretting about all the CAGW bulldonkey I may end up like THE BOB.

      • David Springer

        You should be so lucky as to end up like “Bob” Dobbs, the embodiment and epitome of Slack.

        You need more Slack buddy. A double dose I’d say.

        http://en.wikipedia.org/wiki/Church_of_the_SubGenius#Conspiracy_and_.22Slack.22

        SubGenius members believe that those in the service of the conspiracy seek to bar them from “Slack”,[19] a quality promoted by the Church. Its teachings center on “Slack”[4] (always capitalized),[15] which is never concisely defined, except in the claim that Dobbs embodies the quality.[2][20] Church members seek to acquire it and believe that it will allow them a free, comfortable life without hard work or responsibility, which they claim as an entitlement.[9][21] Sex and the avoidance of work are taught as two key ways to gain “Slack”.[15] Davidoff believes that “Slack” is “the ability to effortlessly achieve your goals”.[19] Cusack states that the Church’s description of “Slack” as ineffable recalls the way that Tao is described,[6] and Kirby casts “Slack” as a “unique magical system”.[22]

    • Steven Mosher

      sorry Bob. I’ve been an author on many things. and for the results paper, well in my mind I wouldn’t call what I did authorship. I suppose if I were an author on it you’d complain that reformatting the citations didn’t count as authorship. However, since zeke and I suggested to Anthony that he had an issue with TOBS data, and since in your book offering ideas counts as authorship, please go to WUWT and demand that zeke and I be listed as authors on Anthony’s latest paper. That is a fair test of your sincerity.

      • Steve Mosher, fair enough. I just wanted you to admit you were much more than a secretary on this paper.

      • Sorry Bob. You failed the sincerity test. If your intention was to get me to say I was more than a secretary, then claiming I was an author is an odd way to do that. Will you or won’t you demand that Zeke and I be added to Anthony’s author list because we offered an idea?

      • Steve, I am happy to do that, but you should not argue equivalency, unless of course you know better. My opinion, based on your statements of your involvement in BEST, is that your role in BEST was substantially greater than TOBS. You equate the efforts, not me. Also, you failed to answer the questions above, namely, were there other journals, beside JGR, that rejected BEST?

      • Bob,

        I think that Mosher wants us to believe that G&G, aka ‘journal of last resort’, was their second choice.

      • Steven Mosher

        Bob, I was pretty clear. The paper was submitted (as far as I know) to one journal prior to G&G. That journal editor wanted the methods paper published first. The decision was made to select a different journal. geostats seemed the right option given some of the comments made on the methods paper. There were two journals where it fit. one was selected

      • Steven Mosher

        Bob. I’m not arguing equivalency, I’m testing your sincerity and you failed.
        Also, as I’ve stated before, the paper was submitted to one journal prior to G&G. Now, let’s see you go off to WUWT and demand that by your rules I should be an author on Anthony’s paper.

      • Steve Mosher, you say, “Will you or won’t you demand that Zeke and I be added to Anthony’s author list because we offered an idea?” I will email Anthony and make a suggestion that you should have been an author (and Zeke). But I have to say Steve, that is just your way of moving the pea. You, not I, made the equivalency argument, i.e. the amount of effort you gave to TOBS was equivalent. If you deny this, you are the one who failed the sincerity test. Understand!

      • Steve Mosher, just emailed Anthony.

  37. David Springer

    sunshinehours1 | January 20, 2013 at 5:23 pm |

    Mosher, Canada is cooling over the last 15 years if you use EC data and 1×1 grid squares and warming if you use 5×5 grid squares.

    http://sunshinehours.wordpress.com/2012/12/09/canada-grid-square-choices-5×5-warming-and-1×1-cooling/

    Where is your 15 year data? Gridded? 1×1 and 5×5.

    Yes. As you show in the link, depending on arithmetic choices Canada is either warming by ~0.05C/decade or cooling by same.

    As I’ve pointed out a million times there is no warming or cooling in the instrument record until arbitrary choices are made about adjustments. NASA documents and charts the choices it makes. BEST makes equivalent choices and from the same instrument record produces similar results. I’m shocked. Shocked I tell you. So shocked that on Watts Up With That on the day the BEST project was announced I said the results would be essentially no different than past instrument record results.

    http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif

    Interesting, innit?
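The sign flip sunshinehours describes upthread is easy to reproduce with toy numbers: when grid cells are averaged with equal weight, coarsening the grid changes how much each station counts. A minimal sketch with invented coordinates and trends (not real EC data):

```python
# Why grid size can flip a small trend: cells get equal weight, so coarsening
# the grid re-weights the stations inside them. All numbers are invented.

def cell_key(lat, lon, size):
    """Assign a station to a lat/lon grid cell of `size` degrees."""
    return (int(lat // size), int(lon // size))

def gridded_trend(stations, size):
    """Average station trends within each cell, then average the cell means."""
    cells = {}
    for lat, lon, trend in stations:
        cells.setdefault(cell_key(lat, lon, size), []).append(trend)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# (lat, lon, trend in C/decade): two nearby cooling stations, one remote warmer
stations = [(50.2, -100.3, -0.10),
            (51.7, -102.8, -0.10),   # shares a 5x5 cell with the first station
            (60.5, -120.5, +0.12)]

fine   = gridded_trend(stations, 1)   # three cells: cooling outvotes warming
coarse = gridded_trend(stations, 5)   # two cells: 50/50, warming wins
```

Neither number is “the” Canadian trend; the sketch only illustrates the arithmetic degree of freedom being argued about.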

    • “BEST makes equivalent choices and from the same instrument record produces similar results. I’m shocked. ”

      actually not. go ahead and dump each and every GHCN monthly station (about 7000) and the answer doesn’t change. You can, as we did, use unadjusted data (like daily data which isn’t adjusted). same answer.
      You can use hourly data warts and all. same answer. You can pick only rural stations with long daily records, unadjusted. same answer.
      You can pick 100 random stations. same answer. you can pick 5000, and predict the temperature at other locations.. works.

      • Mosher says, actually not. go ahead and dump each and every GHCN monthly station (about 7000) and the answer doesn’t change. You can, as we did, use unadjusted data (like daily data which isn’t adjusted). same answer.
        You can use hourly data warts and all. same answer. You can pick only rural stations with long daily records, unadjusted. same answer.
        You can pick 100 random stations. same answer. you can pick 5000, and predict the temperature at other locations.. works.
        You sound like an author, not a secretary. C’mon Steve, come clean.

      • David Springer

        Can I drop SHAP and TOBS too? Pretty please?

      • David Springer

        Bob, given the nature (as opposed to Nature, haha I kill me) of the journal calling the named authors “authors” is a stretch. What they did is more like blogging. Comparing it to Principia Scientific in that regard is right on target. That paper wasn’t published it was blogged.

      • Mebbe he doesn’t like the attribution even more than he says he doesn’t like the attribution. Speaking for you, moshe. Egads, isn’t that against some rule posted or filed somewhere? Where’s willard.
        ==============

      • Please don’t channel me unless you have a good reason, dear kim.

        I’m no Zeke.

      • It’s Julio for the attrib, willard for the rules.
        ===============

      • Bob

        “You sound like an author, not a secretary. C’mon Steve, come clean.”

        nope. I’ve just done a lot of work with the berkeley data on my own. basically I wrote a software package to look at the data, so I look at the data. 6 ways from sunday. Weird hobby.

      • David Springer | January 20, 2013 at 10:29 pm |
        Can I drop SHAP and TOBS too? Pretty please?

        #####################

        well. SHAP consists of adjusting stations if they move from a mountain top (cold) to a valley (warm). Since we use raw data we don’t do SHAP. a station that moves is….. a different station.

        TOBS? well if you change the time of observation you need to account for that. two ways. A) empirical recalibration. B) treat it as a new station.

        Or you can just use daily data. or you can just use hourly stations.

        answer doesn’t change.
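Option (B) above, treating the changed record as a new station, can be sketched in a few lines: split at the documented breakpoint and align each fragment on its own mean, so the artificial step stops masquerading as a trend. Toy numbers, invented for illustration; Berkeley’s actual method ties fragments together via neighboring stations, which this sketch omits:

```python
# A station warms slowly, then a move introduces a spurious -0.5 C step.
# Fitting one line through the raw record reads the step as cooling;
# splitting at the breakpoint recovers a warming-consistent slope.

def step_series(n=120, step_at=60, step=-0.5, slope=0.001):
    """Monthly series: gentle warming plus an artificial station-move step."""
    return [slope * i + (step if i >= step_at else 0.0) for i in range(n)]

def segment_anomalies(series, breakpoints):
    """Split at known breakpoints; express each segment against its own mean."""
    out, start = [], 0
    for end in list(breakpoints) + [len(series)]:
        seg = series[start:end]
        m = sum(seg) / len(seg)
        out.extend(x - m for x in seg)
        start = end
    return out

def ols_slope(y):
    """Least-squares slope of y against 0..n-1."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

raw = step_series()
naive_slope = ols_slope(raw)                            # negative: step bias
split_slope = ols_slope(segment_anomalies(raw, [60]))   # positive again
```

Note the trade-off: aligning fragments on their own means also discards the trend between fragments, so this sketch understates the true slope; it only removes the sign-flipping bias of the step.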

      • David Springer

        How come it changes when NOAA drops SHAP and TOBS?

      • David Springer

        Not that I’d accuse BEST of being NOAA, mind you. One’s a longstanding, legit organization and the other is BEST.

      • Steven Mosher

        Doesn’t change when they drop it either, dave

      • David Springer

        Yes, the trend does indeed disappear according to NOAA.

        http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif

        Maybe you see about getting that web page of theirs corrected. You might need to call out the big guns with the Nobel prizes and such. Maybe even Al Gore himself. Does he take your phone calls?

      • david.
        The US is 2% of the land mass. If you change 2% by 50%, how does the global answer change? Perhaps you are on a different page than I am when I say the answer doesn’t change. There is the same issue with UHI, since the land is only 30%. Throw out TOBS, throw out SHAP, and you have changed less than 2% of the data. large N is a bitch.

      • I need to start taking a shot every time someone trots out that old USHCN v1 chart. SHAP no longer exists, as of the switch to v2 circa 2009. They really need to take that old v1 page offline, as it confuses pretty much everyone.

        Also, Berkeley uses the raw data (pre TOBS or any other documented adjustments) for both the U.S. and the globe. You also get similar results when you run the NCDC’s PHA using no TOBS or any other non-PHA adjustments (see our poster for a direct comparison).

        Here is the USHCN v2 paper ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-etal2009.pdf

      • Wondered about that. Treating Tobs that way would work for when a station keeper changed but what about if the changes in observation time were random (like a party the night before??)

      • Eli,

        Random changes are fine and relatively easy to detect, as long as they are not both temporally and spatially correlated. If all the local observers get together and in the course of drunken revelry all decide to change the obs time the next day, it would cause problems.

      • More interested in frequent changes, which should cause the algorithm to thrash

  38. Mosher: Somebody has to be the first. There was a choice between 2 journals where we could be assured that the reviewers did not require tutorials in kriging.

    Geez, you pegged and then broke my bullshit meter with that one, Steve.

  39. David Springer

    Failing to redefine peer review in the literature per Phil Jones, Muller et al decide to redefine literature per John O’Sullivan.

    You can’t make this stuff up.

    http://www.amazon.com/gp/reader/B0080K3CHA/ref=sib_dp_kd#reader-link

  40. David Springer

    The journal below might be more apt for this limp paper.

    http://www.scitechnol.com/ArchiveJGSD/currentissueJGSD.php

  41. Brandon Shollenberger

    Has anyone else noticed there’s a clear seasonal cycle in BEST?

    • thisisnotgoodtogo

      It’s the doldrums

    • @Brandon Shollenberger: Has anyone else noticed there’s a clear seasonal cycle in BEST?

      There’s certainly a clear cycle after Pinatubo, see the right-hand graph cited
      here. What did you have in mind?

      • Brandon Shollenberger

        I don’t know why you ask what I have in mind. There is autocorrelation in BEST’s global temperature record similar to what I showed for BEST’s North America’s temperatures upthread: Peaks around ~12 months and dips around ~6 months. That’s a clear seasonal cycle.

        I’m fairly sure you could pick any 30 year period (at least from 1900 on) and it’d be enough to see a seasonal cycle.
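The check Brandon describes can be done directly: compute the autocorrelation of the monthly series and look for a positive peak near lag 12 and a dip near lag 6. A minimal sketch on idealized synthetic data (a pure leftover annual cycle, not the actual BEST record):

```python
import math

def acf(x, lag):
    """Autocorrelation of series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

# 30 years of synthetic monthly "anomalies" with an uncorrected annual cycle
series = [0.2 * math.cos(2 * math.pi * t / 12) for t in range(360)]

peak_12 = acf(series, 12)   # strongly positive: same season lines up
dip_6   = acf(series, 6)    # strongly negative: opposite seasons line up
```

On a properly deseasonalized anomaly series both values should sit near zero; a 12-month peak with a 6-month dip is the signature of a residual seasonal cycle.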

      • The “clear cycle after Pinatubo” (1991) that I was referring to in this graph has a period of close to four years. That’s one reason the human eye is able to see a steady rise throughout the period 1991-2010, including the decade 2000-2010, even though the variance is so large as to suggest this rise would be impossible to see.

        Context is everything: the four plots on the left don’t give sufficient context to see that intriguing effect.

        I was wondering if you had that cycle in mind but evidently not.

      • Brandon Shollenberger

        Vaughan Pratt, I have no idea how that could be considered a “seasonal cycle” which is what I referred to. Seasons happen every year, not every four years.

        There may be additional cycles in the BEST record (or just parts of it), but what I was asking about is the seasonal cycle present in the entire record that shouldn’t be there.

  42. thisisnotgoodtogo

    joking

  43. Too funny. Mosher used to be rightly sceptical about GHCN adjustments when he ran with the sceptics. Now that he’s in with the AGW crowd he just believes them.

  44. Pingback: BEST is published – Stoat

  45. I am not a scientist, so if I am making a huge howler here, please be kind.

    In several posts above, Steven Mosher has said that removing this or that adjustment or changing the stations in the dataset makes no significant difference to the results.

    I can understand the latter (assuming that the stations are roughly equally reliable, on average), but am puzzled by the former.

    What is the point of adjustments if they have no effect?

    • Johanna

      “Not being a scientist” may actually be a plus here.

      If “removing the data points makes no difference to the results”, I’m beginning to see how climate modeling works (or, rather, doesn’t).

      (I’m no “scientist” either, just a lowly, cantankerous engineer.)

      But I like the “peer review” approach of Feynman (who WAS a scientist): bounce it off a cocktail waitress; if it flies, you’re ready to publish; if it crashes, it’s back to the drawing board.

      Max

      • Manacker, I’ve been a barmaid, and Feynman (hopefully) implied that they are not necessarily stupid.

        But back on topic, Tallbloke’s thread on adjustment to the Alice Springs record (in the middle of a desert in central Australia) is the kind of thing that makes me very wary of the shapeshifters who adjust the records:

        http://tallbloke.wordpress.com/2012/10/11/roger-andrews-chunder-down-under-how-ghcn-v3-2-manufactures-warming-in-the-outback/

        It seems to me that not only is temperature record adjustment a Wild West where not even consistency is required (see the Alice Springs story), it matters a lot because we are mostly only talking about fractions of a degree.

      • Johanna

        I’ve tended bar in a past life, too – but Feynman’s point was really that one does not need to have a lot of formal degrees to be able to think logically and especially to be able to differentiate between hype and substance, as your question to Steven Mosher demonstrates.

        Max

      • Johanna

        Yeah. If Tallbloke’s Alice Springs story were a single anomalous case, one could write it off as a foolish error.

        But it isn’t.

        [Get ready for howls of outrage alluding to a “conspiracy theory”.]

        Max

      • manacker

        I can’t say that I have done my best work in a bar, however, the ability to make a complex issue “make sense” to someone who knows nothing of the topic has been the “acid test” for me re: my pet notions. There have been many a time I have left a bar with my metaphorical tail between my legs wondering if it were the alcohol or the idea just stank. Upon reflection the subsequent morning, most times, it was a bad idea. I had let my confirmation bias add and subtract important numbers to get the result that I thought made my idea look great, when it didn’t.

        So, back-of-the-envelope calculations in a dimly lit, stale-beer-smelling bar netted me more confusion, which upon reflection and recalculation in the light of day, made the answers on the test the next day make more sense to me and the prof.

        If only climate science had people who reflected and recalculated in the light of day.

      • Feynman liked barmaids.

      • @Eli: Feynman liked barmaids.

        Josh, the connection between someone selling you a drink and someone you buy a drink for is that you pay in either case. The difference is who gets the drink.

        Logically you’d leave the bar more drunk, and therefore less successful, than Feynman.

    • Johanna,

      Mosh is saying that the adjustments make little difference globally. You can see that here: http://www.yaleclimatemediaforum.org/pics/0211_zh4.jpg

      Oddly enough, the adjustments do make a big difference in the U.S., which leads to quite a bit of confusion when folks assume that the same must be true for the whole globe.

    • Steven Mosher

      johanna, as zeke explains below the adjustments in question (TOBS and SHAP) do not make any significant difference to the global average.
      There are two reasons for this.
      1) the adjustments are not made to every record in the US
      2) the US comprises 2% of the sample.

      If you change 2% of the sample by 50%, well, do the math.
      Bottom line, it was cooler in the LIA.

      • Mosh,

        SHAP no longer exists, even in the U.S. (it was replaced by the PHA). Now NCDC only does TOBS and homogenization (PHA). TOBS itself isn’t really needed as a separate adjustment, since it tends to get picked up pretty easily as a breakpoint in automated homogenization.

      • Mosh and Zeke

        Thanks for your responses. Just to clarify – are similar adjustments made uniformly across the global records, bearing in mind that the US records are only 2% of the sample? If so, who does this and how, and if not, what are the implications of variable application of adjustments?

      • Johanna,

        The adjustments are largely the same both in the U.S. and worldwide with one exception: the U.S. records are subject to a separate time-of-observation adjustment, while global temperatures are not. Both are adjusted using the pairwise homogenization algorithm by the National Climatic Data Center at NOAA. You can find more details about their approach here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-williams2009.pdf

        The AGU poster (linked in the original post) is an analysis we did comparing the results of a blinded study of the Berkeley method and the NCDC method both on actual U.S. temperatures and on synthetic temperature data with different types of added bias. Also worth reading is this paper by Williams et al from last year: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf

      • Steven Mosher

        Thanks zeke. I think I’ve told springer and goddard that 100 times and they still don’t get the difference between ushcn1 and subsequent versions.

      • Thanks to both of you. Unlike some in the field, at least you are prepared to give straight answers to straight questions. I shall go away and study what you have provided.

      • sure thing johanna. dial the clock back to 2007 and I doubted everything that CRU and GISS and NOAA did. and with good reason, I thought. 5 years later, after actually using the code I asked for, and after reading the papers, and after trying hundreds of ways of finding problems, I came to the conclusion that those doubts, while sincere, were misplaced. but I had to see for myself. An open mind is all that is required. If you have that, then guys like me will help you.

      • Thanks Johanna, I’ll take away your jibe at the Baroness @ the Bish’s and treasure it. It’ll be at bottom of the pile in the back of the cave.
        =================

      • moshe, I’m so glad you retain an open mind about attribution. Keep it, you’ll need it in our future.
        =============

  46. David Springer

    Steven Mosher | January 20, 2013 at 1:32 pm |

    “Gimme C02 and Volcanoes and I can explain the rise in temperature.”

    No shiit, Sherlock. It’s lack of rise that isn’t explained. Hello? Earth to Steven. We haven’t gotten warmer in 15 years despite bumping CO2 from 370 to 395ppm during that time. So which volcano do you think will bail you out of this travesty?

    • thisisnotgoodtogo

      OVER accounted for, is BEST!

    • Natural variability can account for 0.1 degrees, or don’t you believe in that?

    • Steven Mosher

      hardly a travesty. if you are a lukewarmer, like me, then a rise from 370 to 395 should get you say… about .15C of warming PROVIDED

      a) all other forcings remain constant (they haven’t)
      b) natural variability is neutral
      c) you change the laws of physics and get an instantaneous response to GHG forcing.

      Given that natural variability is on the order of .17C per decade, you better plan on waiting 20 years plus (if you are a lukewarmer) to see the effects separate from the noise.
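The ~.15C figure back-of-the-envelopes like this, using the standard logarithmic CO2 forcing approximation F = 5.35·ln(C/C0) and an assumed transient response of roughly 1.5C per doubling (an illustrative lukewarmer-ish number; the comment doesn’t state one):

```python
import math

def transient_warming(c0_ppm, c1_ppm, tcr_per_doubling=1.5):
    """Warming from a CO2 rise, scaling an assumed transient response."""
    forcing = 5.35 * math.log(c1_ppm / c0_ppm)     # W/m^2
    doubling_forcing = 5.35 * math.log(2.0)        # ~3.7 W/m^2
    return tcr_per_doubling * forcing / doubling_forcing

dT = transient_warming(370, 395)   # ~0.14 C, close to the quoted ~.15C
```

Plugging in an equilibrium sensitivity of 3.2C per doubling instead gives about 0.3C, which is where manacker’s figure downthread comes from.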

      • Natural variability is an inconvenient truth when talking about the “pause”. Ironic isn’t it.

      • Steven Mosher

        Forget the “instantaneous response” story, Steven.

        Remember the warming that’s already “in the pipeline”, waiting to come out. Let’s say we’re putting as much “into the magic pipeline” as is coming back out from earlier CO2 increases.

        And at a 2xCO2 ECS of 3.2C, the increase from 370 to 395 ppmv should get us 0.3C warming.

        And so far we’ve seen: zilch, nada, zero warming.

        So the “natural variability” = 0.3C per decade.

        But wait!

        It’s only supposed to be around half of that!

        So let’s go back and correct the assumed 2xCO2 ECS.

        It comes out to 1.6C now.

        Hey! That’s what the latest studies by Schlesinger, Gillett and Lewis are also telling us.

        Good.

        We’ve solved that problem.

        Right?

        Max

      • Steve wrote- “you change the laws of physics and get an instantaneous response to GHG forcing.”

        Are you suggesting that surface temperatures would not warm virtually immediately if more CO2 were added to the system and if all other things remained unchanged?

      • Actually over just land the response would be quite immediate, and is, as shown by BEST.

      • Steve, I really wish you would jettison the idea of you being a “lukewarmer”. It is unbecoming and untrue unless you mangle or fabricate a definition (which I know you have). The hypothesis is that added carbon dioxide will cause X degrees of warming. You are on board with that or not. There is no such thing as a “luke hypothesis” unless you fabricate a definition, which as I say you have done and are entitled to do. But it is still somewhat juvenile. By the hypothesis above I am a warmer. See, felt good. Try it, and while you’re at it come clean on what other journals beside JGR rejected BEST. After all, you are the Dean of Openness.

      • Steve, what exactly is a lukewarmer? I don’t really know. Dr. Muller seems to be what I’d describe as a lukewarmer, i.e. he clearly believes putting CO2 into the atmosphere will cause some warming, but he doesn’t believe that anyone can foretell the effects of this warming. The climate is a coupled non-linear chaotic system and it is impossible to forecast a future state. But what makes you lukewarm? And are there “tepids”?

        “… In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
        IPCC TAR, Section 14.2 “The Climate System”, page 774.

  47. James Annan finds it amusing, not informative.

    http://julesandjames.blogspot.com.au/2013/01/best-laugh-of-day.html

  48. David Springer

    Have YOU examined 1992-2002? Linear trend 0.4C/decade. F*ck me that’s some awesome warming.

    http://www.woodfortrees.org/plot/wti/from:1992/to:2002/plot/wti/from:1992/to:2002/trend

    Nice try, Vaughan, but your lame speciousness doesn’t fly with me around.

    Vaughan Pratt | January 21, 2013 at 4:24 am | Reply

    Has either Muller or Hansen evaluated the three decades prior to 2000-2010 for “standstill?”

    The BEST data shows that all four of those decades have individually been at a standstill, as can be seen on the left side of this graph. All four decades look the same.

    The right side shows the same data collected in one four-decade batch. Again all four decades look the same.

    • David Springer

      shiit… correct link

      http://www.woodfortrees.org/plot/best/from:1992/to:2002/plot/best/from:1992/to:2002/trend/detrend:0.52

      And 0.52C/decade not 0.51C. I almost excluded ten whole millikelvins there. An unpardonable sin in the eyes of Mr. Millikelvin hisself no doubt. I wonder how he’ll forgive himself for missing a 520 millikelvin trend in a single decade? Enquiring minds want to know.

      • @DS: I wonder how he’ll forgive himself for missing a 520 millikelvin trend in a single decade?

        I wonder how David Springer will forgive himself for missing the statement “the even decade 2000-2009 didn’t trend up as strongly as the odd decade 1990-1999” to the right of Figure 3 in my AGU poster.

        The big print fooleth and the fine print schooleth. ;)

        (I wrote 2000-2009 and 1990-1999 for the benefit of people like cynp who might otherwise mistake 2000-2010 and 1990-2000 for 11-year periods. In both cases I meant 120 month periods.)

      • David Springer

        And that decadal on-again off-again behavior somehow reinforces your pre-conceived belief it’s due to a steady increase in greenhouse gases?

        Error. Does not compute.

      • Error. Does not compute.

        It does when you replace “your preconceived belief it’s due to” by “the hypothesis of.” That’s because detrending by that even-odd behavior (which is phase-locked to the solar cycle) in addition to detrending by SAW as per Figure 2 of my poster leaves behind a steady increase after filtering out noise with a period below 9 years instead of the 21 years I used in the poster. This more than doubles the number of dimensions of the image of the filter, namely to over 20, while keeping the number of parameters essentially unchanged. This in turn decreases the concern about over-fitting.

      • David Springer

        Oh I’m sorry I said you had a pre-conceived belief that greenhouse gases cause warming. Logically there’s only belief or disbelief. So if you don’t believe then you disbelieve. That’s good. We’re both in disbelief.

      • Logically there’s only belief or disbelief.

        Proof by fiat of the nonexistence of agnostics. There’s only theists and atheists.

        Richard Dawkins uses the same method of proof to prove the nonexistence of God. God does not exist because Richard Dawkins says so.

      • David Springer

        Correct. Agnostics are alternatively referred to as weak atheists. There is only one logical alternative to belief and that’s disbelief. Believing is like being pregnant. You are or you are not.

      • David Springer

        Disbelief in one thing does not imply belief in a different thing.

        The positive atheist professes belief in a godless universe. The weak atheist believes the question is not answered and disbelieves both, i.e. has no beliefs in the matter.

        So, to the question “do greenhouse gases positively raise the earth’s average surface temperature”? I neither believe they do or believe they do not. I’m an experimentalist not a theorist. Show me the well controlled repeatable experiments. So far I can’t find a simple experiment using a ~12um laser showing that downwelling radiation on a body of water has any insulating effect at all or if it just causes more water to evaporate with no change at all in the bath temperature. If that can’t be demonstrated then AGW has no theoretical underpinning across 70% of the earth’s surface and nearly all the thermal inertia.

        Is the typical computer science emeritus at Stanford connected well enough to get access to some equipment better than scotch tape and saran wrap to perform the experiment I’d like to see?

      • @DS: So, to the question “do greenhouse gases positively raise the earth’s average surface temperature”? I neither believe they do or believe they do not. I’m an experimentalist not a theorist. Show me the well controlled repeatable experiments. So far I can’t find a simple experiment using a ~12um laser showing that downwelling radiation on a body of water has any insulating effect at all or if it just causes more water to evaporate with no change at all in the bath temperature.

        To quote the first of my three posts to Climate Etc, about a year and a half ago, “I’d like to propose a strengthening of the skeptic argument that downward longwave radiation or DLR, popularly called back radiation, cannot be held responsible for warming the surface of the Earth.”

        Please pose your question to those like retired Canadian meteorologist Alistair Fraser who still push the old-fashioned idea that the atmosphere is heating Earth’s surface twice as strongly as the Sun.

        A more meaningful description of the greenhouse effect is as follows. Increasing CO2 shuts off some 60 absorption lines for each doubling. (And that’s just the dominant species, C-12 O-16.) Each line not already dominated by a stronger line controls approximately 2.3 GHz of spectral bandwidth. Those lines that are still “open” permit Earth to cool by direct radiation from the surface to space at the frequency of the line.

        As any given line starts to close, the radiation to space at that frequency comes less and less from the Earth’s surface and more and more from the atmosphere. Initially the latter kind tends to come from low, hence warmer, altitudes, but as the line closes off it comes from progressively higher, hence cooler, altitudes.

        Every doubling of the GHG in question adds around 5-6 km to that altitude for each absorption line of that GHG that is in the process of closing. And every extra km of altitude reduces the effective temperature at that altitude by up to 10 C (but no less than 5 C, and then only with air that is at 100% relative humidity).

        That’s all there is to it. Downwelling radiation certainly exists but after subtracting out the variation resulting from the above mechanism, downwelling and upwelling radiation (which from hour to hour can be way out of equilibrium in either direction depending on insolation, surface temperature, cloud cover, etc.) are in essentially perfect equilibrium when averaged over periods much longer than a year. As such they have no bearing whatsoever on the greenhouse effect, which is far too slow to be detectable in less than a decade.

        In short global warming is a very slow process resulting from a very gradual closing of the so-called atmospheric window, like the extra warmth you would get from zipping up your jacket very very slowly.

        Quite apart from the difficulty of locating a practical 12 um laser (all the long-wave lasers in my basement are 10.6 um though I might be able to get them to lase at 9.6 um and with more effort up to maybe 11 um), I don’t see how beaming one down on water could tell you much if anything about the greenhouse effect.
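The per-doubling numbers in the comment above can be turned into a back-of-envelope calculation. This is only a sketch of the arithmetic, not a radiative-transfer result: the ~70 THz total thermal-IR bandwidth and the 6.5 K/km mean lapse rate are assumed round figures of mine, while the 60 lines, 2.3 GHz per line, and ~5.5 km per doubling come from the comment:

```python
# Back-of-envelope for the line-closing picture above (illustrative only).
lines_per_doubling = 60        # absorption lines closing per CO2 doubling
bandwidth_per_line = 2.3e9     # Hz controlled by each line
altitude_gain = 5.5e3          # m of extra emission altitude per doubling
lapse_rate = 6.5e-3            # K/m (assumed mean value)
thermal_band = 70e12           # Hz of total thermal-IR bandwidth (assumed)

spectral_fraction = lines_per_doubling * bandwidth_per_line / thermal_band
emission_cooling = altitude_gain * lapse_rate   # K colder at the new altitude

print(f"fraction of spectrum affected per doubling: {spectral_fraction:.5f}")
print(f"emission-temperature drop on those lines: {emission_cooling:.1f} K")
```

The tiny spectral fraction (a few tenths of a percent per doubling) is consistent with the comment's picture of a very gradual closing of the atmospheric window.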

      • “A more meaningful description of the greenhouse effect is as follows.”

        That’s well done, doc. And I am serious.

      • David Springer

        Increase in CO2 increases downwelling IR. On dry land it results in a surface temperature increase (through a slower rate of radiative cooling) which in turn increases upwelling IR and reestablishes equilibrium in that manner. Over water DWIR is absorbed in the first few microns and water molecules peel off to become vapor with no rise in temperature if the air is not saturated, and it rarely is or you’d see fog all the time. The vapor rises until it condenses from adiabatic cooling, releasing the energy and warming the atmosphere.

        A surface is warmed in both cases to reestablish equilibrium but in the case over the ocean the warmed layer is the top of the cloud layer. The cloud layer itself will rise to a higher altitude, which should be about 100 meters for each CO2 doubling, displacing what was previously cold dry air with a warm cloud. This isn’t disputed either and is why the signature of CO2 warming is a hot spot in the middle troposphere in the tropics. Maybe you’ve heard of that signature. Maybe not. The new cloud deck, being 100 meters higher, now has more greenhouse gases beneath it shielding the ground from the influence of the cloud while at the same time reducing the amount of greenhouse gas between the cloud top and space. So over water the totality of the effect may be no more than no change in surface temperature, a reduced lapse rate from surface to cloud, and a greater lapse rate from cloud to space.

        Without experimental confirmation or refutation of DWLIR’s effect on evaporation rate and temperature of a water body no one can know. The fact that empirically obtained sensitivity has mostly sharp probability peaks between 1C and 2C, with only fat tails causing the average to go higher than that, leans towards AGW being a land-only effect for the most part. This is reinforced by all observations which find recent warming is greatest where there’s the least water available for evaporation – i.e. deserts more than ocean, frozen surface more than thawed, and so on. Follow the water.

      • Thanks to David for adding the finishing touches to the nice and meaningful description of the greenhouse effect provided above by Dr. Vaughan Pratt.

      • @DS: Over water DWIR is absorbed in the first few microns

        There are two problems with this.

        1. You’re making the same mistake as Alistair Fraser, imagining that there’s more long wave radiation coming down than going up. As Kiehl & Trenberth’s Figure 7 from 1997 should make clear, the opposite is true: surface water emits more long wave radiation than it absorbs. This makes it irrelevant what happens to long wave radiation that is deeper than a few microns: it just bounces around inside the water. In the top few microns the net radiative flux is upwards.

        2. Even if (counterfactually) there were more downward than upward long wave radiation, evaporation is only from the top nanometer of water while DWIR penetrates more than a thousand times that far. The remaining 99.9% of water molecules below the layer evaporating cannot evaporate and would therefore be warmed by this hypothetical excess DWIR, even if it did exist which it doesn’t.

      • David Springer

        Sure, there’s net lw emission from the ocean. Not much as percentage of sw absorption. Most of the solar energy leaves the ocean in latent form. This is very unlike land and there’s a simple reason for it. If there’s water available to evaporate that’s the path of least resistance. By a goodly amount too judging by the disparity.

        Mebbe you should take an oceanography class so you’d know what ocean heat budget looks like. I did.

        Here ya go.

        http://oceanworld.tamu.edu/resources/ocng_textbook/chapter05/chapter05_06.htm

        Opens up with Trenberth’s cartoon heat budget then gets into a whole buttload more detail. Get back to me when you understand why the global maps showing incoming and outgoing energy appear the way they do and when you understand what it means when the average oceanic heat loss to radiation is about 50W/m2 in the tropics and subtropics and latent loss is about 150W/m2.

        Which process would you say is the more important to understand especially over the ocean?

      • @DS: Sure, there’s net lw emission from the ocean. Not much as percentage of sw absorption. Most of the solar energy leaves the ocean in latent form.

        Agree with all three of those statements, though the third is not by as big a margin as “most” might suggest.

        Mebbe you should take an oceanography class so you’d know what ocean heat budget looks like. I did.

        Faster might be just to read
        http://scienceofdoom.com/2010/10/06/does-back-radiation-heat-the-ocean-part-one/
        which is much more sharply focused on the topic we’re debating. Note that Nick Stokes makes the same point I do about net LW radiation at the surface being upwards (on average, which I neglected to say). Only net matters.

      • Vaughan, DS, isn’t this where you should think about what you are averaging?

        The bulk of the oceans are radiating ~334 Wm-2, 24-7-365. To remain static, they would have to be balanced by a gain equal to their loss. The solar energy absorbed is ~330 Wm-2 (165 Wm-2 if you consider the “average,” but the sun don’t work at night).

        Most of the surface energy loss by the oceans is related to evaporation. Using a little more reliable estimate, Stephens et al, rather than the outdated and erroneous K&T comics, latent loss is ~176 Wm-2 (~88 Wm-2 average) with a sensible heat ratio of roughly 0.59 for a total 12 hour loss of ~300Wm-2 (150Wm-2 average). There is ~30Wm-2 not accounted for that could be measurement error or a variety of other factors. Stephens et al. indicate a surface uncertainty of +/- 17Wm-2.

        The percentage of “surface” that is ocean (more accurately “moist”), ranges from 75% to ~65%. The moist surface would transfer energy to the lower thermal mass remaining surfaces. Approximately 30% of the 330 Wm-2 or about 100Wm-2 is advected from the moist to other surfaces. The average “apparent” energy absorbed is ~0.7*330=231Wm-2 reasonably close to the 236Wm-2 TOA.

        Since the “moist” surface is transferring an estimated 100 Wm-2 to the other surfaces, the “average” radiant energy of the moist surface is ~334Wm-2 plus the 100Wm-2 being transferred, or ~434Wm-2. Remarkably, the average radiant energy of the ocean surface is ~425 Wm-2, about 9Wm-2 lower, but within the +/-17 Wm-2 uncertainty indicated by Stephens et al.

        That 9 Wm-2 BTW could easily be associated with that other latent heat, where ice forms at about -1.9 C (~307Wm-2) and thaws at 0 C (~316Wm-2).

        It would seem that if you happen to mix the wrong “averages” to establish initial conditions for a dynamic model, that perhaps more time should have been spent on the “static” model before taking that leap.

        Also with roughly 100Wm-2 poleward energy transfer, the oceans as a heat SINK instead of the thermal reservoir providing the energy to maintain the atmospheric effect would seem a tad bassakwards :) With 100 to 120 Wm-2 internal or “Wall” energy transfer the norm, thinking that internal variations in ocean mixing and sea ice extent don’t have a significant impact on long term climate is insane. Grab the butterfly nets gang :)

        Note: the “moist” area or “moist” envelope is a simple way to divide and conquer the confusing transition from a near “radiantless” portion of a system (liquid oceans) to a nearly pure “radiant” portion of the system (dry air at very low pressure). If you want to get fancy, you could use Helmholtz free energy and a roughly -25 C isothermal boundary layer and keep track of the dissipation of energy and mass. I am sure there are other ways to make the problem even more complicated.

        A final BTW.
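As a quick sanity check, the round numbers quoted in the comment above can be verified directly. This is a sketch of the arithmetic only, taking the comment's figures at face value (all in W/m²):

```python
# All values in W/m^2, taken from the comment above.
absorbed_while_sunlit = 330      # solar absorbed by the "moist" surface
apparent = 0.7 * absorbed_while_sunlit
print(f"apparent absorbed: {apparent:.0f}")   # ~231, vs ~236 at TOA

ocean_radiant = 334              # bulk-ocean radiant emission
advected = 100                   # moved from "moist" to other surfaces
total = ocean_radiant + advected
gap = total - 425                # vs ~425 observed ocean-surface average
print(f"moist-surface total: {total}, gap: {gap}")  # gap of 9, within +/-17
```

The arithmetic checks out; whether the underlying energy-budget partitioning is right is a separate question.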

      • David Springer

        When you resort to blog science, like science of doom, you’ve lost the argument. As a bare minimum use an encyclopedia.

      • Vaughan Pratt

        Don’t know whether or not you realize it, but you have fallen into David Springer’s “logic trap” with your response.

        DS states:

        to the question “do greenhouse gases positively raise the earth’s average surface temperature”? I neither believe they do or believe they do not. I’m an experimentalist not a theorist. Show me the well controlled repeatable experiments.

        You respond with several paragraphs of eloquent prose describing various aspects of the hypothesis, but do NOT cite any “controlled repeatable experiments”, which would provide empirical evidence to support the hypothesis, as David has requested.

        This should be easy for someone (pardon me if this sounds snarky) who can predict the average temperature for the year 2100 to within “a millikelvin”.

        Max

      • @manacker: You respond with several paragraphs of eloquent prose describing various aspects of the hypothesis, but do NOT cite any “controlled repeatable experiments”, which would provide empirical evidence to support the hypothesis, as David has requested.

        David will have to “disbelieve” many more geophysical hypotheses than just global warming if he’s going to make “controlled repeatable experiment” his criterion. I bet we both could name a great many such. I’ll start with plate tectonics, which took many years to be accepted and for which there is still no “controlled repeatable experiment.” Your turn.

      • David Springer

        Plate tectonics measured by GPS.

        Next.

      • @DS Plate tectonics measured by GPS.

        Huh? How could repeating a GPS measurement tell you anything at all about plate tectonics?

        GPS has only been available to the public since 1983. In that time the average plate moved 30 cm. GPS accuracy at the start of that period was two orders of magnitude worse, and only recently has been reduced to a few meters. Under those conditions repeating a GPS measurement would tell you nothing about plate movement unless you waited half a century between repetitions. Having to wait nearly a lifetime to repeat the experiment isn’t what I thought you had in mind by a “controlled repeatable experiment.”

      • Vaughan,

        Using special geodetic equipment and collecting signals over long periods, great accuracies have been achievable from GPS for years.

        Presently portable devices like Trimble R8 have an accuracy of a few millimeters both in horizontal and in vertical direction.

        I have a clear recollection that similar or even better accuracies were obtainable already 20 years ago at geodetic base stations using the publicly available signals and a lot of time.

      • David Springer

        Kind of makes you wonder what planet people have been living on for the last 30 years, huh? There were 10 GPS birds aloft by 1985. You could get millimeter accuracy if you wanted to wait a couple of days to refine the fix and the receiver wasn’t cheap. Back then if you needed it faster there were ground stations at known locations sending out correction signals so if you were in range and subscribed to the service (can you spell land surveyors and earth scientists) you were good to go for fast accurate fixes.

      • David Springer

        http://www.ion.org/search/view_abstract.cfm?jp=p&idno=1167

        Millimeter Accuracy Kinematic GPS Surveying
        Author: Mark P. Leach, Shane Nelson, Charles Slack
        Meeting: Proceedings of the 53rd Annual Meeting of The Institute of Navigation, June 30 – July 2, 1997
        Albuquerque, NM

      • David Springer

        GPS is accurate to millimeters. You REALLY need to broaden your horizons. Let me help you get started. Again.

        http://www.iris.edu/hq/files/programs/education_and_outreach/aotm/14/1.GPS_Background.pdf

        Millimeter accuracy for GPS requires a longer period of time to make the reading. If you’re not in a hurry millimeter accuracy has been available for decades. Californian scientists should be especially aware of this since you boys have about a zillion of them employed watching every tiny motion across fault lines.

        Good grief.

      • Vaughan, the game plan is a higher colder place, but where is the place?

        Wall energy transfer is the monkey wrench. Here is a comparison of tropics and subarctic winter for a quadrupling of CO2.

        https://lh3.googleusercontent.com/-Cvi3eAiAy9s/UQK2Z6ZB2AI/AAAAAAAAG4Y/fDcj_U6Oy4g/s705/1500%2520versus%2520375%2520tropics%2520and%2520subarctic.png

        I was lazy and only plotted every 5 km for the first 20, but there seems to be a touch of non-linearity which could have a little impact on advection.

      • thisisnotgoodtogo

        Vaughan Pratt said
        “Richard Dawkins uses the same method of proof to prove the nonexistence of God. God does not exist because Richard Dawkins says so.”

        Is this double reverse twist sarcasm or something like that? I don’t follow what you’re trying to say.

      • Sorry, I realized after making that comment that it was unclear. It was not intended as sarcasm.

        In asserting “Logically there’s only belief or disbelief” DS appeared to be identifying “disbelieve” with “believe not” without actually doing so, confirmed in his next comment in the traditional manner of this particular debate. Exploiting the appearance of this identification has a long history in connection with the trichotomy theism-agnosticism-atheism, recounted here. By appearing to identify “disbelieve” with “believe not” DS was similarly attempting to dilute the definition of “believe not” to “not believe”, which logically are not the same thing (as DS is more than happy to point out) but which ordinary discourse rarely distinguishes (as DS does not like to point out).

        Dawkins himself has been represented as being unclear on his position, on the one hand declaring himself an agnostic, e.g. in The God Delusion and in recent interviews, while on the other rabidly staking out a clearly atheist position in his writings. In a “public dialogue” with the Archbishop of Canterbury a year ago chaired by the noted philosopher Sir Anthony Kenny, Dawkins clarified his position by denying that an agnostic has to be someone that attaches equal probability to the existence or non-existence of God. At 00:52 in the video Dawkins declares “I’m a 6.9” (on a scale where 1 = “I know God exists” and 7 = “I know God does not exist”).

        My own objection to Dawkins’s reasoning is that he appears to have uncritically bought into the Judaeo-Christian belief that there is only one god. This makes it easier for Dawkins to make 6.9/7 agnosticism sound reasonable. Applying Bayesian statistics to religions with a wider and more diverse range of gods, the probability that none of them exists is surely much less than the conditional probability, taking the prior (by fiat) to be that n − 1 of the n putative gods don’t exist, that the sole surviving god doesn’t either.

        On statistical grounds polytheism is much more plausible than monotheism. And on logistical grounds a single god hearing each and every prayer is about as implausible as a single Santa Claus hearing each child’s request after they’ve waited in line at the store for quarter of an hour (requiring the fortitude of a soccer mom to be leavened with the patience of Job). Even a year would not suffice without multiple Santas.

        Monotheism is childlike in its spiritual outlook. The high priests who invented it were treating their flock the way parents treat their young children, relying on the flock to respond with childlike enthusiasm. Had Paul followed his own advice in 1 Cor. 13:11 to “put away childish things” he would not have embraced monotheism.

        But I digress. My main point was that DS was exploiting a tactic that has a long history, namely insisting on a logical distinction that is not made in ordinary discourse, much like Max Manacker’s insistence that this century started in 2001, contrary not only to the ordinary understanding but also to ISO 8601.

      • So, if someone proposed that there are 47, or 312, Santas, then that’s more believable than the traditional tale? Can you name a religion with multiple gods, that makes any more sense than the Judeo-Christian monotheism? What’s the mostest gods anybody has ever had, Vaughan? Personally, I don’t think 4 or 5 gods for every man woman and child would do it for me.

      • Can you name a religion with multiple gods, that makes any more sense than the Judeo-Christian monotheism?

        That’s like asking whether you can name a comic genre with multiple heroes that makes any more sense than Superman comics. I can name them, and they may well make financial sense.

        That wasn’t Dawkins’ point. Dawkins was addressing whether God actually exists, as distinct from merely being a central fictive character in some spiritual genre. He wasn’t denying the latter, he was merely giving the former 69:1 odds against.

        Personally, I don’t think 4 or 5 gods for every man woman and child would do it for me.

        With or without sharing? When I was a child Elvis and the Beatles did it for me. I would have given at least 69:1 odds against their being fictional characters. (For comparison, never having handled or even seen a firearm shorter than a rifle or shotgun in those days—this was the early 1960s in Australia—I gave six-shooters even odds of being fictional creations invented for the purposes of having exciting shoot-outs in cowboy movies, much like ray-guns in SF movies.)

        Personally I would think any “real” gods “out there” would be well advised to keep a low profile when traveling anywhere near Earth. If two or more popped up on the radar at the same time they’d likely not receive a warm welcome from any monotheistic religion. Even one would at the very least be obliged to produce Mt. Olympus’s counterpart of an original birth certificate.

      • I wasn’t following your little discussion too closely, doc. Not going back to it, so I’ll just pretend to agree with you on this one. I got more important things to think about than gods.

      • Brandon Shollenberger

        This is way off-topic so I’m not going to delve into it, but I can’t ignore this comment from Vaughan Pratt:

        Monotheism is childlike in its spiritual outlook. The high priests who invented it were treating their flock the way parents treat their young children, relying on the flock to respond with childlike enthusiasm. Had Paul followed his own advice in 1 Cor. 13:11 to “put away childish things” he would not have embraced monotheism.

        Calling monotheism childish is silly and offensive. There is nothing about polytheism that is more or less childish than monotheism. In fact, there is nothing about polytheism that makes it more statistically or logistically plausible as Pratt claims. Pratt’s criticism of monotheism is completely baseless.

        Given this is not the appropriate forum for such a discussion, I won’t say anything more.

      • @BS: Given this is not the appropriate forum for such a discussion, I won’t say anything more.

        Fine by me, I’ll follow suit, except to point out that “childish” was used only in the King James Version of Paul’s advice. I used “childlike,” not “childish,” (twice in fact).

        So if you’re going to accuse me of being offensive please (a) at least quote me accurately and (b) don’t single me out when Christian Science literature itself promises that childlike trust in God quickly heals illness. Medically unsound perhaps, but how does that make it “offensive?” The first sentence of that article draws the very distinction you’ve glossed over:

        The word childish brings to thought images of fussiness, selfishness, and stubbornness, but when the suffix is changed to create the word childlike, we get a very different mental image–one of trust, innocence, and joy.

      • Brandon Shollenberger

        Vaughan Pratt, you’re now engaging in the worst kind of argument, argument by misrepresentation:

        Fine by me, I’ll follow suit, except to point out that “childish” was used only in the King James Version of Paul’s advice. I used “childlike,” not “childish,” (twice in fact).

        So if you’re going to accuse me of being offensive please (a) at least quote me accurately

        You used the word “childlike” then quoted a source using “childish” as referring to the same thing. You conflated the two. That means I was perfectly justified in using either word. It’s true I shouldn’t quote you as using “childish,” but I didn’t do that. The lack of quotation marks around the word means it is not set aside as a direct quote. That means you’re criticizing me for using a legitimate paraphrase by falsely claiming it was a quote. And then you go on to say:

        (b) don’t single me out when Christian Science literature itself promises that childlike trust in God quickly heals illness. Medically unsound perhaps, but how does that make it “offensive?”

        This makes no sense. You portray me as singling you out, but you are the only other person in the exchange. It’s impossible to single you out when you were the only person being talked about in the first place. There is no reason I would randomly start criticizing Christian Scientists, and that’s even ignoring the fact that the article you linked to doesn’t use “childlike” to refer to the same thing you did.

        Just like I cut off the religious argument, I’m now done with this stupid semantic parsing you’ve forced upon me. If you want the last word, you can have it, but I would ask you not to level false accusations against me again.

      • > This makes no sense.

        Chewbacca strikes again!

        ***

        Here’s the comment that maddened Chewbacca:

        > Monotheism is childlike in its spiritual outlook. The high priests who invented it were treating their flock the way parents treat their young children, relying on the flock to respond with childlike enthusiasm. Had Paul followed his own advice in 1 Cor. 13:11 to “put away childish things” he would not have embraced monotheism.

        http://judithcurry.com/2013/01/20/berkeley-earth-update/#comment-288900

        Since Chewbacca was simply mad, that’s OK.

      • thisisnotgoodtogo

        Vaughan Pratt,

        Dawkins is a much stupider thinker than you have pictured him as.
        When push comes to shove, he declares that nobody could get their morals from the Bible, and asked where, he replies “the same way atheists get theirs… from the zeitgeist… news reports, court decisions, dinner party conversations”

        Dawkins therefore gives more influence in the zeitgeist to a single dinner conversation than to all the prayer meetings conducted, all the services, architecture and art, music and song, volunteer work and so on.

        Dawkins forgets that all one needs to have done is to have read a verse on a tract and have been affected by it. One good thought or goal.
        He forgets that the news reports were affected by the religion, and courts have been affected too. As well as dinner party conversations.

        He’s bonkers.

        He said that it was worse to bring a child up Catholic than for the child to be molested by a churchman.

        As well, he said that the word “Atheism” has a bad connotation, and so he went into subjunctive mode, saying he pleads to have it called “Rationalism”.

        Nutty.

      • I have a friend who is a Dawkins fan. If you have quotes for what you claim Dawkins says, I’d be interested.

      • thisisnotgoodtogo

        Willard, I do have quotes.
        e.g.
        The Dubliner article “The God-Shaped Hole” by Dawkins.

        “Regarding the accusations of sexual abuse of children by Catholic priests, deplorable and disgusting as those abuses are, they are not so harmful to the children as the grievous mental harm in bringing up the child Catholic in the first place.”

        ” I can’t speak about the really grave sexual abuse that obviously happens sometimes, which actually causes violent physical pain to the altar boy or whoever it is, but I suspect that most of the sexual abuse priests are accused of is comparatively mild – a little bit of fondling perhaps, and a young child might scarcely notice that. The damage, if there is damage, is going to be mental damage anyway, not physical damage. Being taught about hell – being taught that if you sin you will go to everlasting damnation, and really believing that – is going to be a harder piece of child abuse than the comparatively mild sexual abuse.”

        Let me know what else you would like.

      • thisisnotgoodtogo

        Willard, here is Dawkins @6 min. for “Zeitgeist”
        http://www.youtube.com/watch?v=whHVI_s2HEA

        there are a number of instances where he’s using terrible logic

      • thisisnotgoodtogo

        um…just go to youtube and see the part 2 for the rest.
        His reply said that taking just one moral lesson from the text is cherry picking.

        What a lamebrain. If you only read a verse or two and adopted the sentiments, it’s not cherry picking at all. And supposing it were a case of cherry picking, that still does not say that the moral was not taken.
        So there he changes to the basis for the choice.
        Then he gets into where we “really” get morals from.
        News reports. Parties. But not. NOT. NOT EVER… texts from you know where.

      • thisisnotgoodtogo

        Here’s a beauty from Dawkins on The Jewish Lobby. The man cannot distinguish a lobby group from a religion or an ethnicity… population stuff, for Dawks’ sake

        “When you think about how fantastically successful the Jewish lobby has been, though, in fact, they are less numerous I am told — religious Jews anyway — than atheists and [yet they] more or less monopolize American foreign policy as far as many people can see. So if atheists could achieve a small fraction of that influence, the world would be a better place.”

      • Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

        Sorry, not following. BEST (the subject of this thread) shows a trend for that period of

        1990-2012: 0.141 C/decade

        If you’re allowed to pick any dataset and any period then you can prove anything you want, as I illustrated with examples just now in response to Max.

        If you stick to the data that this thread is about, namely BEST, and stick to honest decades, not your cherry-picked periods, some of which aren’t even decades, then you get these trends (in C/decade):

        1970-1980: 0.060
        1980-1990: 0.034
        1990-2000: 0.264
        2000-2010: 0.268

        All that these trends show in conjunction with the following further decadal trends

        2000-2010: 0.268
        2001-2011: 0.030
        2002-2012: −0.062
        2003-2013: −0.004

        is that decadal trends are meaningless.

        The longer the time series, the more significant the trend. Those who complained in connection with my poster that 160 years is not long enough to be significant for estimation of multidecadal climate can hardly turn around and claim significance for a mere 10% of that amount!
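The significance point can be made concrete: for evenly spaced annual data with white interannual noise, the sampling uncertainty of a least-squares trend falls off roughly as n^(-3/2). A back-of-envelope Python sketch (the 0.1 °C noise level and the function name are illustrative assumptions, not values taken from BEST):

```python
import math

def trend_se_per_decade(n_years, sigma=0.1):
    """Standard error (C/decade) of an OLS trend fitted to n_years of
    annual data whose interannual scatter is white noise of standard
    deviation sigma (C). Classic regression result for unit spacing:
    Var(slope) = 12 * sigma**2 / (n * (n**2 - 1))."""
    se_per_year = sigma * math.sqrt(12.0 / (n_years * (n_years**2 - 1)))
    return 10.0 * se_per_year  # convert C/year to C/decade

se_decade  = trend_se_per_decade(10)    # ~0.11 C/decade of pure noise
se_century = trend_se_per_decade(160)   # ~0.002 C/decade
```

On these assumptions a 10-year trend carries roughly ±0.11 °C/decade of pure sampling noise, the same order as the decadal trends quoted in this thread, while a 160-year trend carries only a couple of thousandths.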

      • Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

        While I’m still not following, it might be worth pointing out that absence of a trend does not prove absence of a cycle.

        Consider a 20-year cycle constructed as a sine wave from -10 years to +10 years, with its positive-going zero-crossing at 0 years. The trend from -5 years to +5 years is quite pronounced, going from -1 to +1. The trend from -10 years to +10 years, even though twice as long, is exactly zero, going from 0 to 0.

        So I’m not at all clear as to how you propose to use a 15-year period to prove that there’s no 20-year cycle.
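The sine-wave point above is easy to check numerically. A minimal Python sketch (the function and variable names are illustrative, not taken from any dataset or library):

```python
import math

def ols_slope(ts, ys):
    """Ordinary least-squares slope of ys against ts."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

def cycle(t, period=20.0):
    """A pure 20-year cycle with its positive-going zero-crossing at t = 0."""
    return math.sin(2 * math.pi * t / period)

def window(t0, t1, steps=2000):
    ts = [t0 + (t1 - t0) * i / steps for i in range(steps + 1)]
    return ts, [cycle(t) for t in ts]

# Half-cycle window (-5 to +5 years): endpoints go from -1 to +1.
ts, ys = window(-5, 5)
change_half = ys[-1] - ys[0]    # 2.0: the pronounced "trend"
slope_half = ols_slope(ts, ys)  # large positive least-squares slope

# Full-cycle window (-10 to +10 years): endpoints go from 0 to 0.
ts, ys = window(-10, 10)
change_full = ys[-1] - ys[0]    # 0.0, despite the window being twice as long
slope_full = ols_slope(ts, ys)  # much smaller than slope_half
```

Note that the endpoint-to-endpoint change over the full cycle is exactly zero, as described, while a least-squares fit over the same window still returns a small nonzero slope; either way, a short window can show a pronounced "trend" that is pure cycle.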

      • Vaughan Pratt

        The question was asked

        Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

        You apparently misunderstood the question. As I understood it the question asks about the “large change in trend” in global temperature from the first half of this period (strong warming) to the second half (slight cooling).

        The curve below shows what is meant here (starting in 1993 instead of ~1990).

        http://www.woodfortrees.org/plot/uah/from:1993/to:2002/trend/plot/uah/from:2003/trend/plot/hadcrut4gl/from:1993/to:2002/trend/plot/hadcrut4gl/from:2003/trend/plot/rss-land/from:1993/to:2002/trend/plot/rss/from:2003/trend/plot/gistemp/from:1993/to:2002/trend/plot/gistemp/from:2003/trend/plot/hadcrut3vgl/from:1993/to:2002/trend/plot/hadcrut3vgl/from:2003/trend

        So the question (as I understood it) is: What caused this dramatic change in trend?

        Max

      • Max, go over the edge. Focus your peepers on ever shorter trends.
        ENSO neutral is about to lift your butt off its dime.

        So sweet was the afterglow of back-to-back La Nina and poof it was gone.

      • JCH

        The “about to” stuff interests me less than the “already happened” stuff.

        BTW, to your earlier query, BEST (land-only record) has published its values beyond the WfT end point, unfortunately only to end 2011, so far. It also corrected some 2010 values.

        The latest decade shows a flat trend, while the decade just before showed strong warming (Hansen’s “standstill” from the global record).

        Max

      • JCH

        PS You can download BEST (thru 2011) here:
        http://berkeleyearth.lbl.gov/auto/Global/Full_TAVG_complete.txt

      • David Springer

        You are correct, Max. Pratt misunderstood the question. That you understood it is testament to the question being clear enough and the creation of a straw man then seemingly intentional. Misunderstanding the question is a tool in the artful dodger’s toolbox.

      • Steven Mosher

        Logically there’s only belief or disbelief.

        Hmm. Do you believe there are an odd number of stars in the universe?

        I mean, you have to believe that there are an even number or disbelieve that there are.

      • Yes, and Eli defies you to provide an experimental proof to the contrary.

      • Eli

        experimental “evidence” (not “proof”)

        Perception = reality (?)

        A sizeable number of humans (arguably the majority?) “perceive” that there is a single supreme deity of some sort.

        [Sort of like the “scientific consensus” being used to suggest “evidence” that the CAGW premise is valid.]

        Max

      • David Springer

        So, aside from the fact that the odd/even cycle fell apart (15 years since any warming, not 10, and a reversal), in the past two decades, as I showed with woodfortrees links, we went from 0.52C/decade warming to -0.05C/decade cooling (1992-2002 and 2002-2012 respectively).

        The 11-year solar cycle is easy to pick out of the record but it has nowhere near that much effect. To make any kind of case that this approximate 22-year cycle is anything other than coincidence, you would need more than this (only 6 cycles in total since 1880, and the others were very weak in comparison to the most recent). Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

      • @DS: To make any kind of case that this approximate 22-year cycle is anything other than coincidence (only 6 cycles in total since 1880, and the others were very weak in comparison to the most recent).

        Are we talking about the same graph? I was looking at this graph, in which the dates of the 15 solar cycles from 9 to 23 are expressed by those 15 numbers placed at their respective dates. Each of the eight odd-numbered cycles is very well aligned with a peak of the upper curve; and the first two and last two such peaks are somewhat weaker than the middle four.

        Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

        None whatsoever. The quite precise alignment of those eight peaks with the odd-numbered solar cycles may well be just one of those odd coincidences that Nature seems to love to create in order to lead scientists up a garden path.

        @manacker: You apparently misunderstood the question. As I understood it the question asks about the “large change in trend” in global temperature from the first half of this period (strong warming) to the second half (slight cooling).

        DS’s question started out by referring to 6 cycles of a 22-year cycle. How does your obviously cherry-picked graph (all those strange dates are a dead giveaway) relate to a 22-year cycle?

    • JCH | January 20, 2013 at 11:52 am |

      Steven Mosher – will Wood for Trees ever update their Best data?

      • Willard

        Don’t know whether WfT will update BEST data, but you can plot it in Excel yourself and you will see:

        1991 through 2001: warming at 0.35C per decade
        January 2002 to today: flat trend

        Over same periods the SST trend (Hadley) was:
        1991 through 2001: warming at 0.24C per decade
        January 2002 to today: slight cooling at -0.08C per decade

        http://www.woodfortrees.org/plot/best/from:2002/to:2013/plot/best/from:2002/to:2013/trend/detrend:-0.05/plot/hadsst2gl/from:1991/to:2001/trend/detrend:%200.238/plot/best/from:1991/to:2001/plot/best/from:1991/to:2001/trend/detrend:0.348/plot/none

        So it is likely that the land + sea record (using BEST for land portion only) would show slight cooling over past 11 years or so, following a period of strong warming.

        Max

      • Willard

        That last WfT graph left off one trend line. Here is complete graph (keep in mind that updating BEST data shows a flat trend after 2001, rather than slight cooling as shown by WfT).

        http://www.woodfortrees.org/plot/best/from:2002/to:2013/plot/best/from:2002/to:2013/trend/detrend:-0.05/plot/hadsst2gl/from:1991/to:2001/trend/detrend:%200.238/plot/best/from:1991/to:2001/plot/best/from:1991/to:2001/trend/detrend:0.348/plot/hadsst2gl/from:2002/to:2013/trend/detrend:-0.08

        Max

      • MiniMax,

        Same graph, without detrending and other grafted trends:

        http://www.woodfortrees.org/plot/best/from:2002/to:2013/plot/best/from:2002/to:2013/trend/plot/best/from:1991/to:2001/plot/best/from:1991/to:2001/trend/plot/best/from:1991/to:2013/offset:1/plot/best/from:1991/to:2013/trend/offset:1

        I also took the liberty to add a chart for the whole spectrum.

        Gods’ eyeballs, we need Gods’ eyeballs!

      • Willard

        Don’t know about your eyesight, but mine is 20-20.

        I see exactly what I said I saw.

        A BEST record that warmed by around 0.35C per decade from 1991 to 2001 and cooled by -0.05C per decade thereafter

        A Hadley SST record that warmed by 0.24C per decade from 1991 to 2001 and cooled by -0.08C per decade thereafter.

        Nothing very astounding there. It all just shows that there is a current “pause” in global warming.

        It also points toward a lower ECS than previously assumed (around half), as independent published studies by Schlesinger and Gillett plus a “not-yet-published” study by Lewis are also suggesting.

        Seems to all be falling into place, Willard.

        Max

      • @manacker: Don’t know about your eyesight, but mine is 20-20.

        Max’s vision is 20-20 when you count just the letters on the eye chart that he got right. Had they counted the others he’d be judged legally blind.

        By cherry-picking his letters Max can claim any degree of visual acuity he wants. Likewise for decades. Were one to tie Max’s hands by insisting that he define a decade to consist of years whose first three digits are the same, he would find all the evidence working against him.

        Max’s ingenious sleight-of-hand is easily illustrated by looking at two sets of data concerning the slope of trend lines of specific decades expressed in degrees per decade.

        1. Decades as one might think naively to define them as above, namely 1970-1980, 1980-1990, 1990-2000, and 2000-2010; and

        2. Decades as Max likes to pick them, namely 2000-2010, 2001-2011, 2002-2012, and 2003-2013.

        For the former kind we have (in units of C/decade):

        1970-1980: 0.060
        1980-1990: 0.034
        1990-2000: 0.264
        2000-2010: 0.268

        Note in particular that according to the BEST data the most recent decade is climbing even more steeply than the previous decade, and way steeper than the two before.

        Now let’s look at what options are available to Max when he’s allowed to pick any year he wants for the start of his preferred decade.

        2000-2010: 0.268
        2001-2011: 0.030
        2002-2012: −0.062
        2003-2013: −0.004

        Obviously Max would be well advised to steer clear of 2000-2010 as it greatly undermines his point, however reasonable it might seem to some as a choice of “first decade of this century.”

        But given that 1980-1990 is barely distinguishable from his three remaining choices, it’s hard for him to argue that any supposed “pause” in global warming is any different from the evident pause we saw in 1980-1990. The untrained eye might see a pause in that decade, and even the trained eye would be hard put to claim that 1980-1990 climbed at all steeply.

        Moreover if we are allowed to pick our “decades” with the same freedom as Max, it suffices to point to this decade:

        1977-1987: −0.066

        to make complete mincemeat of Max’s argument that after several decades of rising we’re now in a cooling period.

        Brr, 1987 must have been freezing in this BEST of all possible worlds.

        My theory is that Max doesn’t really believe all this stuff he comes up with, he’s too smart for that. He’s just having fun seeing whose leg he can pull.

      • Vaughan

        Thanks for your vote of confidence regarding my vision.

        However, your tongue-in-cheek sarcasm regarding my logical reasoning ability is misplaced.

        The decadal temperature trends you cite are all very interesting.

        They do not, in any way, invalidate the decadal temperature trends which I cited – they simply supplement them.

        The global temperature records show warming over three decades from around 1971 to 2001 and cooling for a bit more than one decade since then.

        This is true for BEST (land only) and, as I pointed out also for the Hadley SST record, as well as most of the global records.

        Prior to 1971 the record shows three decades of slight cooling.

        This was preceded by three decades of warming starting in 1911, which was statistically indistinguishable from the late 20th century warming period.

        From this one can conclude (as Girma has) that there are cyclical forces at work, which drive short-term oscillations, with an underlying warming trend that has gone back to the early 19th century (as we have been emerging from a naturally occurring colder period, called the LIA).

        Or one can make the observations “fit” forcing by CO2, for example, by removing everything else as “noise” (as you have done on the earlier thread).

        Both analyses are equally valid IMO, and the next few decades will tell us which one is closer to being correct, with the eventual “truth” probably lying somewhere between the two.

        ‘Nuff said.

        Max

      • Max, you show a trend line 2002 to 2013 that shows cooling. The BEST data on WfT, imo, ends at 2010.17. The two months after that are, imo, screwed up for graphing purposes and are hopefully now fixed, but WfT has not updated BEST since the first release.

        2002 – 2010.17

        Which is not much different than this land only or this land only. This one, the one that is obviously broken, shows cooling and is just right for your pretty little 20-20s.

        So does anybody know if the current version of BEST has 2010.25 at -1.035 and 2010.33 at +1.098?

      • This comment puts Vaughan’s hypothesis under some stress:

        > Max doesn’t really believe all this stuff he comes up with, he’s too smart for that. He’s just having fun seeing whose leg he can pull.

        http://judithcurry.com/2013/01/20/berkeley-earth-update/#comment-288772

        But I’m not sure I have a better hypothesis than V’s.

        That OK, as might say Tony.

      • My 11:28 am comment was referring to this comment by MiniMax:

        http://judithcurry.com/2013/01/20/berkeley-earth-update/#comment-288783

      • Vaughan Pratt

        A correction for you.

        The “first decade of this century” started January 1, 2001 (not 2000).

        Max

      • The “first decade of this century” started January 1, 2001 (not 2000).

        You probably think the day begins at 1 am too, Max. ;)

        Geneva (which you presumably live closer to than most of us) is the home of the International Organization for Standardization, popularly abbreviated ISO. ISO standard 8601 begs to differ from you. That standard specifies that time of day starts from 0, days of the month and months of the year start from 1, and years of the decade, century, and millennium start from 0.

        A cockamamie scheme to be sure, but one that the public has grown so accustomed to that in Australia the media nominated Prime Minister John Howard as “party pooper of the century” for making your quaint argument when the rest of the world was joyously celebrating 1999-12-31T24:00 as the start of the first second of the new millennium.

        Why not 2000-01-01T00:00? Actually that’s equally fine too according to ISO 8601, which recommends the former mainly in conjunction with phrases like “at the end of the day” (however overused that might be) or its clumsier synonym “at the end of the last second of the day.” Like the Morning Star and the Evening Star, in Geneva the end of the day is the same instant as the start of the next day.

        Efforts to regularize verbs, spelling, and temporal indexing conventions can reliably be expected to continue for the foreseeable future by a vocal minority put here on Earth to amuse the peasantry with their pedantry.

      • Sorry, Vaughan, check Wiki:
        http://en.wikipedia.org/wiki/21st_century

        The century began on January 1, 2001, and will end on December 31, 2100.

        (The first century began January 1, 1AD, and it has continued that way.)
        http://www.hko.gov.hk/gts/time/centy-21-e.htm

        Similarly, the 1st millennium comprised the years AD 1-1000. The 2nd millennium comprises the years AD 1001-2000. The 3rd millennium will begin with AD 2001 and continue through AD 3000.

        Max

        PS I know when the day begins. It’s the same in Switzerland as anywhere else – at midnight, local time.

      • @manacker: Sorry, Vaughan, check Wiki:

        Wikipedia is full of inconsistent information, Max. The bald and unsourced statement you quoted doesn’t even mention the long-running controversy that is discussed at considerable length in this section of the Wikipedia article “Millennium.”

        Being unsourced, the statement you quote is by Wikipedia’s own definition Original Research. In contrast a great many sources are given in the Millennium article concerning the debate over this controversial topic. I take ISO 8601 as having settled that debate by creating an international standard that agrees with the majority view. However there are also other reasons such as given by the Wikipedia article on the proleptic Gregorian calendar:

        For these calendars [Julian and Gregorian] we can distinguish two systems of numbering years BC. Bede and later historians did not use the Latin zero, nulla, as a year (see Year zero), so the year preceding AD 1 is 1 BC. In this system the year 1 BC is a leap year (likewise in the proleptic Julian calendar). Mathematically, it is more convenient to include a year zero and represent earlier years as negative, for the specific purpose of facilitating the calculation of the number of years between a negative (BC) year and a positive (AD) year. This is the convention used in astronomical year numbering and in the international standard date system, ISO 8601. In these systems, the year 0 is a leap year.[2]

        The proleptic Gregorian calendar is sometimes used in computer software to simplify the handling of older dates. For example, it is the calendar used by MySQL,[3] SQLite,[4] PHP, CIM, Delphi, Python[5] and COBOL.

        In other words those arguing that 1/1/1 is the first day of the first millennium are assuming a world in which The Venerable Bede’s system is still in use. With the modern replacement of Bede’s 1 BC with a year 0, 2 BC with -1, and so on, we now have a rational system in which the millennium begins at 1/1/0. If we think of this as Christ’s logical birthdate (his physical birthdate has been estimated as a year or so earlier) then logically Christ turns 1 on 1/1/1 and for the next twelve months he is one year old (thereby justifying calling this the year 1 AD) even though he is in his second year following the usual convention by which a baby is not deemed one year old until the end of his or her first year. This makes much more sense than Bede’s clumsy system, and is easier to work with besides.

        But anyway, Max, congratulations on finding an observatory expressing the nonstandard minority view. Evidently you’ve found a kindred spirit in Hong Kong. In the US in 1998, astronomers David Palmer and Samar Safi-Harb wrote similarly here, “I expect that, around February, 2000, people will start coming around to the belief that the millennium does indeed start with 2001, and plan their next party accordingly.” This expectation would appear to have gone largely unmet: it seems to have been merely an exercise in wishful thinking at the time.

        As I said above, “Efforts to regularize verbs, spelling, and temporal indexing conventions can reliably be expected to continue for the foreseeable future.” You should have no trouble finding more such isolated examples to back you up. Human nature being what it is, I bet there’s quite a few out there.

      • Pretty desperate!

        So when you count to ten or a hundred you start at 0 and end at 9 or 99 do you? Or is this just a convention that applies to counting to a thousand?

        What happened in year 0? I know what happened in 1BC, 1AD, 1000AD and 2000AD but I am a bit short of information for this year.

        Alan

      • @AM: What happened in year 0? I know what happened in 1BC, 1AD,

        There are two numbering systems in common use, the one standardized in England in the 8th Century by the Venerable Bede and popularized in Europe by Charlemagne, and the modern one standardized by ISO 8601 as well as by astronomers, adopted by MySQL, SQLite, PHP, CIM, Delphi, Python, COBOL, etc., and implicitly assumed by the majority of the public, which except for a few pedants accepted 2000 as the start of this millennium and this century. Sure there were parties in 2001, but the big money was in those in 2000.

        Where Bede writes 2 BC, 1 BC, 1 AD, 2 AD the modern system refers to the same years as -1, 0, 1, 2. That is, n BC becomes 1 − n. Fractional years between 1 January 1 BC and 1 January 1 AD are expressed as 0.25, 0.5, 0.75 in the modern system. Bede’s system makes no provision for fractional years: should 1 July, 1 BC be written 0.5 BC or 0.5 AD, or do you toss a coin? There is no ISO standard for Bede’s system that would answer this.

        If you take 1/1/1 as the logical date of Jesus’s first birthday, his n-th birthday is on the first day of n AD and he is n years old throughout n AD, and 3 months old in 0.25 AD. Makes sense to me, YMMV as they say.
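The correspondence between Bede’s labels and astronomical (ISO 8601) numbering described above amounts to a one-line rule, n BC → 1 − n. A minimal Python sketch (the function name is illustrative, not from any library):

```python
def to_astronomical(year, era="AD"):
    """Map a Bede-style year label to astronomical (ISO 8601) numbering:
    n BC becomes 1 - n (so 1 BC -> 0, 2 BC -> -1); AD years are unchanged."""
    if era.upper() == "BC":
        return 1 - year
    return year

# Bede's 2 BC, 1 BC, 1 AD, 2 AD become -1, 0, 1, 2 in the modern system.
labels = [(2, "BC"), (1, "BC"), (1, "AD"), (2, "AD")]
astro = [to_astronomical(y, e) for y, e in labels]
```

The modern system's advantage shows up in arithmetic: the number of years between any two labels is a simple subtraction, with no special case straddling 1 BC / 1 AD.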

  49. Given that the GCM and BEST both produce smooth temperature fields while the other surface statistical models do not, it makes sense that BEST performs best with the GCM. The question is whether this has anything to do with reality, which the GCM is not.

    • Steven Mosher

      You have not looked at the data in question. I’ll suggest that you could also take empirical parameters describing the weather field, generate samples from that structure, and get the same result. You could also take reanalysis data and generate the same result. It should come as no surprise that a BLUE (best linear unbiased estimator) method performs better. It’s pretty simple: give me any dataset and the method will do better. That’s one of the points of memo 3. For folks who don’t look at data and don’t get math, we did a pretty picture. Have a look.

  50. I didn’t properly read the memos of Hausfather and Wickenburg until a short while ago. There’s always a good feeling when more careful work by others is in complete agreement with one’s own expectations. Fig. 4 of the Hausfather memo even implemented a test that I proposed in a comment at Tamino’s during the early discussion – and it gave exactly the result I expected, i.e. reversing the time order makes the apparent variability larger for the early periods, in a similar way as the original figure made the later distributions wider.

    In my view these memos confirm that no result affected by the broadening of the distribution is what it appears to be. In particular that applies to the probability of exceeding some limit like 3-sigma in the temperatures. Numbers obtained from the figure do not represent that probability. The correct probability of exceeding 3-sigma is significantly less, and could be estimated by replacing the widened distribution with an unwidened one.

    • Steven Mosher

      Thanks Pekka. I’m really proud of Sebastian. He worked very hard on that memo and faced some pretty good internal reviewers. I’ll let him know you liked it (he doesn’t read blogs). Kid will make a great scientist someday.

    • In the mathematical normal distribution, a shift of one standard deviation increases the probability of exceeding 3-sigma by a factor of nearly 20, which is the ratio of a 2-sigma exceedance probability to a 3-sigma one. This is not far off what Hansen suggested, because the climate shift is about a standard deviation and the distribution is near normal, so I think it won’t be far from this if every effort is made to remove artificial broadening.

      • Just have a look at Fig. 3 of the recent paper. Try to figure out what the figure would look like when the original PDF is shifted to the new location. It appears clear that what is 9.3% or 9.6% in the figure would drop to a fraction of that, perhaps to something like 2% or 3%. That’s not a minor change.

      • Pekka, I am not completely sure which paper you mean. Several were presented including the memos. Hopefully you are not referring to shifting the baseline period.

      • I was referring to the paper that was discussed in the recent thread with Hansen in the title.

      • That is still about a factor of 20 over the 0.1% originally in the 3-sigma category. The numbers at the tail are not precise enough to guess the factor accurately. For a Gaussian it should be 0.13% above 3 sigma (one in 750 years to put it another way), while 2 sigma is 2.28% (one in 45 years). The extent to which these real curves are not Gaussian has to be evaluated, however.
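The Gaussian tail figures quoted in this exchange can be verified directly, expressing the standard-normal upper tail via the complementary error function (a minimal Python sketch; `gauss_tail` is an illustrative helper, not a library function):

```python
import math

def gauss_tail(z):
    """Upper-tail probability P(X > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p3 = gauss_tail(3.0)   # ~0.00135: about 0.13%, roughly one year in 750
p2 = gauss_tail(2.0)   # ~0.02275: about 2.28%, roughly one year in 45
factor = p2 / p3       # ~17: a 1-sigma shift of the mean turns a
                       # 3-sigma exceedance into a 2-sigma one
```

This reproduces the 0.13% and 2.28% figures above; the exact ratio is about 17, consistent with the "factor of 20" quoted loosely in the discussion, before any allowance for the real curves not being Gaussian.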

      • I agree fully that the likelihood of a far tail increases by a big factor when a Gaussian distribution is shifted. I made a check by copying the graphics into a graphics application. There I redrew the original Gaussian as a curve I could move around. That way I could estimate that the share drops to approximately a third or a fourth.

        While the relative change to 3% is much larger than the change from 3% to 9%, the absolute change is twice as large in the latter.

        I continue to maintain that presenting graphics whose results are misinterpreted by almost everyone to mean something they don’t mean is a serious error that’s not acceptable. For a lay reader 9% is really much more serious than 3% (and not only for a lay reader).

        It may seem that I’m more strict with science than with skeptics. That’s a true impression. I do, indeed, expect more from the scientists. Mostly they provide that, but unfortunately not always.

        I do argue also against skeptics but it would be hopeless to do that every time it’s justified.

      • I would also note that attention was paid to the summer, not the winter, and that is because the winter has a broader distribution making the climate shift effect less dramatic on the frequencies of extreme events, even if the temperature change is just as large. This shows that Hansen had a particular message he wanted to emphasize, which was extreme events and frequency.

      • That he had a specific message and that he supported that with misleading arguments is to me the worst part of this all. That kind of behavior leads me to say that I don’t trust the scientist.

  51. Steven Mosher | January 20, 2013 at 1:24 pm |

    quote
    Captain, it’s SHI ship heat islands.
    unquote

    A 2008 study – “Oceanic Influences on Recent Continental Warming”, by Compo, G.P., and P.D. Sardeshmukh, (Climate Diagnostics Center, Cooperative Institute for Research in Environmental Sciences, University of Colorado, and Physical Sciences Division, Earth System Research Laboratory, National Oceanic and Atmospheric Administration), Climate Dynamics, 2008)
    [http://www.cdc.noaa.gov/people/gilbert.p.compo/CompoSardeshmukh2007a.pdf] states: “Evidence is presented that the recent worldwide land warming has occurred largely in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Atmospheric model simulations of the last half-century with prescribed observed ocean temperature changes, but without prescribed GHG changes, account for most of the land warming. … Several recent studies suggest that the observed SST variability may be misrepresented in the coupled models used in preparing the IPCC’s Fourth Assessment Report, with substantial errors on interannual and decadal scales. There is a hint of an underestimation of simulated decadal SST variability even in the published IPCC Report.”

    HTH

    JF
    BTW, why the blip?

  52. Correct me if I’m wrong, but it seems to me that Steven Mosher believes that curve fitting implies conclusive attribution.
    If that is true, he fails to understand what Prof. Curry has been trying to communicate. In any case, thanks to Mosher for posting here, as it makes the gap more obvious.

  53. My synopsis of my Q & A with Mosher on choice of journal in which to publish and the rigor of the peer-review:

    My Qs, Mosher’s As, in quotes from above. My comments follow each in [ ]s.

    Q: Was it Peer Reviewed?

    A: “Yes. There were three reviewers. I read the reviews and then checked our final draft to make sure that we addressed the points that we thought needed to be addressed.”

    [ Mosher read the reviews. The authors did not read the reviews? Mosher checked the final draft….not necessarily the reviewed version, but the final draft. Not the after-review corrected version, but the final draft. Not ‘we corrected points needing correcting’, but “checked our final draft to make sure that we addressed the points that we thought needed to be addressed” — note: not the points the reviewers thought needed to be addressed, not the points the editor thought need to be addressed, but only the points we (I take it this means only Mosher himself, as there is never a mention of any of the authors being involved in this publishing process) thought need to be addressed. ]

    Q: Was it sent out in its entirety?

    A: ” Yes. I prepared the final draft.”

    [ The meaning of the question was really ‘Was the entire paper, the whole kit-and-kaboodle, all the supplements, links to the data files, etc sent out to the reviewers?’ Mosher says only that he prepared the final draft (which should have only happened after the reviewers comments and subsequent corrections). ]

    Q: Was it sent to 3 world class experts in climate and stats?

    A: “The reviewers identities are not revealed so that I can only infer from their comments. They understood what we were doing and made helpful suggestions. This was in contrast to previous reviewer comments at other journals who seemed to struggle with kriging, so a geostats journal seemed the better fit.”

    [ What? Does he simply mean they dared not question the authors? Does he mean not like those stupid fellows over at the Journal of Geophysical Research? Who had the audacity to want (quoting Mosher here) ‘… to have the methods paper published first before they would consider the results paper’? ]

    [ An aside here: As I understand it, the reason we are in this continuing endless controversy about surface temperatures, which the BEST Project was set up to settle, was that previous studies (such as Mann) were found to use a ‘method’ that produced a certain result independent of the data itself. Can BEST really wonder why JGR asked for a methods paper first? ]

    I have asked Mosher in Comments above if BEST would make the reviewer’s comments available to the rest of the world, so we too can assure ourselves that the reviewers “understood what [BEST was] doing and made helpful suggestions” and weren’t just along the lines of — as I can imagine in my most cynical of moods — ‘oh so thrilled to be actually reviewing a paper by authors that included a Nobel prize-winner…and yes, of course, I love it and thanks for the opportunity and maybe someday when I get my degree maybe I’ll understand all that stuff you did with the data too.’ In my more sensible moments, I do wonder about the effect of ‘offering’ this paper to G&G (well, offering to pay them to publish it, same thing, almost). How could this ‘never had a paper to publish before’ journal refuse….an important paper, with important authors, to get their journal started on the right foot. Given this, I wonder if they wouldn’t have published it if it was written backwards in pig-latin and suspect the same effect on the volunteered or conscripted reviewers.

    Well, maybe it’s just cynical old me… maybe it’s the coming thing for teams of important, prize-winning authors to publish in the vanity-science-press.

    Oh well, time will tell.

    • Kip, if you see anything wrong in the paper, you need to get to the point rather than doing what looks like a fishing expedition.

    • Jim D –> If you wish to discuss the choice of publishing venue (the journal chosen) and the answers Mosher has given to my questions on that topic, this comment thread is the place.

      If you wish to discuss what is ‘wrong or right’ about the other aspects of the paper, that would be some other comment thread.

    • Steven Mosher

      Lots of commenters; you’ll have to wait in line.

      “[ Mosher read the reviews. The authors did not read the reviews? Mosher checked the final draft….not necessarily the reviewed version, but the final draft. Not the after-review corrected version, but the final draft. Not ‘we corrected points needing correcting’, but “checked our final draft to make sure that we addressed the points that we thought needed to be addressed” — note: not the points the reviewers thought needed to be addressed, not the points the editor thought need to be addressed, but only the points we (I take it this means only Mosher himself, as there is never a mention of any of the authors being involved in this publishing process) thought need to be addressed. ]

      1. The authors, the entire team, were given the reviewers’ comments. Then people gave their opinion on which comments were relevant and which were not. Since all reviewers approved the paper, the question was which comments were the most important.
      2. Authors then made a final version.

      That final version was handed to me for final processing.

      Check box number 1: did the final version address the items the team thought were important? For example, a reviewer asks to add an explanation; the team agrees; the final draft contains the explanation. Simple.

      Check box number 2: are the publishing guidelines met?

      Kip, you can ask questions all day long, and in the end you need to raise an issue with the paper, because peer review in my mind is a check box; code and data are the acid test.

      more later

      • Yes, the BEST paper passed peer review in your mind, Steven. You know that Kip and others are raising an issue with the paper: has it passed legitimate peer review? You can’t send a paper to Marvel Comics to get their approval and expect anybody but a clown to believe that your box has been checked. You know that G&G has zero credibility. And on this subject, neither do you.

    • Steven Mosher

      Kip.
      “[ What? Does he simply mean they dared not question the authors? Does he mean not like those stupid fellows over at the Journal of Geophysical Research? Who had the audacity to want (quoting Mosher here) ‘… to have the methods paper published first before they would consider the results paper’? ]”

      No, I don’t mean that they didn’t dare question the authors. They read the paper, they approved it, and they had some suggestions. Some of those were worth adopting. Nothing major.

      Well, I didn’t characterize the other reviewers as stupid. The problem was pretty simple. The results paper was being held back subject to the approval of the methods paper. The reviewers of the methods paper (in one case) didn’t know what a nugget was. There were other personal criticisms in those reviews that really had no place in a peer review. Meh?
      Finally, audacity is your word. My word was odd. According to the physicists on the team, their experience was that results papers typically came first, and when the results depend on a known method there is no reason for a methods paper, period. Since there is no GISS methods paper and no CRU methods paper, I could not argue with their logic. Perhaps you can.

      • Your BEST paper has not passed legitimate peer review, Steven. Everybody knows that, including you. Why don’t you just admit that the BEST team decided to circumvent the peer review process? The reason being obvious and requiring no admission.

    • Steven Mosher

      “[ An aside here: As I understand it, the reason we are in this continuing endless controversy about surface temperatures, which the BEST Project was set up to settle, was that previous studies (such as Mann) were found to use a ‘method’ that produced a certain result independent of the data itself. Can BEST really wonder why JGR asked for a methods paper first? ]”

      Yes, I can wonder. I wonder because kriging is a method known to be BLUE (a Best Linear Unbiased Estimator), yet there is no paper showing that CRU’s method is BLUE or that GISS’s method is BLUE. So it seemed odd. However, if you have questions about the method you can read Cressie, or you can look at Robert’s methods memos; they are aimed at an audience that might not be interested in all the math details.
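
For readers unfamiliar with the acronym: BLUE stands for Best Linear Unbiased Estimator, the optimality property kriging has under an assumed covariance model; the “nugget” mentioned earlier in the thread is the covariance discontinuity at zero separation that models measurement noise. A minimal simple-kriging sketch in Python can make both terms concrete (illustrative only, not the Berkeley Earth code; the exponential covariance model, the `sill`/`rng`/`nugget` parameters, and the sample points are assumptions for the demo):

```python
import numpy as np

def cov(h, sill=1.0, rng=2.0, nugget=0.0):
    """Exponential covariance as a function of separation distance h.
    The 'nugget' is the jump at h = 0 (measurement noise / micro-scale
    variation); it is zero here for exact interpolation."""
    h = np.abs(h)
    return sill * np.exp(-h / rng) + nugget * (h < 1e-12)

def simple_krige(x_obs, y_obs, x0):
    """Predict the value at x0 as a weighted sum of the observations.
    The weights solve C w = c0; under the assumed covariance this makes
    the predictor the Best Linear Unbiased Estimator (BLUE)."""
    C = cov(x_obs[:, None] - x_obs[None, :])  # obs-obs covariance matrix
    c0 = cov(x_obs - x0)                      # obs-target covariances
    w = np.linalg.solve(C, c0)                # kriging weights
    return w @ y_obs, cov(0.0) - w @ c0      # estimate, kriging variance

# Toy 1-D data: three stations and a prediction point between them.
x = np.array([0.0, 1.0, 3.0])
y = np.array([1.0, 2.0, 1.5])
est, kvar = simple_krige(x, y, 1.5)
```

With a zero nugget the predictor honors the data exactly (predicting at an observed location returns the observed value with zero kriging variance), while at unobserved locations the kriging variance is positive, quantifying interpolation uncertainty.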

      • Mr. Mosher,

        I think you are right in one sense. The problem is that there is no GISS methods paper, there is no CRU methods paper. There has generally been no in-depth explanation of methods used, no validation of methods, in some cases, not even careful records kept of methods, used in determining a global surface temperature dataset.

        That’s what BEST was supposed to do — resolve all these ‘lack of’s.

        If JGR editors were asking that your methods paper be peer reviewed first — and by this I assume we are talking about a paper that discloses, discusses, validates, proves, etc ALL the methodological components of BEST’s novel approach, individually and as a total-combined ‘method’ — then I also assume that it was for this very reason — all previous papers and attempts failed to fully validate their methods, leaving the world with a mess and confusion. That was BEST’s job — do it right — don’t make the same mistakes — and this should include testing and validating, proving the method used itself — and I would say first, then run it to get a result the world could have banked its future on.

        Instead, for reasons that seem inexplicable, at least to me, you have not done so. You did not run a methods paper through the system first; instead you took your ball and went home…for a while…then have shown up putting it in play somewhere where the referees won’t be so strict.

    • Steven Mosher

      Kip WRT releasing the reviews.

      I suspect that this would not satisfy you, because you would then want to know if they were authentic or if I made them up. Plus they would not enlighten folks very much. 3 positive reviews with minor requests for changes don’t get you what you want. Also, I’m not entirely clear that the reviews belong to Berkeley Earth; unclear who controls that document from a copyright perspective. In any case, you are free to write to Muller and ask him. You do need to remember that I am an unpaid volunteer there, so fire up your email and ask the boss.

      • This is an Eric Holder stonewalling tactic, Steve. You don’t need to know that. And if I told you, you wouldn’t believe me anyway.

        You would not be believed, because your story is not credible. You are not telling the truth about this, Steven. What has happened to you?

      • Mr. Mosher –> FYI: I have requested GIGS release the Reviewers’ Comments or give Rohde permission to do so.

        I have also requested confirmation that neither Rohde, nor the other authors, nor the BEST Project itself has been or will be charged publication fees and that SciTechnol or GIGS granted a waiver of fees for this paper.

        I’ll let you know what they say.

      • Which would be more potentially/possibly (not saying it was) unethical, Kip? They did pay, or they got a freebie?

      • Monfort –> It is part of OMICS/SciTechnol/GIGS’s corporate model to either demand full payment (they have been accused of not fully disclosing the amounts in advance), a discounted sum, or at their whim, granting a waiver of fees. Some of these discounts are based on country of origin — basically poor scientists or projects get a break.

        When I enquired about fee structures this afternoon, they offered me a full waiver of fees for UVI students wishing to publish in their journal Marine Biology and Oceanographics.

        I would not say either case is unethical. Nothing ethically wrong with paying for a paper to be published, any more than it is to pay to have your wife’s poetry published. But then one can’t blame the world for having an opinion about why one had to take one’s science paper or wife’s poetry to a vanity press.

      • Kip,

        Actually, after hitting the button it occurred to me that I should have said “unseemly” instead of “unethical”. I didn’t correct it with a follow-up comment, because Judith got me on moderation, and I am not going to try to correct something that has a good chance of not appearing anyway.

        I think it looks worse that they didn’t pay. Smacks of favorable treatment, to get some Nobel Laureate business for a “journal” that was nothing but a title, until the BEST opportunity turned up to fill volume 1 issue 1. Mosher said he was assured that the reviewers would not need a kriging tutorial. It would really be interesting to see their approving reviews. My guess is short and sweet.

    • I really feel that some people are being hypercritical here. I believe Muller and Mosher to be rather honest and trustworthy. None of the caviling here about the paper has any direct evidence to support it. Can we just stop the conspiracy theories and address the paper content itself?

      • What are you talking about, David? Do you realize that the validity of the paper and the ersatz “journal” that allegedly reviewed the paper are different issues? Do you believe that G&G is a credible science journal? Do you believe that the BEST team, including a distinguished Nobel Laureate, are happy that their paper landed all the way down at the bottom of the barrel, after failing to pass review from a real journal?

      • Well, it isn’t as bad as ‘There are some people whose papers I will no longer read.’

        I read moshe. He lays it out. Above, about, around, and in between the lines. But I won’t send you my code.
        ========

      • I don’t think they are thinking too clearly. We show in our methods paper that the other methods have larger bias. They don’t care about that. One minute they want to bash CRU, and when you show them something demonstrably better, they forget Feynman, they forget the scientific method, they forget open data and code, and they get personal. Oh well. F

      • You are whining about the wrong issue, Steven. The criticisms of the content of the paper are not your problem. You can handle that. The paper is really OK, as far as estimates of global temp go. If it were authored by Jones, Mann, Schmidt et al, it would have been a slam dunk.

        It’s the phony peer review and your lame, dishonest cover story that is causing you grief. There are more than two geostats journals, Steven. It’s hard to imagine that the one you did not choose was less credible than G&G. We won’t ask you to name it. You would not have been any worse off, if Muller had created his own journal to review the paper. That would have had zero credibility, which is exactly where you are with G&G.

        You could have found a legitimate venue for the paper that the Team does not control. What were you people thinking?

      • Matthew R Marler

        Don Monfort: Do you believe that G&G is a credible science journal?

        How about E&E? That’s one that the AGW proponents tend to bash and the skeptics tend to defend.

        Regardless, the paper, data and code will be judged on their merits. Possibly some econometricians like McShane and Wyner will publish an alternate analysis in Annals of Applied Statistics, with comments and rejoinders. The BEST team are taking the advice of John Tukey and Frederick Mosteller: do not make an entire career out of one data set. Tukey also wrote that whatever is worth doing is worth doing badly.

      • What about E&E, Matt? Did BEST get rejected there too?

        You are missing the point. But I will answer your question. I haven’t thought much about E&E, but my impression is that it is a second or third-tier journal. But it is a journal. G&G is the equivalent of a Dominican Republic faux med-school diploma mill. A guy goes down there on a fishing trip and comes back two weeks later with a medical degree.

        Why did they put the paper in G&G, Matt? And don’t you think it is too hilarious that it landed in volume 1 issue 1 of a pay-for-play journal of last resort, after the big media splash they made about their Greatest of All Time dataset?

        But Mosher is amused at the skeptics, because they are inconsistent on peer review, or whatever. The truth is this incident is just more evidence of corrupt climate science pal review. If Muller had not pissed-off the powers that be, or if the paper had the right names on it, it would have sailed through.

      • Matthew R Marler

        Don Monfort: If Muller had not pissed-off the powers that be, or if the paper had the right names on it, it would have sailed through.

        I can’t tell whether that is a serious comment. Its implication is that you consider the paper to be worthy of publication, in which case your harping on the particular journal is a red herring.

      • Matthew R Marler

        Don Monfort: And don’t you think it is too hilarious that it landed in volume 1 issue 1 of a pay-for- play journal of last resort, after the big media splash they made about their Greatest of All Time dataset?

        I already answered that: I applauded their decision to publish in the first issue of a new journal, and I wrote that the paper, data and code will be judged on their merits, and I mildly implied that purported defects may be addressed in subsequent publications.

        If it were necessary that all potential reviewers agree that a paper be worthy of publication in order to be published, nothing would ever appear in published format.

      • Matt, Matt

        “I can’t tell whether that is a serious comment. Its implication is that you consider the paper to be worthy of publication, in which case your harping on the particular journal is a red herring.”

        I have said that I believe the paper would have been published, probably in JGR or similar, if Muller had not pissed off the Team, or if the paper had been written by Team members. Seems that Mosher’s little leaks of reviewers’ comments support that. Is that clear now?

        If you don’t think that publishing in the initial issue of a trash journal makes any difference, then I guess you would think it is a red herring. My guess is you wouldn’t publish your paper in one of OMICS’s stable of hundreds of pay-to-play “journals”. Or maybe I got you wrong.

        ” …already answered that: I applauded their decision to publish in the first issue of a new journal, and I wrote that the paper, data and code will be judged on their merits, and I mildly implied that purported defects may be addressed in subsequent publications.”

        It is not a new journal with credible people behind it, Matt. Google OMICS. Unless you don’t care to know what you are talking about.

      • Matthew R Marler

        Don Monfort: I have said that I believe the paper would have been published, probably in JGR or similar, if Muller had not pissed off the Team, or if the paper had been written by Team members.

        Then we agree that the paper was worth publishing. You provide a legitimate reason to avoid established journals and go with a new one: personal pique of the editors of the established journal. Limitations of established journals, such as the limitation you highlight in that comment, are the reasons that new journals are established.

      • Matthew R Marler

        Don Monfort: It is not a new journal with credible people behind it, Matt.

        If the BEST paper was worth publishing, as you wrote above, then the journal has credible people behind it. Journals are judged by the papers they publish, as well as vice versa.

      • Matt:”If the BEST paper was worth publishing, as you wrote above, then the journal has credible people behind it. Journals are judged by the papers they publish, as well as vice versa.”

        That is ridiculous, Matt. It is especially ridiculous if you have actually read the information about OMICS that I pointed you to. No reason to discuss this any further with you.

      • Matthew R Marler

        Don Monfort: No reason to discuss this any further with you.

        OK, but you did say that the paper is worth publishing. Your only objections were that (a) the journal was not prestigious enough and (b) Muller may have offended the editors of the prestigious journals.

    • “It was submitted to one journal as I recall”

      however

      “This was in contrast to previous reviewer comments at other journals”

      So submitted to one other journal, or more than one other?

      • Steven Mosher

        one other.

      • Yes, according to Steven and Matt, if you don’t make it into JGR, then the default journal of second choice is G&G. Call me crazy, but I think G&G is more like the last resort. The fact that only one paper has ever been published there should tell one something.