Congressional Hearing on Climate Change: Part II

by Judith Curry

The U.S. House of Representatives Hearing on Climate Change: Examining the Processes Used to Create Science and Policy has commenced.  The House website for the hearing is here.

Live blogging:  Gavin Schmidt, Eli Kintisch, Jay Gulledge

Real time rebuttal (password required): Kevin Trenberth, Andrew Dessler, Gary Yohe.

Charter

Some excerpts:

However, questions have been raised regarding the integrity of the processes employed by those scientists in generating the information for use in public policy. Such process questions have triggered concerns about the robustness of the information being used to support policy shifts, reducing public confidence in certain policy solutions.

The potentially monumental impact of climate change policy on the U.S. economy and nearly all aspects of daily life demand that not only are such policies grounded in science, but that the science itself is generated through processes and procedures that are universally accepted. This hearing will provide an overview of some of the process questions within climate change science and policy that have been raised in recent years.

Whether it is scientific method or regulatory procedure, process is defined as a systematic series of actions that are broadly known and well understood. Given the potential widespread impacts on the U.S. economy, climate change policy has received a level of scrutiny and analysis that rival some of the most important debates the U.S. has engaged in. As such, it is vital that the processes upon which climate change science and policy are based be widely accepted, understood, and adhered to.

Examples of process issues:

  • Climate models
  • Data quality
  • IPCC process
  • EPA Endangerment process

Testimony

Well, Armstrong’s testimony was rather bizarre.  He is pushing simple time-series forecasting methods, doesn’t seem to know the first thing about complex-system modeling using dynamic methods, and apparently knows nothing about climate modeling.  I think it was a good idea to get someone from outside the climate modeling field to comment on the methods used by the climate community, but this should have been someone knowledgeable about scientific computing and complex nonlinear systems, with practical experience.  Armstrong did not fit the bill here.

Muller’s punchline conclusion is:

Based on the preliminary work we have done, I believe that the systematic biases that are the cause for most concern can be adequately handled by data analysis techniques. The world temperature data has sufficient integrity to be used to determine global temperature trends.

Despite potential biases in the data, methods of analysis can be used to reduce bias effects well enough to enable us to measure long-term Earth temperature changes. Data integrity is adequate. Based on our initial work at Berkeley Earth, I believe that some of the most worrisome biases are less of a problem than I had previously thought.

Muller’s testimony provides little detail (and only one figure).  He states that three manuscripts will soon be submitted for publication, and that any statements in his testimony are preliminary.

Anthony Watts objects to Muller’s testimony, see here.  Watts’ concern seems to be that Muller is making public statements about preliminary work for which the data have not yet been made available.  Watts even sent his comments to the House Committee.  The point is this: Muller did not ask to testify.  The timing was not great in terms of the project’s readiness to make public statements about its findings.  I think Muller struck a reasonable balance by making a statement of relevance to the matter at hand, with the appropriate caveats.  His punchline conclusions are arguably premature, but this was a judgment call in terms of what to include in the testimony.

Christy’s punch lines:

Climate assessments like the IPCC have to date been written through a process in which IPCC-selected authors are given significant authority over the text, including judging their own work against work of their critics. This has led to biased information in the assessments and thus raises questions about a catastrophic view of climate change because the full range of evidence is not represented.

Because this issue has policy implications that may potentially raise the price of energy significantly (and thus essentially the price of everything else), the U.S. Congress should not rely exclusively on the U.N. assessments because the process by which they were written includes biased, false, and/or misleading information about one of the most murky of sciences – climate. In my opinion, the Congress needs at least one second opinion produced by well-credentialed climate scientists but overseen by a non-activist team which includes those with experience in the scientific method, the legal aspects of “discovery,” and who simply know what is important in answering the questions at hand.

For hockey stick addicts, Christy’s testimony is a must-read.  He describes how the hockey stick made it into the TAR (note Christy and Mann were both lead authors of the relevant chapter).

Glaser’s punch line:

In my view, EPA failed to observe basic requirements set forth in applicable law as to how a regulatory determination such as the Endangerment Finding should be made. These flaws are not technical. They go to the fundamental fairness and transparency of the way EPA arrived at its Endangerment Finding and the quality of the information on which EPA relied. The procedures EPA failed to observe are designed to ensure the integrity both of the decision-making process and the ultimate result an agency reaches. EPA’s failure to observe these basic requirements therefore undermines confidence in the substantive scientific conclusions in the Endangerment Finding.

A well-argued statement, with interesting examples of “process” issues re the EPA.

Emanuel’s punchline:

I am here today to affirm my profession’s conclusion that human beings are influencing climate and that this entails certain risks. If we have any regard for the welfare of our descendents, it is incumbent on us to take seriously the risks that climate change poses to their future and to confront them openly and honestly.

Far from being alarmist, scientists have historically erred on the side of underestimating risk.

Standard stuff. But according to the live blogging pundits, Emanuel did a superb job in oral testimony and in answering questions.

Montgomery’s punchline:

First, if the U.S. were to act without solid assurance of comparable efforts by China, India, and other industrialized countries, its efforts would make almost no difference to global temperature, especially if industrial production and associated emissions are simply exported to other countries. Second, even global action is unlikely to yield U.S. benefits commensurate with the costs it would incur in making steep GHG emission cuts. Third, globally, even with moderate emission reductions, benefits would not be much greater than costs, and, fourth, conflicting economic interests will make international agreements on mandatory limits unstable.

Not my area of expertise, but his arguments seem reasonable to me.

293 responses to “Congressional Hearing on Climate Change: Part II”

  1. Thank you, Professor Curry, for the update.

    The root of the problem begins with Appropriations.

    Could you briefly explain the differences between this Committee and the Committee that Appropriates funds for Science?

    Thanks,
    Oliver

    • I had the good fortune to personally observe the unhealthy relationship between NAS and the Chair of the House Appropriations Subcommittee for Science in 2008.

      Former President Eisenhower warned of the danger of a “Scientific-Technological Elite” in his farewell address on 17 Jan 1961:

      Web: http://mcadams.posc.mu.edu/ike.htm

      “Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.”

      “In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

      Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research.”

      “Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.”

      “The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.”

      “Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”

      “It is the task of statesmanship to mold, to balance, and to integrate these and other forces, new and old, within the principles of our democratic system – ever aiming toward the supreme goals of our free society.”

      Video: http://www.youtube.com/watch?v=GOLld5PR4ts

      • Today’s hearing was great, but Follow the Money !

        In reply to my question, “Is the Committee hearing testimony today the Committee that will eventually appropriate funds for Science?”

        Prof. Curry [March 31, 2011 at 3:45 pm] replied “No,” and David Wojick [March 31, 2011 at 4:15 pm] added, “No is correct, but the situation is not that simple. Every government program requires four committee approvals, two each in the House and Senate. One of the two approves the program design (authorization) and the other approves the funding (appropriation). This is the House authorization committee. But they are the ones who direct the work, subject to appropriation.”

        To stop distortions of climate data that arose from the cozy relationship between NAS President, Dr. Ralph Cicerone and Congressman Alan B. Mollohan, Chair of the House Subcommittee on Science, US House Appropriations Committee, investigators must Follow the Money !

        Today I witnessed a new interest in honest science on PhysOrg.com, and I expect that the editors of Science, Nature, PNAS, etc. and heads of federal research agencies, NASA, DOE, EPA, NSF, NOAA, etc. will also show a renewed interest in intellectually honest efforts [1-5] to understand Earth’s climate and weather, . . .

        If investigators Follow the Money !

        References:

        1. “Weather Action”
        http://www.weatheraction.com/

        2. “Super-fluidity in the solar interior: Implications for solar eruptions and climate,” Journal of Fusion Energy 21 (2002) 193-198.
        http://arxiv.org/pdf/astro-ph/0501441v1

        3. “Earth’s heat source – the Sun,” Energy & Environ 20 (2009) 131-144
        http://arxiv.org/pdf/0905.0704

        4. “Neutron Repulsion,” The APEIRON Journal, in press (2011) 19 pages
        http://arxiv.org/pdf/1102.1499v1

        5. “The Sky Dragon Slayers”
        http://co2insanity.com/2010/11/26/slaying-the-sky-dragon/

    • Good news!

      “House lawmakers . . . want to obliterate the Obama administration’s climate rules.”

      “The chamber voted 255-172 . . . to nullify the EPA’s greenhouse gas regulations and the scientific finding they’re based on. No Republicans opposed the bill, but 19 Democrats broke ranks with their party to support the measure.”

      http://www.politico.com/news/stories/0411/52759.html

      I expect that leaders of NAS, federal funding agencies, and major research journals will “see the handwriting on the wall.”

      All fields of science will benefit from this action.

      With kind regards,
      Oliver K. Manuel

  2. “Why should I make the data available to you, when your aim is to try and find something wrong with it?”

    Says it all. If you are going to change the world through the use of force, it’s not asking too much to insist that you permit your work to be checked.

    • “Why should I make the data available to you, when your aim is to try and find something wrong with it?”

      That sentence has soooooo many things wrong with it, and with science, on so many levels that it’s hard to take it seriously. It evokes instant distrust and complete dismissal of anything and everything the fellow ever did. This is the sort of thing you would expect a 9-year-old to say.
      Ok……….. has the work ever been checked?

    • I do not trust Muller, so I share Watts’ concerns. I predict a whitewash from Best. Today’s testimony may be telling.

      • So far Muller’s comments (as reported by Kintisch/Schmidt/Gulledge) seem to be balanced and defensible.

  3. Wow! Looks like the comment volume here is going to drop by about 1/2:


    Let me now address the problem of Poor Temperature Station Quality
    Many temperature stations in the U.S. are located near buildings, in parking lots, or close to heat sources. Anthony Watts and his team has shown that most of the current stations in the US Historical Climatology Network would be ranked “poor” by NOAA’s own standards, with error uncertainties up to 5 degrees C.

    Did such poor station quality exaggerate the estimates of global warming? We’ve studied this issue, and our preliminary answer is no.

    The Berkeley Earth analysis shows that over the past 50 years the poor stations in the U.S. network do not show greater warming than do the good stations. Thus, although poor station quality might affect absolute temperature, it does not appear to affect trends, and for global warming estimates, the trend is what is important.

    Our key caveat is that our results are preliminary and have not yet been published in a peer reviewed journal. We have begun that process of submitting a paper to the Bulletin of the American Meteorological Society, and we are preparing several additional papers for publication elsewhere.

    NOAA has already published a similar conclusion – that station quality bias did not affect estimates of global warming – based on a smaller set of stations, and Anthony Watts and his team have a paper submitted, which is in late stage peer review, using over 1000 stations, but it has not yet been accepted for publication and I am not at liberty to discuss their conclusions and how they might differ. We have looked only at average temperature changes, and additional data needs to be studied, to look at (for example) changes in maximum and minimum temperatures.

    In fact, in our preliminary analysis the good stations report more warming in the U.S. than the poor stations by 0.009 ± 0.009 degrees per decade, opposite to what might be expected, but also consistent with zero. We are currently checking these results and performing the calculation in several different ways. But we are consistently finding that there is no enhancement of global warming trends due to the inclusion of the poorly ranked US stations.

    http://berkeleyearth.org/Resources/Muller_Testimony_31_March_2011
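
    As a rough illustration of the comparison Muller is quoting (a good-minus-poor trend difference of 0.009 ± 0.009 degrees per decade, consistent with zero), here is a minimal sketch in Python. The station counts, noise levels and assumed underlying trend are invented, not taken from BEST, and a real analysis would also have to handle serial correlation, spatial weighting and inhomogeneities; this only shows the shape of the calculation.

    ```python
    # Minimal sketch (synthetic data, not the BEST dataset): fit a decadal trend
    # to the mean anomaly of a "good" and a "poor" station group and ask whether
    # the difference is distinguishable from zero.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1961, 2011)              # 50 years of annual anomalies
    true_trend = 0.02                          # deg C/yr, assumed for the toy data

    def make_group(n_stations, extra_noise):
        # each station = common trend + station noise (+ optional extra siting noise)
        base = true_trend * (years - years[0])
        return base + rng.normal(0.0, 0.2 + extra_noise, size=(n_stations, years.size))

    good = make_group(200, 0.0).mean(axis=0)   # group-mean anomaly series
    poor = make_group(800, 0.3).mean(axis=0)

    def trend_and_se(y):
        # ordinary least-squares trend and its standard error, in deg C per decade
        x = years - years.mean()
        slope = np.sum(x * (y - y.mean())) / np.sum(x**2)
        resid = y - (y.mean() + slope * x)
        se = np.sqrt(np.sum(resid**2) / (y.size - 2) / np.sum(x**2))
        return 10 * slope, 10 * se

    tg, seg = trend_and_se(good)
    tp, sep = trend_and_se(poor)
    diff, diff_se = tg - tp, np.hypot(seg, sep)   # rough; ignores serial correlation
    print(f"good-minus-poor trend: {diff:+.3f} +/- {diff_se:.3f} deg C/decade")
    ```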

    • But but but you guys already said BEST was useless and not to be trusted.

      • Who are “you guys,” hunter?

        What I do find interesting is how Watts was a big fan of BEST, uh, until it seemed that the results of their research would deflate his raison d’etre.

      • Joshua,
        If the derived average of world temps has indeed risen ~1.0 or even ~1.5 degrees over the past ~150 years the problem for your side only gets worse.
        Where is the disaster?
        Where is the apocalypse?
        You predicted disaster and inundations of Manhattan etc.
        They ain’t happening.
        And, by the way, I believe that BEST is not nearly done with its analysis yet, and your trollish pal, ianash was well called out for bs.

      • IIRC, the urban/rural sitings they compared were a random selection IN JAPAN!! Very little to do with the issue of the big landmasses. And very different circumstances.

        IAC, as AW points out, that 1.2°C figure just comes out of the blue, or maybe Dr. M’s fundament. It is nowhere advanced even by the more drooling of the Alarmist front agencies (GISS, NOAA, etc.) .

        Muller sounded and looked like he has a spine of foam rubber.

      • Harold H Doiron

        Hunter has stated the crux of the CAGW issue that I have also observed in other threads at Climate, Etc. Where is the problem that needs immediate, drastic, and coordinated global political action to solve?……and pleeeeaaaase don’t allow our politicians to take this unwarranted immediate and drastic action based on the 50-year predictions of unvalidated computer models that do not even correlate well with global temperature trends of the last decade, the start of the next 50 years.

        Why don’t we believe that humans can adapt to the observed gradual global temperature trends of the last 150 years, whatever the true root cause of these trends?

      • ian (not the ash)

        Just a note of clarification. The piece was penned by Willis E. (presumably Anthony is in general accordance).
        cheers, ian

    • “Wow! Looks like the comment volume here is going to drop by about 1/2”

      Oh, I don’t know. I don’t believe that there are many skeptics here that question the temperature record to any great extent. It may be different over at WUWT though.

      Also, it’s the SST record that’s important. I look forward to BEST’s analysis of that assuming they get the cash to do it.

      • A bit of hyperbole there, Rob.

        But what I do find confusing is how folks at WUWT alternately say that they don’t question whether the climate is warming (only to what degree or whether it is anomalous), and then run posts about how much it snowed in the Sierras to suggest that warming isn’t taking place.

        But I have seen quite a number of posts here about the UHI. Probably, however, your right that the number of posts about that won’t drop: the fact that data analysis shows no effect on trends (well, an insignificant affect that the “good”‘ stations show more warming) won’t get in the way of some folks.

      • er, you’re. If this site had a preview, I would make slightly fewer mistakes.

      • er, “effect”

      • I have never been too concerned about UHI, although I am sure it exists. But Muller is talking about the station quality issue, not UHI, a different issue. Nor can the quality issue be resolved by simply comparing the overall average for all good and all bad stations. Statistical theory is not that simple.

        My interest is in the mathematical problems with the area averaging method everyone uses, which I am waiting to see how BEST handles, if at all. Especially how to estimate confidence intervals for averages of averages of averages of averages. I don’t think the estimated global temperature is accurate to within the estimated temperature change, which means we do not know if it has warmed or not.
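
        To make the “averages of averages” point concrete, here is a toy sketch of the usual two-stage calculation: stations averaged into latitude bands, bands combined with area weights. Everything in it is invented (station placement, noise, the 10-degree bands), and the naive error propagation at the end assumes independent, unbiased cells, which is exactly the kind of confidence interval whose validity is being questioned here.

        ```python
        # Toy sketch of "averages of averages": hypothetical station anomalies are
        # averaged into latitude bands, then bands are combined with cos(latitude)
        # area weights. The data and the naive uncertainty are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)

        # hypothetical stations: (latitude, anomaly in deg C)
        stations = [(rng.uniform(-60, 70), rng.normal(0.5, 0.8)) for _ in range(500)]

        # step 1: average stations within 10-degree latitude bands ("grid cells")
        cells = {}
        for lat, anom in stations:
            cells.setdefault(int(lat // 10), []).append(anom)

        cell_lat = np.array([10 * k + 5 for k in cells])           # band centre latitude
        cell_mean = np.array([np.mean(v) for v in cells.values()])
        cell_se = np.array([np.std(v, ddof=1) / np.sqrt(len(v)) for v in cells.values()])

        # step 2: area-weighted average of band means (weights ~ cos latitude)
        w = np.cos(np.deg2rad(cell_lat))
        w /= w.sum()
        global_mean = np.sum(w * cell_mean)
        naive_se = np.sqrt(np.sum((w * cell_se) ** 2))             # assumes independent cells

        print(f"global anomaly ~ {global_mean:.2f} +/- {naive_se:.2f} deg C (naive)")
        ```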

      • Muller is talking about how the station quality issue relates to claims about the effects of the UHI – about whether described temperature trends are a function of station quality and UHI.

        Please note:

        Many temperature stations in the U.S. are located near buildings, in parking lots, or close to heat sources…

        Whether you have been concerned about the UHI or not, it has been a long-standing and ubiquitous argument among “skeptics/deniers” to question the validity of findings of anomalous warming.

        Ask yourself this question. If BEST came out with conclusions that UHI showed a consistent and significant effect, how would you have responded?

      • I make a distinction between UHI, which is population dependent, versus local heat effects, which is what the quality issue is about. UHI is a well established phenomenon, so much so that EPA was considering a new program on it as a health issue some time ago. Local effects (LHI) can occur in rural areas. Paving a parking lot for example. Note too that LHI is likely to be step-wise and sporadic, while UHI is gradual.

        In principle one can correct for UHI, but LHI is just bad data, with no way to detect it or correct for it. The proper way to treat LHI is to incorporate large uncertainty ranges along the lines of the guidance: 1, 2, 3, 4 or 5 degrees depending on the case. This could be done using the Watts people’s findings.

        But the whole temperature modeling scam depends on not having confidence intervals. Means without confidence intervals are statistically meaningless but the whole AGW issue depends on just that, meaningless means. Area averaging is not a valid statistical method, precisely because confidence intervals cannot be estimated.

        As to your question, if BEST actually looks at UHI, as opposed to LHI, and finds a significant trend I might believe them. If they find nothing then I will write them off as a hoax, because UHI clearly exists and has increased significantly over the last 100 years.

      • I make a distinction between UHI, which is population dependent, versus local heat effects, which is what the quality issue is about.

        Ok, thanks.

        As to your question, if BEST actually looks at UHI, as opposed to LHI, and finds a significant trend I might believe them. If they find nothing then I will write them off as a hoax, because UHI clearly exists and has increased significantly over the last 100 years.

        Well – they found a “trend,” as it were (for the effect they were measuring) – but they found a statistically insignificant greater warming with “good” stations. I doubt whether they are focusing on whether there is a “trend” per se, as opposed to whether station quality has a qualitative impact on analysis that examines temperature trends over time.

        I assume that you are not saying that you would dismiss their findings unless they conclude that an UHI has invalidated findings that temperatures have increased?

      • Joshua, once again the issue of good versus bad stations and the issue of UHI are two different issues. So whatever BEST found regarding good and bad stations is not about UHI. Nor do I think that what they have found is significant.

        But you are correct that I am not demanding that UHI somehow invalidate warming in the data. (It may invalidate the result that land warming is greater than SST warming, as Watts claims.) My view is that the statistical methods themselves are so imprecise that this kind of hair splitting is irrelevant.

      • Joshua,

        One thing Muller said was that his preliminary results had no adjustments but agreed with the other three. He was surprised to see the agreement, but after adjustment they may disagree. I doubt there will be much difference, but it is a little early yet.

      • David,

        Are they looking at the original raw data? Or adjusted? I was under the impression that the original data gets adjusted multiple times before it hits the records and then adjusted periodically thereafter. If the crappy data helps massage the quality data before it ever hits the records, they cannot know.

        Further — every bad station is bad for different reasons at different times. Would love to know how they think that they can measure it when they don’t know how or why.

      • Stan, the problem is that we don’t know what they are doing, so all we have are their unsubstantiated conclusions. Way back on March 20, Climate Progress quoted Muller as saying “None of the effects raised by the [skeptics] is going to have anything more than a marginal effect on the amount of global warming.”

        This is as bad as I have ever seen it.

      • steven mosher

        The best skeptical paper published to date estimated that the UHI effect might be as high as .3C.

        Recall that the land is 30% of the total earth. Even if the effect is .3C, in the end that ends up being .1C of the total. Mouse nuts.

        UHI is real, but it’s small, and the fact that it only happens in the land record nearly makes it a non-issue.

        If UHI were HUGE you would see that UAH and RSS would be drastically different than GISS. They are not. So, UHI is real. But its overall effect in the land record is small, and the fact that the land record is 30% of the total record makes it even less important. Still, it’s important to figure out if it’s .05C (Jones) or more like .3C (McKitrick), or something in between. If it were huge we would see it very easily by just looking at all rural records. We don’t. We don’t because it’s small.
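
        The scaling in that comment, written out with the figures quoted there (not an independent estimate):

        ```latex
        \Delta T_{\text{global}} \approx f_{\text{land}} \times \Delta T_{\text{UHI,land}}
                                 = 0.3 \times 0.3\,^{\circ}\mathrm{C} \approx 0.1\,^{\circ}\mathrm{C}
        ```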

      • steven mosher

        UHI is a mesoscale phenomenon. It is driven by the following factors:

        1. The differential characteristics of the urban and rural hydrology
        2. canopy cover
        3. building height
        4. heat capacity of manmade surfaces
        5. radiation canyons
        6. sky view

        Typically, when wind speeds top 7 m/sec the UHI effect is negligible.

        Siting issues are micro climate issues.

    • Thus, although poor station quality might affect absolute temperature, it does not appear to affect trends, and for global warming estimates, the trend is what is important.

      You just don’t get it.
      A poor-quality station doesn’t tell you much of anything at all. Poor-quality station data are so noisy, as well as biased, that it’s difficult or impossible to tell anything at all from them. And not all noise and biases can be filtered out – many of them are non-linear, non-monotonic, and/or may vary with time.
      Putting it another way, if, for example, one reading has an error of two or three degrees and another reading taken at a different time also has a two or three degree error, and not even necessarily in the same direction, you’re on a hiding to nothing trying to detect a trend of around a tenth of a degree.
      Even if you can keep the spread of errors down to less than a degree or so, it’s still very difficult to detect trends of an order of magnitude less.
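
      A minimal Monte Carlo sketch of this point, with invented numbers: a true trend of 0.1 deg C per decade is fitted to 50 annual readings whose individual errors range from small to the several degrees mentioned above. It looks at a single station only; averaging many stations reduces random error, though not biases shared across stations.

      ```python
      # Monte Carlo sketch: how well can a ~0.1 deg C/decade trend be recovered from
      # a single station whose readings carry large random errors? All numbers are
      # illustrative assumptions, not measurements.
      import numpy as np

      rng = np.random.default_rng(2)
      years = np.arange(50)                     # 50 annual values
      true_trend = 0.01                         # deg C/yr, i.e. 0.1 deg C/decade

      def fitted_trend(noise_sd):
          y = true_trend * years + rng.normal(0, noise_sd, years.size)
          x = years - years.mean()
          return 10 * np.sum(x * (y - y.mean())) / np.sum(x**2)   # deg C/decade

      for sd in (0.2, 1.0, 2.5):
          trends = np.array([fitted_trend(sd) for _ in range(2000)])
          wrong_sign = np.mean(trends < 0)
          print(f"reading error sd={sd} degC: fitted-trend spread +/-{trends.std():.2f} "
                f"degC/decade, wrong sign {100*wrong_sign:.0f}% of the time")
      ```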

    • Has anyone considered that if bad stations are warming at the same rate as good stations then both are crap?

  4. Did Emanuel really say that Arrhenius’ climate sensitivity estimate was “solid”??

  5. does anyone know the location for the video of the hearing?

  6. “Watts concern seems to be that Muller is making public statements about preliminary work for which the data has not yet been made available.”

    Glass houses, stones, and whatnot: http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf

  7. Gavin just said: “If the central issue is whether man-made CO2 is having a major impact on the climate, then I would have to say ‘case closed’ ;-) ”

    Does this mean the science is now said to be “settled”?

  8. I think the value of Muller’s BEST will be in how it handles the UHI effect. I am interested to see how this compares to the previous adjustments done by others.

    I don’t expect it will make any seminal changes to the data trends, but could show some confirmation bias issues at play. I pretty much accept the 150 year temperature record, and having access to non-adjusted data quite frankly is all I ever wanted to see.

    If the adjustments don’t make much difference, the scientists would be much better off using unadjusted data as far as trust in climate science goes at this point. The closer you are to the raw data, the more likely your results are valid IMO.

    Proving some result through many dubious data processing steps and advanced statistical methods immediately throws up red flags. Torturing the data is weak science. For example, using PCA and assuming it has some mystical capability to separate temperature and moisture signals in tree ring data shows a fundamental trust in statistics that is misplaced. This is effectively using a “magical black box” which provides answers you like better than the less processed data.
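
    As a small illustration of that last objection, here is a synthetic sketch: two correlated latent signals (call them “temperature” and “moisture”) are mixed into a set of noisy proxies and PCA is applied. The components that come back are directions of maximum variance, not the original signals. None of this uses real tree-ring data; it is only meant to show why the separation cannot be taken on faith.

    ```python
    # Synthetic sketch: PCA applied to proxies that mix two correlated latent
    # signals does not recover the signals themselves; the leading component
    # captures their shared variance.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 300
    temperature = np.cumsum(rng.normal(0, 0.1, n))                     # smooth latent signal
    moisture = 0.6 * temperature + np.cumsum(rng.normal(0, 0.1, n))    # correlated with it

    # 20 proxies, each an arbitrary mix of the two signals plus noise
    mix = rng.normal(0, 1, size=(2, 20))
    proxies = np.column_stack([temperature, moisture]) @ mix + rng.normal(0, 0.3, (n, 20))

    # PCA via SVD of the centred proxy matrix
    X = proxies - proxies.mean(axis=0)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    pc1 = U[:, 0] * s[0]

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    # PC1 typically correlates strongly with BOTH latent signals, i.e. it has not
    # isolated "temperature" from "moisture".
    print(f"|corr(PC1, temperature)| = {abs(corr(pc1, temperature)):.2f}")
    print(f"|corr(PC1, moisture)|    = {abs(corr(pc1, moisture)):.2f}")
    ```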

  9. Dr. Curry,
    I have not heard any testimony but am under the impression Scott Armstrong knows a great deal about complex modeling and has rejected it as failed (at least long term modeling). Similar to Orrin Pilkey of Duke University who wrote the book (with his daughter) Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future. They also wrote an interesting peer reviewed paper on the topic available at http://nctc.fws.gov/EC/Resources/Decision_Analysis/List_Serve_Attachments_files/PilkeyArticle2008.pdf

    • Good article. I was involved with economic modelling off and on (some big offs) between 1966 and 2002, though I’m not a modeller. I recall, as a good example of the potential flaws of modelling, an instance where the Economic Planning & Advisory Council (EPAC) commissioned modelling of the same issue from three leading sources (Dixon, Murphy & Brain). They each produced very different results, and it was not at all clear why, given similarities in their modelling techniques (using computable general equilibrium models). When we called them in to a meeting, it emerged that the differences arose entirely from the different assumptions made. That is, it was the subjective interpretation of each modeller that was the critical input; we might as well have just ascertained their views, as our judgement had to be made on our assessment of each set of assumptions. In general, I found that the critical issues with economic modelling were getting the assumptions and specifications right, and being able to derive an economically plausible story from the results – I was often able to show modellers that the results they presented were clearly not in accordance with reality or common sense. With AGW modelling, the underlying story must make sense if the results are to be accepted.

      • Faustino,
        Interesting story. One of the key points the climate modelers have not grasped is Lesson #4 on hindcasting. I used to use a computer model to predict the stock market. It cost me a fortune. I could tune parameters until I had a perfect hindcast but it had no predictive value at all. For some reason you get climate modelers who think the model is something real. It isn’t. It has a number of artificial and arbitrary parameters set to constrain the model into a “somewhat reasonable” reflection of climate but it isn’t real. It isn’t even close to real. They just cannot grasp it.

        Pilkey has extensive experience with environmental modeling of shorelines. He knows the models NEVER work. Pilkey and Armstrong are in communication with each other. Pilkey has reviewed some of Armstrong’s papers. Armstrong has been doing scientific forecasting for about 25 years. Simple models ALWAYS work better than more complex models.
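
        The hindcast-tuning trap described above can be shown in a few lines, using a synthetic noisy series rather than real prices or temperatures: give a model enough free parameters and it will fit the past almost perfectly while forecasting poorly.

        ```python
        # Sketch of hindcast overfitting on a synthetic series: a flexible model
        # fits the training ("hindcast") period far better than a simple one, but
        # usually forecasts far worse outside it.
        import numpy as np
        from numpy.polynomial import Polynomial

        rng = np.random.default_rng(4)
        t = np.arange(80)
        series = 0.02 * t + np.cumsum(rng.normal(0, 0.3, t.size))   # trend + random walk

        train, test = slice(0, 60), slice(60, 80)

        def rmse(a, b):
            return float(np.sqrt(np.mean((a - b) ** 2)))

        for degree in (1, 12):
            model = Polynomial.fit(t[train], series[train], degree)  # tune on the hindcast
            fit = model(t)
            print(f"degree {degree:>2}: hindcast RMSE {rmse(fit[train], series[train]):.2f}, "
                  f"forecast RMSE {rmse(fit[test], series[test]):.2f}")
        ```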

  10. I’m just curious. If the full BEST analysis shows no UHI effect, does the type of analysis exemplified by the following post qualify as “junk science” or “scientific fraud?”

    How UAH (University of Alabama, Huntsville) satellite temperature data supports Urban Heat (UHI) as a real and significant factor when estimating global temperatures.

    http://wattsupwiththat.com/2010/12/16/uah-and-uhi/

    I see many snazzy, scientific looking graphs there – yet apparently another analysis of data shows that a UHI has no effect on temperature trend observation.

    What is the precise definition of “junk science,” or “scientific fraud?”

    • Not junk or fraud, it just makes the issue controversial, as it already is. UHI is known to exist but no one has found a satisfactory way to find it in the temperature data. This is largely because of the lack of homogeneity in the spatial data. That is, thermometers in any given area do not move up and down together. Then too we already have several studies that claim to show that there is no UHI effect in the temperature data. These are not convincing, because the UHI effect is known to exist.

      What Watts people have shown is that we have a bunch of bad data. There is no magic way to get good results out of bad data. And that is just for the USA.

      • We’ve now seen Richard Muller “shoot from the hip” twice, re early results of his BEST group’s studies. Not an encouraging sign, imo.

        “Ready, FIRE! um, aim?” — we’ve already seen plenty of this. Sigh.

      • I haven’t seen any shooting from the hip. He’s made several fairly conservative statements. The group has 3 papers in progress, I anticipate that at least one will be submitted and publicly posted in the coming weeks.

      • Apparently shooting from the hip translates into reporting preliminary findings that Peter doesn’t agree with.

      • It certainly includes leaking and stating findings for which there is no documentation to analyze. This practice is called “science by press release” and Muller is doing a lot of it. So I agree with Peter, this is not an encouraging sign. Since we have no way of knowing what they have done, we have no way to respond to what they are saying. This has been going on for several weeks so trust is pretty well gone.

        It appears that BEST is going to conclude that the data says now what the same data said before, which is not news and which validates nothing. I am reminded of Wittgenstein’s line about buying multiple copies of the same newspaper, to verify the facts.

      • There have been several interviews, but no press releases. Muller did not ask to testify. There is a delicate balance of informing people of the project and not rushing to publication prematurely. They haven’t done a bad job in balancing this, IMO.

      • “Press release” is a metaphor for putting out conclusions without the supporting analysis. The AGW blogosphere is already awash with leaks and statements by Muller to the effect that skepticism is wrong.

      • Well, why then is Muller praising Watts and McIntyre and suggesting a climate ARPA to fund this kind of work? The blogosphere is awash with all sorts of #$%^

      • The AGW blogosphere is already awash with leaks and statements by Muller to the effect that skepticism is wrong.

        What about these statements from today (taken from the Live Blog session):

        12:53 Eli Kintisch: Muller: warming potential “concerns me” though he says that small warming we’ve seen thus far is falsely said to be responsible for many effects currently occurring…

        12:55 Jay Gulledge: I heard Muller say that claims that the small warming so far has “harmed the Earth” are “unscientific.”

        These seem to have led Gavin to remark:

        12:56 Gavin Schmidt: Muller is also confusing the issue.

      • Muller is saying that temperature skepticism is wrong. Your quotes are about climate impacts of the supposed warming.

      • Your quotes are about climate impacts of the supposed warming.

        Point taken, but I’d think his opinion regarding impacts is more damning to the alarmist position.

      • Quite possibly but that is not what his project is about, so his criticism of alarmists is basically irrelevant.


      • I am reminded of Wittgenstein’s line about buying multiple copies of the same newspaper, to verify the facts.

        That is a great line.

        I think “shooting from the hip” implies stating conclusions before the data are in, making rash statements about the implications of preliminary findings, etc.

        Stating that you have preliminary results, and describing what they are, doesn’t meet the bar, IMO.

        Honestly, the whole notion of interpreting “signs” seems pretty unscientific, to me. He stated what their preliminary findings are, and cautioned that they are preliminary. Sometimes a cigar is just a cigar.

      • David Wojick

        It’s a shame that Muller let a bit of the cat peek out of the bag before having all the facts, but let’s wait and see what the Berkeley report tells us when it is published. If it’s just another “jest fine” whitewash, that will be pretty easy to see, but let’s assume for now that this will not be the case and that some serious flaws will be flagged.

        Max

      • Just checking Muller’s written statement, it appears that the “poor station siting” issue (Watts) has been included in the Berkeley study, but that the UHI effect per se has not. This fits with my understanding as I read it from Dr. Curry on an earlier thread covering this.

        Let’s hope the report will specifically mention (for example) that there have been studies showing that the UHI distortion could be anywhere from a negligible 0.006C per decade (as reported by IPCC) to several tenths of a degree over the 20th century (as reported by several studies), but that this topic has not yet been investigated. Again: “the whole truth”

        Max

      • Roger Pielke, Sr. doesn’t seem to agree with you:
        http://pielkeclimatesci.wordpress.com/2011/04/01/comments-on-the-testimony-of-richard-muller-at-the-united-states-house-of-representatives-committee-on-energy-and-the-environment/

        Dr. Pielke finds Muller’s statement to be “premature” and “contradictory,” and cites Anthony Watts’ rebuttal with approval.

        Pielke Sr. concludes,

        “I completely agree with Anthony’s submission to the House committee in response to Richard Muller’s testimony. Richard Muller has an important new approach to analyze the surface temperature data. We hope he adopts a more robust and appropriate venue to present his results.”

        His remarks are worth reading.

      • We also know that his explanation about Climategate was not accurate. He obviously didn’t take the time and effort to understand what really happened. He’s a hip shooter.

      • UHI is known to exist but no one has found a satisfactory way to find it in the temperature data.

        I get what you’re saying, but I have to laugh because “skeptics/denialists” would twist a statement like that from the other side as being an example of “agenda-driven” science.

        Don’t you find it even the slightest bit curious that folks like Watts supported the statistical approach of BEST before the preliminary results were in, but now find them to be flawed methodology?

        As for the thermometers in a given area not moving up or down in sync with each other – is that a prevailing characteristic or only true of some small percentage?

        Watts’ support was predicated on the fact that skeptics have been calling for an objective analysis of the data for a long time, and Muller claimed to be doing just that. I no longer believe him and I suspect Watts does not either, given his letter to the Committee. Mind you, given that Berkeley is a stronghold of AGW activism, I am not surprised. Muller appears to be framing the analysis from a pro-AGW standpoint.

        I have never seen any data on the magnitude and distribution of the inhomogeneity, but I imagine it is large. That is the kind of thing that an objective evaluation would be looking at, in order to make some attempt at estimating the uncertainty, which is presently completely absent. This is what Muller’s group should be doing.

      • Anthony Watts

        Joshua, not flawed, simply not done, not even tried yet.

        Promise unfulfilled

      • Anthony,

        You certainly seemed to approve of the process prior to the announcement of preliminary results. As I recall, in fact, you posted about your inside knowledge of their methodology, and your approval of it.

        Now they have announced preliminary results, and made it clear that the results come with a caveat – and you are “disappointed?”

        Willis posted at your site the other day (also after preliminary results were released) to criticize the methodology itself.

        So let me see if I get this right. Your only objection is that they announced preliminary results with the caveat that it was preliminary, and you still support the methodology as you did prior to their announcement, and you are in disagreement with Willis because your assessment of their methodology remains consistent: you supported it before they announced any findings and you will continue to support it no matter what their final results are.

        Is that right?

      • I think it will be hard for Anthony to respond with all of the above words you stuffed into his mouth. I think with only 2% of the data evaluated so far, still being raw and unadjusted, any kind of assessment is premature.

        IF the BEST team are able to go forward with the total process they promised, and be open and transparent with the adjustments and homogenization they find appropriate, then some progress toward the truth, whatever that is, might be possible.

        However it compares to the past outputs of homogenized and adjusted GISS data sets will be the point where my decision will be made as to the validity of the “standard data sets” post-adjustment, before I feel confident in using them to forecast with. I would still prefer clean raw data from [all/as many] stations as were reporting daily stats.

      • A word of caution, Richard?

      • Richard,

        Seems to me that Anthony was aware of their methodology prior to Muller’s statements about the initial results (with appropriate caveats); in fact he posted about his inside view into that methodology, and he had no complaints. In fact, he talked about the methodology favorably.

        After a statement with initial results we start hearing about a laundry list of complaints about the methodology, from Willis at Anthony’s blog, from Pielke at Anthony’s blog, and from Anthony himself (he offers complaints about more than just the timing of Muller’s announcement). Where were those complaints prior to the announcement of preliminary results?

        Has their methodology changed? Is it in some way different than how Anthony described it prior to preliminary results being announced?

        You say

        However it compares to the past outputs of homogenized and adjusted GISS data sets will be the point where my decision will be made as to the validity of the “standard data sets” post-adjustment…

        It is my impression that the methodology they are using was pretty well explained by Anthony (with additional comments by folks such as Mosher) already.

        Maybe you are waiting to decide about their execution of their methodology? Ok, I would certainly understand that, but above you say you will judge it based on “outputs.” Does that mean as opposed to judging their work based on the combination of their methodology and the execution of the methodology?

        It just seems to me that criticisms of the methodology would have a lot more credibility if they had been offered prior to the announcement of preliminary results. Yes, many of the criticisms I’m reading now reflect similar criticisms voiced against statistical methodology used by others looking at the same data – and for those who criticized the BEST project for its methodology prior to the announcement of preliminary results, I see no problem.

        It doesn’t seem to me that Anthony fits into that category.

      • From the BEST web site: “However, the preliminary analysis includes only a very small subset (2%) of randomly chosen data, and does not include any method for correcting for biases such as the urban heat island effect, the time of observation bias, etc. ”

        Muller is making statements of preliminary findings without having actually applied the methodology yet. How can he have a preliminary result before he’s actually done the work?

        If, as Watts’ surface station project suggests, 85% of surface stations are reporting poor results, most with errors of 2 deg C or more, then BEST’s work is far from complete. Way too soon to make such sweeping statements, especially to Congress.

        Far more responsible for him to tell Congress that it’s too soon to comment and by doing so would only raise questions about BEST’s objectivity… which it now has. Stupid move IMO.

      • Judith Curry

        I thought UHI was not to be part of the Berkeley study (at least not initially). Am I right?

        Max

  11. Steve McIntyre has just posted interesting comments and précis of testimony:

    http://climateaudit.org/2011/03/31/webcast-of-house-committee-hearings/

  12. If you want to see how abusive science “consensus” and government can work hand in hand, take a look at this:

    http://hotair.com/archives/2011/03/31/green-regulation-in-ca-academic-fraud-retaliation-and-science-denial/

    What kind of example does the IPCC set in this context???

  13. Since government money caused this climate science fiasco, I will repeat my question from the first posting:

    Is the Committee hearing testimony today the Committee that will eventually appropriate funds for Science?

      • No is correct, but the situation is not that simple. Every government program requires four committee approvals, two each in the House and Senate. One of the two approves the program design (authorization) and the other approves the funding (appropriation). This is the House authorization committee. But they are the ones who direct the work, subject to appropriation.

  14. I simply want a transparent compilation of the land temperature record that other researchers can analyze. Whether Muller is pro-AGW or not is neither here nor there. What I am looking for is high quality data where any adjustments are transparent.
    With respect to Muller’s statements on LHI effects, it is premature to say the least, given the coverage has primarily been of US sites. I guess we will have to wait for the papers.

    • There is no high quality data. That is the basic problem. What we need is an analysis of the uncertainties, not more data crunching.

      • You are right, I stand corrected – I should have simply said the most complete set of raw data. Going in I would agree that if the data are bad then there is little that can be done to clean them up. However, it might be possible to find a sub-sample of good sites that would allow for some assessment of what has happened in the last 50 to 100 years.

      • I think finding good temperature sites at this point is problematic unless pure raw data is used. As an idle bit of curiosity, I examined the kinds of manipulations that are done on raw data for the official temperature site near our small town before it is considered “accurate” and published. I did a quick casual writeup of what I found. http://climate.n0gw.net/GISS vs Raw.pdf

      • Hmmm. That link didn’t work.
        Let’s try: http://climate.n0gw.net/GISS%20vs%20Raw.pdf

      • “the various official adjustments to the temperature records are themselves much larger than any claimed temperature trend, and those adjustments appear to be based upon estimates, not measurements.”

  15. Gavin Schmidt: 
    All the dataset trends are available here: http://www.woodfortrees.org/plot/hadcrut3vgl/from:1998/to:2010.99/trend (that shows a positive trend from 1998 to the end of 2010). Short term trends are not very significant though, but there is no call for misrepresenting their sign.

    Gavin, the question is where is the 0.2 deg C per decade projected by the IPCC?

    For hadcrut, the warming rate since 1998 is 0.00 as shown in this chart: http://bit.ly/dSA3Ly
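
    For what it is worth, the bare calculation both sides are pointing at is an ordinary least-squares trend over a short window, and it is very sensitive to the choice of start year. The anomaly series below is synthetic (a small trend plus noise and a crude 1998-style spike), not the actual HadCRUT data, so the numbers only illustrate the start-point effect.

    ```python
    # Sketch: short-window least-squares trends and their sensitivity to the start
    # year. Synthetic monthly anomalies, not the real HadCRUT series.
    import numpy as np

    rng = np.random.default_rng(5)
    months = np.arange(1990.0, 2011.0, 1.0 / 12.0)
    anoms = 0.015 * (months - 1990) + rng.normal(0, 0.12, months.size)
    anoms[(months >= 1997.9) & (months < 1998.9)] += 0.35    # crude "1998 El Nino" spike

    def trend_per_decade(start):
        m = months >= start
        x = months[m] - months[m].mean()
        y = anoms[m]
        return 10 * np.sum(x * (y - y.mean())) / np.sum(x**2)

    for start in (1990, 1998, 1999):
        print(f"trend from {start}: {trend_per_decade(start):+.3f} deg C/decade")
    # Starting at the 1998 spike typically flattens or even reverses the fitted
    # short-window trend, which is why start-point choice dominates these arguments.
    ```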

  16. The monthly mean global temperature anomaly for February has been published.

    It is 0.27 deg C and is nearly half deg C less than the Feb-1998 maximum of 0.76 deg C.

    Half degree centigrade cooling!

    http://bit.ly/f2Ujfn

    • Girma

      “Half degree centigrade cooling!”

      In just one year!!!

      My God, that’s a decadal cooling rate of 5C per decade!

      Rev up your SUVs to save the planet from another Ice Age!

      Max

      • Take the time to read what you are commenting on: 2011 – 1998 = 13 years.
        0.5 / 13 = 0.038 deg C/year; ×10 = 0.38 deg C/decade cooling over the 13-year period. Still nothing to get your panties in a twist, whether positive or negative change!

        BUT it does show that the trend line may be starting a change toward cooler, if it holds much longer.

      • Rev up your SUVs to save the planet from another Ice Age!

        Hi Manaker – that has to be the daftest ‘sceptic’ comment on the net.

        I trust you are in jest?

      • Yeah, Sarah. Just like the 5C per decade cooling rate.

        Or IPCC’s 0.2C per decade warming rate that didn’t happen.

        Max

  17. so you guys are fighting about the temperature of the land….is the ocean irrelevant? plus all those bits of the landmass that are not properly measured – eg all the deserts?

    • Graeme – sounds like BEST is not fighting about anything.

      To Muller’s credit, there were a lot of negative things said about him over things he has said and done in the past, but sounds to me like he did okay by science today.

  18. JCH – agreed….but it looks like a lot of preconceptions took a battering – eg the UHI effect being significant when so much of the area of the world is not heavily urbanised, etc

    • The non-urbanized areas are not where the thermometers were over the last 100+ years. And there were no fixed thermometers on the oceans. We have basically no data for most of the world, for most of the time.

      • Oh come, now. We have all these proxies. All the trees from the Indian Ocean, and…

  19. I agree David….and? why make a fuss about UHI effects? Why not just say that this temperature proxy is not helpful? I just do not see why so much ire is expended by one side of the debate on something that is obviously a very poor proxy of an index of global temperature. Surely there are better ways of leading our lives than fussing about the kind of paint on a screen around a thermometer somewhere….when at least 2/3 of the planet is not being measured. Not least of which, we have no real deep understanding of natural variables nor of the circumstances that might stop them working. Sorry, if this sounds like low-level recycling of what the Chief Hydrologist says, but …so many people are arguing about angels on pinheads, questionable CO2 measurements, questionable temperature indices….and yet the scary witch is still there beyond our power to measure, because the scientists do not want to go there

  20. Dr Curry, is this the way you expected BEST to see the light of day? I think we were all looking forward to the promised full transparency, data, method etc.? Why are we all facing another declaration that cannot be falsified?

    I understood that it was “cast in stone” that BEST would not be declared, even to its contributors, until its full, final conclusion was ready, and at that point, and at that point only, all the necessary data, method and code would be released.

    How then have we arrived at this nonsensical position which appears to fly completely against the original declared principles? My only conclusion – politics now controls science.

    Why did Dr Muller not say “BEST is not complete, we have not yet met our declared principles”?

    Welcome to the brave new world!

    • steven mosher

      In the open transparent world we have this motto.

      Release early; release often.

      I have no issue whatsoever with the level of transparency shown to date.

      • Willis Eschenbach

        Since Muller has released absolutely nothing to date, no data, no code … what does your statement mean? Truly, I don’t understand your point here.

        w.

      • steven mosher

        read it again.

  21. Green Sand

    Were you expecting Moses to lead you to the promised land? I hoped the science would say…we are very uncertain about what is happening.

  22. Graeme,

    No, not expecting Moses, but I do expect people, especially those who purport to be scientists, to comply with their own declared principles. I take it from your comment that you do not?

  23. Much clearer analysis here:

    http://climateprogress.org/2011/03/31/scopes-climate-hearing-richard-muller-and-john-christy/

    Dana says it best in the comments:

    March 31, 2011 at 1:29 pm
    I caught about an hour of the hearing. It made my blood absolutely boil. Armstrong flat-out lied about climate economics the whole time (I’m going to have to do a Skeptical Science post refuting his testimony, since there were no real economists there). Christy was really nothing more than a Republican denialist tool. At least twice he had the opportunity to refute the ’70s ice age myth, and failed to do so both times. We’ll probably have to do another blog post specifically about his testimony too.
    Emanuel was terrific. Muller was sometimes okay, but when he talked about the hockey stick, totally wrong. The other 4 GOP witnesses were just rubbish. The whole thing was a travesty, except when Emanuel had the opportunity to speak.

    • Of course you loved Emanuel. His testimony had major errors in it.

      • The worst part about Emanuel’s written testimony is that he “cherry picked” the parts he wanted to emphasize and ignored the rest; in other words, he did not “tell the whole truth”.

        In a courtroom that can get you into trouble.

        Max

    • I thought Montgomery is an economist…and his comments seemed cogent to me

  24. Dr. Curry,
    By the way, in reviewing the punchlines of the witnesses, I found Dr. Emanuel’s testimony to be very deceptive and historically inaccurate.
    Science has often been alarmist and wrong. Climate scientists, including Emanuel himself, have largely been highly alarmist and inflammatory.
    To try and paint that picture as anything else is simply not an honest representation of what has been going on.

  25. How can he say urbanization would only affect absolute temperature?? As a city grows in size, the effect intensifies. That would have to mean a greater change in temperature, assuming a station was mostly rural, then became embedded in an increasingly urban environment. Initially perhaps, the station would measure a temperature similar to a near rural station. But then, as urbanization increased, the urban station temperature would increase at a faster pace. I don’t get it.

  26. So far the issue of openness and transparency of taxpayer funded scientific data appears not to have been raised. This appears to me to be a basic point where Congress can take a stand, i.e. if it is related to climate, where major policy actions are being considered and if it has been taxpayer funded, it must be totally open to FoI requests and subject to a completely independent audit. No more free passes for IPCC in this regard. Follow former President Reagan’s advice: trust, but verify.

    Let’s hope this issue gets discussed.

    Max

  27. In his written testimony, Dr. Kerry Emanuel writes:

    Over the past few decades, when solar output, as measured by satellites, has been decreasing slightly, there is little doubt that increasing global temperature is attributable to ever more rapidly increasing concentrations of greenhouse gases. We are undertaking an enormous experiment, and so far the response of the planet has been pretty much along the lines predicted more than a century ago.

    This statement is disingenuous.

    Solar activity in the 20th century reached a high level not seen in several thousand years, before decreasing again toward the end of the century. Emanuel fails to mention this.

    There were two statistically indistinguishable warming cycles during the 20th century, each lasting around 30 years: an early 20th century warming cycle (1910-1940) and a late 20th century warming cycle (1970-2000). In between there was a 30-year period of slight cooling. The only one of these three observed cycles that fits Emanuel’s description is the one from 1970 to 2000. The other two clearly do not. Again, Emanuel fails to mention this.

    The early 20th century warming was not caused by human GHGs (as there were hardly any at that time). Models cited by IPCC cannot explain this warming, yet several studies by solar scientists have attributed a major portion to the unusually high level of solar activity. This is also not mentioned by Emanuel.

    To leave all this information out is simply not telling “the whole truth”, but only that part which fits the desired message that “increasing global temperature is attributable to ever more rapidly increasing concentrations of greenhouse gases”.

    I hope someone on the committee understands enough about the whole issue to bore deeper into Emanuel's statement and dig out "the whole truth".

    Max

    PS A question: Is this testimony before this committee under oath, where the testifier swears to “tell the truth, the whole truth and nothing but the truth”?

    • andrew adams

      Max,

      I don't get your point here. Emanuel is clearly attributing the mid-1970s warming to CO2 – he doesn't say anything about early 20th century warming, so why should he?
      There is no contradiction between his claim and yours (which is in line with my understanding) that early 20th century warming is attributable to increased solar activity.

      • andrew adams

        Sorry, that should read "Emanuel is clearly attributing the warming since the mid-70s to CO2".

      • Attribution of (some of) the early 20th century warming to solar activity is strictly conjectural, based on a statistical correlation with sunspot activity. There is no known mechanism, which is why indirect solar forcing is a major research topic. Lacking a mechanism, we cannot say how much solar forcing contributes to the more recent warming (if it exists). Emanuel's dismissal is unscientific.

        The history is interesting. The IPCC SAR dismissed a solar contribution to the early period. The TAR allowed a partial contribution, because of the correlation pointed out by the Danes. The AR4 drops the subject and looks only at the later period. The shift to the “last 50 years” (of which only 20 show warming) was a major retreat. The SAR attributed all warming over the last 250 years to AGW.

      • David Wojik

        Your point is well taken.

        Attribution of indirect solar forcing is based on empirical observations but lacks the definition of a specific theoretical “mechanism” (shifts in direct solar irradiance are too small). IPCC concludes that all natural forcing factors are essentially insignificant, but concedes that its “level of scientific understanding” of solar forcing is “low”.

        There are several studies showing that the level of solar activity in the 20th century was unusually high compared with the past several thousand years. On average, these studies attribute around 0.35C of the observed 20th century warming to this high level of activity.

        Most of this took place in the early 20th century warming cycle, which the models cannot explain by AGW or any other "mechanism". Emanuel does not mention this period, even though it is statistically indistinguishable from the late 20th century warming cycle, which he has "cherry picked" out of the entire modern record as the basis for the AGW premise.

        Records of the time tell us that the Earth warmed significantly after the Maunder and Dalton minima, yet we lack a clear definition of the “mechanism” by which this warming occurred.

        Cosmic rays / clouds?

        Stott et al. studied the question, “Do Models Underestimate the Solar Contribution to Recent Climate Change?”, and concluded
        http://climate.envsci.rutgers.edu/pdf/StottEtAl.pdf

        The results presented here suggest that climate models underestimate the sensitivity of the climate system to changes in solar irradiance, but a conclusive demonstration of an enhanced role for solar forcing requires an understanding of the physical mechanisms underlying such an effect.

        Let’s hope that ongoing research work, including the CLOUD experiment at CERN, will shed more light on the subject.

        Max

      • Emanuel is “cherry picking” the parts of the temperature record that support his desired message and simply ignoring those parts that do not.

        This is (as I wrote) not “telling the whole truth”.

        Max

      • andrew adams

        You wrote:

        Emanuel is clearly attributing the mid-1970s warming to CO2 – he doesn't say anything about early 20th century warming, so why should he?

        If Emanuel is trying to sell the AGW message to the congressional committee, he obviously should not.

        If he is trying to give this committee an objective, scientific evaluation of the causes for warming of our planet, he should tell them “the whole truth” (and not just the “cherry-picked” part, which supports his “sales pitch”).

        It appears he chose to give a “sales pitch” rather than an objective, scientific evaluation.

        Pretty simple.

        Max

    • Craig Loehle

      Emanuel is assuming solar effects are instant. Why should they be? There could be a lag in the heating, just as there is when you turn on the burner under a pot of water. What if there is an 8-year lag?
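
      A toy illustration of the lag point (not a claim about the actual solar response): a simple one-box relaxation model with an assumed time constant shows the temperature response trailing a step change in forcing by years. The time constant, the step, and the units are all arbitrary assumptions.

      ```python
      import numpy as np

      tau = 8.0                                   # assumed lag time constant, years
      years = np.arange(0, 41)
      forcing = np.where(years >= 5, 1.0, 0.0)    # step increase in forcing at year 5 (arbitrary units)

      response = np.zeros(years.size)
      for i in range(1, years.size):
          # dT/dt = (F - T) / tau, integrated with a simple forward-Euler step of one year
          response[i] = response[i - 1] + (forcing[i - 1] - response[i - 1]) / tau

      for y in (5, 10, 20, 40):
          print(f"year {y:2d}: forcing={forcing[y]:.1f}, response={response[y]:.2f}")
      ```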

  28. The issue with UHI seems slightly confused here. My understanding was that the skeptical concern with UHI is that the effect is underestimated in the adjustment applied to the surface temperature record, thereby artificially inflating the record, since the UHI anomaly could be increasing due to urbanization. This is maybe what Anthony means when he says the work "has not been done yet". If you simply took raw data at random and did not adjust for urban effects, you would get an artificially high anomaly.

    The use of temperature anomalies is possibly the reason why poorly sited stations do not have as much effect on the data as might be supposed. The anomaly measures the difference from an average temperature, so provided a site is consistently bad, it may still produce a valid year-to-year trend (see the sketch after this comment). All the same, poorly sited stations don't leave one with a sense of confidence. Nor does the poor distribution of measurements around the world.

    Furthermore, most of the skeptical arguments I have read have been critical of the quality control and due diligence shown with respect to the data, rather than flatly disagreeing with it or dismissing it out of hand. A lot of arguments against cAGW point to the existing temperature record to make their case, as Girma recently did by showing there has been virtually no recent warming.

    The problem as I see it is this: if climate sensitivity to CO2 were high enough to drive the late 20th century warming, then with CO2 still increasing in the atmosphere, that warming should have continued and gotten stronger, not stopped virtually altogether. I can accept that man-made CO2 has contributed to the warming, but unless the warming continued to accelerate from the time it was supposed to be having an effect, I struggle to see how a high climate sensitivity can be justified.
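
    A small sketch of the anomaly point made above, with made-up numbers rather than real station data: a station with a constant siting bias (here assumed to read 2 C too warm) produces exactly the same anomaly series, and hence the same trend, as a well-sited station, because the constant bias cancels when the base-period mean is subtracted. It is a growing bias, not a constant one, that contaminates the trend.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1961, 2011)
    true_temp = 14.0 + 0.015 * (years - years[0]) + rng.normal(0, 0.15, years.size)

    good_site = true_temp
    bad_site = true_temp + 2.0        # assumed constant 2 C siting bias

    def anomaly(series, years, base=(1961, 1990)):
        """Anomaly relative to the mean over a base period."""
        mask = (years >= base[0]) & (years <= base[1])
        return series - series[mask].mean()

    # identical anomalies and identical fitted trends despite the bias
    print(np.allclose(anomaly(good_site, years), anomaly(bad_site, years)))
    print("trends (C/yr):",
          np.polyfit(years, good_site, 1)[0].round(4),
          np.polyfit(years, bad_site, 1)[0].round(4))
    ```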

    • You are correct that the biggest issue is the complete lack of quality control. The sites are terrible, the databases are a mess, the adjustment process is bizarre, peer review is a joke and the assessment process is corrupt to the core. No rational person would make decisions on the basis of such an incompetent process. And we are being asked to change the world.

      When the amateurs have to keep explaining to the professionals that temperature readings of 200 or 300 are indicative of a problem, who can have much confidence in the pros?

    • warming should have continued and gotten stronger, not stopped virtually altogether

      Indeed: a damning critique… unless of course you care to take into account the fact that ocean warming has continued to rise, and in fact has accelerated dramatically (http://scienceofdoom.files.wordpress.com/2010/03/ocean-heat-change-1955-2005-700m-domingues2008.png).

      Although who cares, because nobody lives in the ocean.

      • Those graphs are a little hard to read. When do they stop?

        I had a go at:
        http://www.woodfortrees.org/plot/hadsst2gl/from:1990/to:2010

        …and they don’t look terribly different from the global mean:
        http://www.woodfortrees.org/plot/hadsst2gl/from:1990/to:2010

        While I understand this is maybe not long enough to show a robust trend, nevertheless the point stands.

        Since CO2 is an atmospheric forcing – i.e. the CO2 resides in the atmosphere – surely we should see that heating the atmosphere and transferring heat to the ocean? If not, then by what mechanism can the ocean warm but not the air temperature? I would have thought it more likely to see great variation in the air temperature without necessarily affecting the ocean (because it has such high thermal inertia).

        I don’t think I see it anyway from the graphs I did (assuming they are accurate), but how could the air temp NOT go up if the SST’s were?

        PS – not saying it can’t – I’m not laying any traps. There may genuinely be a reason I don’t see.

      • I’ve been going on the assumption that you’re aware that the overwhelming majority of the heat stored by the climate system is in the ocean, so the fact that you are looking at land and sea surface temperatures is something of a non sequitur.

        Air temperatures are inherently noisy: 1998 was a strong El Niño, and there were La Niñas of varying degrees in 1999–2000, 2000–2001 and from mid 2007 until early 2009. The whole point is to look at global trends if one is trying to understand global warming.

        If someone shows you only a very small part of a picture and omits most of it, you’re well within your rights to wonder why.

      • The atmosphere does not warm the ocean.

        It is the sun that warms the ocean.

        Max

      • It’s the sun that warms everything, Max. Do you have a point to make, or did Captain Obvious go on holiday and ask you to step in?

      • PDA (a.k.a. “Captain Obvious”)

        My comment was in response to the question by agnostic:

        Since CO2 is an atmospheric forcing – i.e. the CO2 resides in the atmosphere – surely we should see that heating the atmosphere and transferring heat to the ocean? If not, then by what mechanism can the ocean warm but not the air temperature?

        Got it?

        Max

      • Are you saying the sun doesn’t warm the atmosphere? Are you saying the sun doesn’t warm an atmosphere made up of nitrogen + water vapor more than it would a pure nitrogen atmosphere?

        What are you saying?

        Maybe you should, you know, say it.

      • PDA: It seems to me that Agnostic asked you a legitimate question in a civil way and you are waving your hands.

        Of course the sun warms everything but given that CO2 traps heat in the atmosphere — not the ocean — it seems odd that the air temperatures would level off while the ocean would continue to warm “dramatically” as you said.

        Maybe the air temps are too noisy even within a decade. Maybe it’s the lag of the oceans absorbing heat from the nineties run-up. Maybe there has been significantly less cloud cover.

        Agnostic’s question is reasonable. I’m curious myself. If anyone has an answer I’d like to hear it.

      • Huxley, I’m answering the question as best I can. I’m sure others could do a better job.

        To me, it seems unremarkable that increased atmospheric temps lead to increased ocean temps. The top 1mm of the sea surface – what’s called the “skin layer” – controls the ocean-atmosphere heat flux. Less temperature difference means less heat escapes the ocean.

        I’m not “waving my hands.” This is my understanding of the oceanic thermal skin effect. I’d be happy to be corrected if I am in error.

      • PDA: Thanks.

        So the idea is that air temperatures ran up during the nineties, then roughly plateaued, and since then the ocean is responding to the plateau by warming because less heat is escaping?

        Going a step ahead, if the air temps stay steady, then ocean temps will eventually reach a new equilibrium temperature proportional to the air temp increase?

      • PDA, it would help if you could clarify. Your understanding is probably correct, but I don’t quite understand your understanding if you understand what I mean…

        “To me, it seems unremarkable that increased atmospheric temps lead to increased ocean temps.”

        Nor me. I am sure that air temperatures have a small effect on the upper layer of the ocean, just as the upper layers of the ocean have an effect on air temperatures. I would expect air warmed by the CO2 greenhouse effect to transfer heat to the ocean, mitigating the increase in air temperature but increasing the overall SST. I would expect them to stay fairly closely linked.

        My question is simply: by what mechanism would the air temperature not rise if the sea temperatures were continuing to rise?

        The ocean does of course receive most of its energy from the sun, but the bit we are most concerned with is the heat it receives via the atmosphere from the extra greenhouse effect of anthropogenic CO2. That heat exchange – the search for equilibrium – should couple the SST and LST closely on a global scale, I would have thought.

        So why would the sea get hotter, but not transfer some of that heat to the atmosphere?

      • ahhh. huxley seems to have answered for you. That makes sense – it’s the ‘pot of water on the stove gets hotter after the flame has been turned off’ theory.

        So it is a lag in the OHC due to thermal inertia. Got it.

        It doesn’t support or disprove cAGW, but it’s an interesting point.

      • So the idea is that air temperatures ran up during the nineties, then roughly plateaued

        No. The idea of a "plateau" is based on nothing but cherry-picked data using the 1998 El Niño as a starting point, quoting Phil Jones out of context, and blog bafflegab. If you want to know what is actually going on, you have to look at the trend: a running average, or a best-fit line (http://www.skepticalscience.com/images/temp_1998_trend.gif)… anything other than taking a hot year like 1998 (El Niño) or a cold one like 1991 (Pinatubo) as your starting point. (The sketch below illustrates how sensitive a short trend is to the choice of starting year.)

        since then the ocean is responding to the plateau by warming because less heat is escaping?

        No.
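
        A hedged illustration of the point about starting years, using synthetic data rather than the actual HadCRUT or GISS records: on a noisy series with a steady underlying trend, the fitted slope over a short window depends strongly on whether you start at an artificial spike (a "1998") or somewhere else, while the long-window best-fit line does not. The trend, the noise level, and the spike are all assumed numbers.

        ```python
        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1979, 2011)
        series = 0.016 * (years - years[0]) + rng.normal(0, 0.12, years.size)
        series[years == 1998] += 0.25        # add an artificial "El Nino" spike in 1998

        def slope_per_decade(start):
            """OLS trend from a given start year through 2010, in degrees per decade."""
            m = years >= start
            return np.polyfit(years[m], series[m], 1)[0] * 10

        for start in (1979, 1995, 1998, 1999):
            print(f"trend {start}-2010: {slope_per_decade(start):+.2f} C/decade")
        ```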

      • So it is a lag in the OHC due to thermal inertia.

        No. Not even close. I’m almost certainly doing a terrible job of explaining this, but you’re not getting it.

        The heating of the ocean occurs as a result of solar irradiance penetrating the surface. Heat is generally reradiated to the atmosphere across the skin layer. However, increased infrared absorption of the air reduces the temperature gradient at the skin layer, which slows the release of ocean heat back to the atmosphere.

        Now, if you want to get into the whole “global warming stopped in 1998” canard, that’s another matter entirely.
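
        To make the gradient argument above concrete, here is a back-of-the-envelope sketch of the arithmetic only: the heat conducted from the bulk ocean up to the cool skin is roughly proportional to the temperature difference across that thin layer, so anything that warms the skin reduces the flux. The layer thickness, the temperatures, and the use of pure conduction are assumed, order-of-magnitude values for illustration, not a statement of the full air-sea flux physics being debated in this thread.

        ```python
        K_WATER = 0.6          # thermal conductivity of water, W/(m*K), approximate
        SKIN_THICKNESS = 1e-3  # assumed skin-layer thickness, about 1 mm

        def conductive_flux(t_bulk, t_skin):
            """Heat flux (W/m^2) conducted across the skin layer, positive upward."""
            return K_WATER * (t_bulk - t_skin) / SKIN_THICKNESS

        # Bulk water at 20.0 C; compare a cooler skin with a slightly warmed one.
        for t_skin in (19.7, 19.8):
            print(f"skin at {t_skin} C -> flux = {conductive_flux(20.0, t_skin):.0f} W/m^2")
        ```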

      • However, increased infrared absorption of the air reduces the temperature gradient at the skin layer, which slows the release of ocean heat back to the atmosphere.

        er, except that the gradient is usually in the wrong direction – the atmosphere being generally warmer than the ocean, so no net heat transfer.
        Heat loss from the ocean surface is almost wholly from radiation and evaporation. Heat loss from the ocean also isn’t, as you suggest, slowed down by re-radiation from the atmosphere, as LW radiation doesn’t penetrate the surface skin, and so will arguably increase heat loss by facilitating evaporation.

      • the atmosphere being generally warmer than the ocean, so no net heat transfer

        Nope. On average the ocean is about 1 or 2 degrees warmer than the atmosphere.

      • PDA – I believe the gradient is across the skin layer.

      • “Across” the skin layer, yes.

      • On average the ocean is about 1 or 2 degrees warmer than the atmosphere.

        …except where it isn’t

      • Thus the use of the phrase “on average.”

      • The only place where I think PDA has made a mistake is on the remarkableness of the ocean skin layer. To me, as a layperson, I think it is amazing that something so thin and delicate could regulate the rate at which the mighty ocean releases its heat. The skin layer is a very cool cat.

      • Thus the use of the phrase “on average.”

        I phrased that so as to illustrate that net heat transfer can only take place from the ocean to atmosphere at times and places when the ocean is warmer than the atmosphere – so “on average” is perhaps not all it seems.

      • PDA

        You have it all wrong.

        The HadCRUT surface temperature record (cited by IPCC) clearly shows that since the beginning of 2001 there has been no global warming of our atmosphere at the surface (nothing to do with the 1998 El Niño year, as you put it).

        The ARGO measurements of ocean temperature replaced the old, unreliable XBT measurements in 2003; these show that the ocean has been cooling since 2003.

        IOW, our planet has been losing energy over the past several years, even though CO2 levels have increased to record heights (and IPCC had projected warming of 0.2C per decade).

        The warming did not happen as expected. This “unexplained” “lack of warming” was referred to by Kevin Trenberth as “a travesty”.

        UK Met Office attributed it to “natural variability” (a.k.a. natural forcing).

        Those are the observed facts, PDA.

        All the arm waving and surly talk are not going to change that.

        Your best bet is to say: "one decade means nothing, since that's only 'weather'; it takes exactly three decades (like we had from 1970 to 2000) to be 'climate'".

        Or maybe, even better, to simply say nothing and accept the facts.

        Max

      • Breathtaking, Max. Duane Gish would be proud.

      • The skin layer is a very cool cat.

        Yes, indeed.
        As water is pretty much opaque to LW radiation, such radiation is only emitted and absorbed by the surface layer.
        Air is actually a very good insulator, so it’s only really by evaporation from the skin layer that the ocean loses heat. It’s only really when a strong wind is blowing across the surface that an appreciable amount of heat is lost by conduction – which is incidentally also the reason that cars need radiator fans.

      • Before anyone jumps on me, I should add that the ocean surface also loses heat by LW radiation.

      • PDA or Peter317 or anybody else, if you look at the daylight diagram of the ocean surface, between which colored marker(s) is the gradient that controls the rate at which heat caused by SW radiation leaves the ocean?

        I don’t understand it, but my hunch is it’s the gradient between the red and gold markers. I really have no idea whether or not that hunch is right. I doubt among readers that I am alone. Even if I am, I still want to know.

      • PDA

        You wrote

        ocean warming has continued to rise

        Well, actually, the upper ocean has been cooling since 2003, when the more reliable ARGO measurements replaced the old, unreliable, expendable XBT devices (which had introduced a warming bias, according to NASA team leader Josh Willis).
        http://www.ncasi.org/publications/Detail.aspx?id=3152

        Willis calls it a “speed bump”.

        But since earlier records are based on spotty and inaccurate measurements, I would take any pre-ARGO records “with a grain of (sea) salt”.

        Max

      • Max,

        There was, in point of fact, a “warming bias” introduced by the XBTs and a subset of Argo floats. As Willis found back in 2006, this led – unsurprisingly – to spurious indications of cooling in OHC. You’d probably have known that if you’d read Willis and Lyman themselves rather than just E&E.

        Subsequent studies have since confirmed the warming trend. But, since they don’t comport with your preconceptions, they can probably safely be ignored.

      • Roger Pielke Sr has an unpublished graph on his website from Josh Willis that appears to indicate slight warming is occurring in the depths reached by ARGO. Bob Tisdale posted the same thing here a month or two ago. And there have been studies published that are finding warming in the deep ocean.

      • If we’re talking about the same thing, that graph is here. The colloquy between Pielke Snr and Willis seems to have grown out of an earlier, quite informative, email exchange, reproduced here.

        I tend to agree with Dr. Pielke on the usefulness of OHC as a global warming metric.

      • Yes. The XBTs introduced a "warming bias", as Josh Willis conceded.

        The much more expensive and reliable ARGO measurements, which showed cooling since 2003, do not.

        To rationalize that the lousy past measurements, with their spurious warming trend, made the later, better measurements look like a cooling trend is silly double-talk, and you know it. ARGO showed cooling from 2003 on, and that's it.

        This just tells me that the record before 2003 is questionable and after 2003 is probably a bit better.

        Max

      • PDA

        Further to the cited study by Craig Loehle, we have the recent study by Knox and Douglass entitled “Recent energy balance of Earth”, which confirms the recent cooling.
        http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf

        A recently published estimate of Earth’s global warming trend is 0.63 ± 0.28 W/m2, as calculated from ocean heat content anomaly data spanning 1993–2008. This value is not representative of the recent (2003–2008) warming/cooling rate because of a “flattening” that occurred around 2001–2002. Using only 2003–2008 data from Argo floats, we find by four different algorithms that the recent trend ranges from –0.010 to –0.160 W/m2 with a typical error bar of ±0.2 W/m2. These results fail to support the existence of a frequently-cited large positive computed radiative imbalance.

        All seems quite clear to me, PDA.

        Max
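
        For readers wondering how an ocean-heat-content trend becomes a figure in W/m^2 like those quoted above, a rough sketch of the arithmetic with assumed numbers (not the Knox and Douglass data): fit a linear trend to the heat-content series, then divide by seconds per year and a surface area. Whether one divides by the whole Earth's area or only the ocean's is a convention that differs between papers, so treat the result as illustrative only.

        ```python
        import numpy as np

        years = np.arange(2003, 2009)
        # assumed 0-700 m ocean heat content anomalies, in units of 10^22 J
        ohc_1e22_joules = np.array([8.1, 8.0, 8.2, 8.1, 7.9, 8.0])

        slope_joules_per_year = np.polyfit(years, ohc_1e22_joules, 1)[0] * 1e22
        seconds_per_year = 365.25 * 24 * 3600
        earth_area_m2 = 5.1e14

        imbalance = slope_joules_per_year / (seconds_per_year * earth_area_m2)
        print(f"implied imbalance: {imbalance:+.3f} W/m^2")
        ```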

      • It is strange that Knox and Douglass use four different algorithms when one of them is simply wrong (the first one, which is strongly influenced by the regular annual cycle: the first half of the year is warmer than the second half), two are statistically hopelessly weak (those based on single months), and only one makes any sense (the fourth one). It is amazing that an explicitly wrong calculation of the trend made it through to publication, but I guess the International Journal of Geosciences has no effective peer review of the articles it publishes. Evidently the authors are not experts in statistical analysis either.

        The only reasonable method gives a trend of -0.040 W/m^2 with a standard error of 0.15. Thus it shows essentially zero trend with a rather large uncertainty. The two-standard-deviation upper limit is thus 0.26 W/m^2, which is lower than the lower limit of the Lyman et al. 90% confidence interval of 0.53-0.75 W/m^2. Thus there is indeed some contradiction between a constant trend based on both XBT and ARGO data and the ARGO data alone, but a few more years of ARGO measurements are needed to draw clear conclusions.
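
        The usual way to avoid the annual-cycle problem Pekka describes is to remove the monthly climatology before fitting the trend. A synthetic illustration (the trend, seasonal amplitude, and noise are all made-up numbers): leaving a strong seasonal cycle in a short monthly record can noticeably bias a straight-line fit, while fitting the deseasonalized anomalies recovers the underlying trend.

        ```python
        import numpy as np

        rng = np.random.default_rng(3)
        n_years = 6
        months = np.arange(n_years * 12)
        seasonal = 1.5 * np.sin(2 * np.pi * months / 12)     # assumed annual cycle
        series = 0.002 * months + seasonal + rng.normal(0, 0.1, months.size)

        raw_slope = np.polyfit(months, series, 1)[0]

        # Remove the mean annual cycle (monthly climatology), then refit.
        climatology = np.array([series[months % 12 == m].mean() for m in range(12)])
        anomalies = series - climatology[months % 12]
        deseason_slope = np.polyfit(months, anomalies, 1)[0]

        print(f"slope with seasonal cycle left in : {raw_slope * 120:+.3f} per decade")
        print(f"slope after removing climatology  : {deseason_slope * 120:+.3f} per decade")
        ```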

      • Pekka, you might find Josh Willis’ unpublished graph of ocean heat content interesting.

      • JCH,
        One should not draw conclusions from unpublished data, but that graph seems to show a weak positive trend (about 0.18 W/m^2, with an uncertainty of roughly the same size), calculated from the first 24 months and the last 24 months of the data; the middle 15 months can be used only for the error estimate.

        The Lyman et al trend is clearly not consistent with this data either, although the disagreement is weaker after the warmest 12 month period at the end.

      • Pekka Pirilä

        “One should not draw conclusions from unpublished data”.

        Yeah. That’s why they’re unpublished.

        Stick with the published stuff (like the Loehle study that was cited). This tells us that the ARGO data show upper ocean cooling since 2003.

        Max

      • Max,

        You keep using this term “warming bias.” I do not think it means what you think it means.

        An instrument with a warming bias will register temperatures as cooler than they are in actuality. The XBTs as well as some Argo floats had a warming bias, so the cooling that was shown in Lyman et al. 2006 was spurious.

        I know you won’t let something as trivial as facts interrupt a good Gish Gallop, but I thought it might be amusing for some to see how doggedly you’re insisting on clowning yourself.

        Have a fine weekend.

      • PDA

        Duh! I got it now!

        It was April 1, and your insistence that the ocean has warmed since 2003, when the ARGO data show it cooled, was simply an April Fool's canard.

        You almost got me there, PDA, until I figured it out.

        Max

      • “Most of the rapid decrease in globally integrated upper (0–750 m) ocean heat content anomalies (OHCA) between 2003 and 2005 reported by Lyman et al. [2006] appears to be an artifact resulting from the combination of two different instrument biases recently discovered in the in situ profile data.” —Correction to “Recent Cooling of the Upper Ocean,” Josh K. Willis, John M. Lyman, Gregory C. Johnson and John Gilson

        Duh, indeed.

      • PDA

        Thanks for the link to the 2007 report, which corrected the 2009 report I cited.

        “Time Machine” at work?

        Max

      • PDA

        Indeed: a damning critique… unless of course you care to take into account the fact that ocean warming has continued to rise, and in fact has accelerated dramatically (http://scienceofdoom.files.wordpress.com/2010/03/ocean-heat-change-1955-2005-700m-domingues2008.png).

        ocean warming has continued to rise

        What?

        Here is what the data says about recent ocean temperature trend since 2002.

        http://bit.ly/dGaQ7P

        Global sea cooling rate of –0.07 deg C per decade!

      • Not so for SSTs.

        See: http://www.drroyspencer.com

      • Spencer's AMSR-E SST data, cited by ianl8888, provide a pretty good check on the HadSST2 data cited by Girma.

        Spencer’s data show an almost flat trend (almost no cooling) since 2002, while Hadley shows slight cooling over the same period.

        ARGO, which measures upper ocean temperature down to several hundred meters, also shows slight cooling since 2003 (despite PDA’s babbling), so there is fairly good agreement.

        I think we can lay this one to rest with the conclusion: there was no warming of the sea surface or upper ocean since 2003.

        Together with the “unexplained lack of warming” of the atmosphere at the surface (HadCRUT) since 2001, we have Trenberth’s “travesty”, and a serious challenge for the dAGW hypothesis.

        Thanks for data links.

        Max

  29. The real test of BEST will be whether Muller's global temperature reconstruction shows a rate of warming similar to that measured by the satellites. UAH global warming is very similar to SST warming, but much less than the warming over land as measured by GISS and CRU. If Muller's result is very similar to GISS or CRU over land, then he has not adjusted out the non-climatic warming.

  30. Kerry Emanuel’s testimony was full of disinformation. I completely agree with Steve McIntyre. http://climateaudit.org/2011/03/31/disinformation-from-kerry-emanuel/

    • Pooh, Dixie

      Congrats! Beat me to it. Full cite:
      McIntyre, Steve. 2011. "Disinformation from Kerry Emanuel." Climate Audit, March 31. http://climateaudit.org/2011/03/31/disinformation-from-kerry-emanuel/

      In his written and oral evidence at today’s hearing before the House Science Committee, Kerry Emanuel made untrue statements about deletion of data to hide the decline.

      Specific analysis from Emanuel’s written evidence (oral was similar).

    • I don’t know if Emanuel is a liar or simply a fool. But there is no excuse for this. When are the honest scientists going to stand up and scream “ENOUGH!”? We have example after example after example of dishonesty, gross incompetence, and corruption, yet no one wants to acknowledge that there is a clearly discernable pattern.

      If the honest scientists can’t connect the dots, one has to wonder if they are bright enough to do science.

      • “I don’t know if Emanuel is a liar or simply a fool”

        Does there have to be a choice?

        (Or could he be both, like the hockey stick geniuses?)

  31. I'm not sure why people are getting so worked up about Muller's testimony. I thought the main point was that the data were being made freely available and therefore open to scrutiny by others. Who ever said that the BEST group's findings were going to be the final word on all the issues associated with the data set?

  32. Methinks BEST might change its acronym to BSET,
    as in beset… by difficulties.

  33. Let’s look at the final data and report before jumping to any conclusions.

    • Or testifying to Congress.

      Oops.

      The proper response to Congress would have been — “we aren’t finished.” I guess the spotlight was too alluring.

    • It does not look like there will be a final report. They are publishing a series of journal articles so it will come trickling out, in bits and pieces, probably as long as the money lasts. My guess is that this is a program, not a project. But then maybe they will finally get around to the real issues. They seem to have very little understanding of the skeptical concerns, beyond those of Watts & Co, and maybe not even those. This makes “preliminary” statements to the effect that the skeptics are wrong particularly egregious.

  34. I have had a rather frightening thought. Maybe Dr. Emanuel talked over with his colleagues, in detail, what he was going to say to Congress. And no one pointed out that it probably was not a good idea. That is, everyone agreed with him.

    • Jim

      That’s what is known as “consensus”.

      Maybe he talked it over with 2,500 of his colleagues.

      That would be known as “overwhelming consensus” .

      This can be extrapolated to “the science is settled”.

      Max

  35. Was Gavin on the government clock while he was commenting? Seems a dubious use of taxpayer dollars to me.

    • Unfortunately this would be considered a legitimate use of his time.

    • OTOH, Gavin’s work on the RealClimate site should not be on the taxpayer’s dime. A few years ago the RealClimate webmaster deleted all the timestamps on RC, presumably so this could not be used against Gavin et al.

      I think it’s high time these people lost their federal funding and cushy positions.

    • haha but the hearing isn’t a waste of taxpayers money?

  36. I have a lot of trouble with Muller predicting results when so little of the data is in. Emanuel gave disinformation on the hockey stick graph and he knows better than to do that.

    • steven mosher

      Muller is reporting his preliminary results with the appropriate caveats. You might question the wisdom of doing that, but he certainly has the right to say, "We did some preliminary work and here is what we found; check back when we know the full story." That's a truthful statement. It's not scientifically relevant.

      Anybody who cares about science will take this preliminary statement for what it is worth: not much, scientifically. Anybody who expected Muller to deliver "the goods" for the Republicans will suddenly question the man, his project, blah blah blah.

      • Willis has a few things to say about that attitude over on Watts. It’s somewhat convincing.

      • I expected Muller to say “We took on this project because CRU and GISS have had an unscientific culture of secrecy for decades. The Climategate emails have shown real problems with their processes and the need for a trustworthy temp record is strongly felt by many people. At this point, we do not have any results which are reportable. Check back with me later.”

        That would have been the scientific response.

      • Rattus Norvegicus

        Oh, BS Cram.

        GISS algorithms have been available in the literature for years. When GISS finally released their code, McIntyre et al. were not able to get it running because it was finely tuned to the environment that the code had been written for (which was exactly why GISS claimed they didn't want to release it).

        Luckily someone went and actually read the code and rewrote it in Python (the good folks at Clear Climate Code), and they were able to exactly reproduce the record which GISS showed. The data were always available at GHCN, so they weren't covering anything up there. No real "culture of secrecy", and their results were fully reproducible. Get with it and stop spouting lies from other people.

      • Brandon Shollenberger

        Saying someone is repeating lies is a serious accusation. If you’re going to do things like this, you should be specific and back up your accusations.

        Failing to do so makes it seem like you are just trying to smear people you dislike.

      • Rattus Norvegicus

        Crap Brandon. Pure crap.

        Check out http://clearclimatecode.org and look at the little graph in the upper left-hand corner of the site. No difference. I've run the code myself on my own computer. Same thing. Any idiot, but apparently not you, could Google the keywords and find that what I say is true. At this point there are lots of blog-enabled emulations of GISS (Zeke, Lucia, Tamino and many others), using both adjusted and unadjusted data, which show the same thing. I think that JeffId even showed the same thing.

        But then you are too lazy to do a simple Google search. BEST, with a minimal look at the data, shows the same thing. (Willis was wrong in his post: the latest GISS land-only graph shows about 0.7C of warming since the mid-1950s, and BEST is only looking at land data.) You can find it at the GISS web site, but I assume that you are incapable of finding that (even though it is the first hit if you search for GISS temperature).

        Have fun, but both you and Cram are spreading lies.

      • Brandon Shollenberger

        All my post said was you should substantiate your accusations. Instead of even specifying what lies are being repeated, you just attacked my character. This sort of behavior is typically viewed as nothing more than inane raving used to replace rational discourse.

        I don’t think a single word in your comment actually addresses anything I said.

      • Rattus Norvegicus

        Brandon: did you do the suggested search or look at my link?

        Did you even bother to look at the link? Obviously no. I did substantiate my claims. You just didn’t bother to check.

        0.7C for land-only values? Check; you can look it up for yourself. GISS reproducible? Check; you just didn't click on the link. You are only showing how lazy you are. Here is the link to the GISS data: http://data.giss.nasa.gov/gistemp/graphs/Fig.A.gif – please read it. 0.7C since the mid-50s is in line with this. But a simple Google search and then a click on the first link is beyond your ken. (Hint: search for "GISS temperature", then click on "met stations only", which is the thing we are looking at.)

        Do the easy thing, and then show me that I am wrong.

      • Brandon Shollenberger

        Rattus Norvergicus, you haven’t even said just what is supposedly a lie. You can’t substantiate a claim if you don’t even bother to say what the claim is. Here’s what you need to do. Point out what lies Ron Cram has repeated, and show they are lies. You’ve done neither.

        You can insult me all you want, but you aren’t going to convince anyone. Indeed, you aren’t going to accomplish anything of value.

      • Rattus,
        GISS has been more forthcoming than CRU, but they have also attempted to keep their secrets. The code you point to has not always been readily available. Pretending it has is deceptive. And GISS changes their code regularly. As far as I know, we still do not know what adjustments were done after the Y2K problem McIntyre found. If you followed the storyline, after GISS corrected the problem, according to GISS 1934 was the warmest year on record in the US. Within a few days, 1998 was back on top. Now how did that happen? Did GISS archive the changes to the code? No. They did not. And your claims otherwise only make you look foolish.

      • Rattus Norvegicus

        Brandon, this started out with me calling BS on Cram, not you. In my reply to his “culture of secrecy” comment I pointed out that GISS algorithms and data had always been available and that when GISS released their code (which they stated they didn’t want to release because it was closely tailored to the environment it was developed in) sure enough, people were not able to get that code to run.

        I then pointed out that the good folks at CCC read the code and rewrote it in Python and reproduced the GISS results exactly.

        That is the answer to the substance of the lie which Cram was spreading, which I posted in my reply to Cram’s original comment.

      • Brandon Shollenberger

        Rattus Norvergicus, even if everything you said about GISS was true (and it wasn’t), it wouldn’t justify the insults you directed at me. Indeed, it wouldn’t justify, or even explain, anything in any of your responses to me. Your responses to me have been ridiculous regardless of anything to do with GISS or CCC.

        But since you keep going on about it, I feel obliged to correct you about CCC. You say:

        I then pointed out that the good folks at CCC read the code and rewrote it in Python and reproduced the GISS results exactly.

        CCC has not “reproduced the GISS results exactly.” Their version is extremely close, and the differences are quite negligible, but they do not match exactly.

        It is humorous you repeatedly said I am too lazy to look at CCC’s results, yet I apparently know them better than you do.

      • Rattus Norvegicus

        From CCC:

        In fact, the annual global, northern hemisphere, and southern hemisphere anomaly results are identical, as are the southern hemisphere monthly anomalies. The global monthly anomalies differ 7 times, out of more than 1000, each time by one digit in the least-significant place.

        If you look at the graph here the match is pretty exact. In other words, GISS does exactly what it claims to do. The algorithms were accurately described in the papers. They use the data they claim to use. The code implements the algorithms described in the papers.

        One can legitimately ask questions about the algorithms themselves. For example, the smoothing of anomalies over a fixed 1200 km radius is open to some question.
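
        For readers unfamiliar with the 1200 km smoothing being questioned here, a minimal sketch of the kind of distance weighting involved, as I understand the Hansen and Lebedeff (1987) scheme used by GISTEMP: station anomalies within 1200 km of a grid point are averaged with weights that fall linearly from 1 at the point itself to 0 at 1200 km. The station positions and anomaly values below are entirely made up.

        ```python
        from math import radians, sin, cos, asin, sqrt

        RADIUS_KM = 1200.0
        EARTH_RADIUS_KM = 6371.0

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points, in kilometres."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

        def gridpoint_anomaly(grid_lat, grid_lon, stations):
            """Distance-weighted mean of station anomalies within RADIUS_KM of a grid point."""
            num = den = 0.0
            for lat, lon, anom in stations:
                d = haversine_km(grid_lat, grid_lon, lat, lon)
                if d < RADIUS_KM:
                    weight = 1.0 - d / RADIUS_KM   # linear taper: 1 at the point, 0 at 1200 km
                    num += weight * anom
                    den += weight
            return num / den if den else float("nan")

        # hypothetical (lat, lon, anomaly in C) stations
        stations = [(48.0, 2.0, 0.6), (52.0, 5.0, 0.4), (55.0, 12.0, 0.9), (40.0, -3.0, 0.2)]
        print(f"grid point (50N, 5E): {gridpoint_anomaly(50.0, 5.0, stations):.2f} C")
        ```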

      • Brandon

        After watching you in action here, it looks to me like you have a thin skin.

        Some (unsolicited) advice.

        Stick with the subject matter, as factually and unemotionally as possible.

        Don’t engage in debate by attacking opponents in feigned outrage for allegedly smearing you.

        Even if they DID smear you, let it roll off your back and beat them with facts.

        You’ll be much more effective that way.

        Max

      • Brandon Shollenberger

        Rattus Norvergicus, your latest response is as nonsensical as ever. You choose not to address the fact you exaggerated the very thing you accused me of being too lazy to research. That’s hardly surprising. However, this did genuinely surprise me:

        If you look at the graph here the match is pretty exact. In other words, GISS does exactly what it claims to do. The algorithms were accurately described in the papers. They use the data they claim to use. The code implements the algorithms described in the papers.

        This is about as ridiculous a claim as is imaginable. The fact CCC can replicate GISS’s results by examining the code GISS used in no way implies anything you claim it implies. Everything you say it implies may be true, but it is not indicated simply by the fact one can replicate results by examining GISS’s code. I could lie six ways from Sunday about how I generated a figure, but if you examined the code I used to make it, you could probably replicate it.

        As for your advice manacker, I think you’ve misunderstood the situation. I don’t have thin skin. Rattus Norvergicus hasn’t upset me. He’s just been a jerk making nonsensical responses, and I’ve been pointing it out.

        As for sticking to the subject matter, the only subject matter is I said he should substantiate his accusations. He ignored this topic for a while, and I consistently pointed it out. Now then, I highlighted his ridiculous behavior in addition to pointing out his failure to address the issue at hand, but that is perfectly appropriate.

      • Rattus,
        The unscientific secrecy of GISS is not a secret. I could provide you with a large number of posts like this one.

        http://wattsupwiththat.com/2010/02/17/after-two-years-of-stonewalling-nasa-giss-foia-files-are-now-online/

        GISS makes changes to their algorithms without documenting the changes. Anyone who has followed the story on ClimateAudit knows how the Leaderboard of warmest years in the US changes without any explanation either in the media or by scientific paper. The changes are unwarranted and clearly have a desired end in view. Go to ClimateAudit and use the search function. Search for Leaderboard or GISS or warmest years in US.

        Your personal attacks against me do not fool anyone who is willing to do a little reading.

      • Brandon Shollenberger

        Ron Cram, as far as I have seen, GISS no longer has these issues. It certainly has been secretive in the past, but I haven’t seen any indication it still is. After the Y2k fiasco and Climategate, it seemed GISS cleaned itself up.

        If I’ve missed anything, would you mind pointed me to information about how it still is so secretive?

      • Rattus Norvegicus

        Thanks Brandon. I now remove you from the spreading lies list.

        I would not characterize the problem discovered in August 2007 as a fiasco. It was a change in the input data supplied by a 3rd party which was not accounted for. Once discovered it was quickly fixed. The Horner FOIA emails suggest that this happened in a matter of a couple of days. It was a problem, but had close to no impact on the overall results.

      • Brandon,
        The point I was making is that GISS has a history of being unscientific and secretive. The fact GISS has been forced to respond to FOIA through the courts does not mean they have had a change of heart and are now being open and honest. There has been no change in leadership at GISS, no public apology about the secrets of the past, no promise to be scientific in the future. Because of this, there is no reason to believe they are being open now. It is possible they are just more careful about what they are hiding. As long as something is hidden from you, you do not know it is hidden. Do you see?

        We still do not understand how the adjustments for UHI can be of the wrong sign – why GISS cools the past and warms the present. We don’t understand why the warming over land is so much greater than the warming of SSTs. Perhaps the increased warming is not an artifact of UHI and poorly sited stations, but I have not seen any explanation from a physical basis which would explain more warming over land than sea.

        My criticism of GISS was accurate as stated. The secretive actions of GISS are probably part of the motivation behind BEST. Muller should have limited his testimony to those items which prompted BEST and not tried to predict future findings.

      • Brandon Shollenberger

        Rattus Norvegicus, I honestly can’t say I care. The entire basis for you putting me on that list was your failure to read or some delusion you held. That you’ve now changed your mind without offering any sort of apology means nothing to me. Should I really be glad you decided to stop smearing me without any basis?

        All things considered, it should come as no surprise I find your characterization of the problem to be misleading.

        And Ron Cram, it would seem you can't point to any actual secrecy on GISS's part at present. The fact that GISS has been secretive in the past, and the fact that GISS's work has seemingly unjustifiable elements, does not mean it is secretive now. Perhaps it would like to be, and perhaps it even is, but unless you can demonstrate such, you shouldn't claim it is.

      • Brandon,
        The current secrecy of GISS has to do with the unanswered questions regarding the adjustments going back to the Y2K problem. These adjustments were never explained in any paper. While the effect of the adjustments may have been small, they had a very important result in that they changed the Leaderboard of the warmest years. This has all the look of a results-oriented adjustment having nothing to do with any instrument bias or other legitimate reason for adjusting data.

        I’m sorry if my earlier explanation was not clear. I THINK GISS may be hiding other things because of past actions but AFAIK GISS has never explained these results-oriented adjustments.

      • Brandon Shollenberger

        Can you be more specific? I don’t expect all adjustments to be described in the published literature. If the code is available and readable (CCC has demonstrated it is), we can know what was done, so exactly what are they being secretive about? Do they really need to explain every adjustment? I don’t see the need for GISS to say, “There is nothing available to justify this adjustment, so it is impossible to demonstrate why it was done. However, the impact of it is extremely small, and it doesn’t matter for our results. It would be nice if we could examine this issue in more detail, but we lack the resources to do so at this moment.”

        I mean, sure it would be nice, but is it really secretive for them not to say it?

      • Brandon,
        Yes, it is secretive for them not to explain any adjustments which affect the Leaderboard. The explanation should include a description of the error that was found which required the adjustment, a description of the adjustment itself, and an analysis of the effects of the adjustment. None of this was done in a manner consistent with the standards of science.

      • Brandon Shollenberger

        Er, pointing*

      • Rattus Norvegicus

        An earlier reply was posted but got swallowed by the great internet memory hole.

        Updates to the algorithms are described here. Note the comment dated 2007-8-7. It was a data assimilation problem, just as described in the emails highlighted in Steve’s rather twisted first post when one searches for “leaderboard” on CA. Yeah, they don’t like Steve too much, and I can understand why. Steve did find a problem, they identified the source of the problem, fixed it and put it on the updates page. Right out there for all the world to see.

        Notice also that asking nicely, as Nick Barnes and CCC have done, gets you an acknowledgment. As they say, honey catches more flies than vinegar.

        In your earlier reply to my original comment you stated that the code was not always freely available. This I am willing to concede. Yet it does appear that the stated reason for not releasing the code – that it was not portable and so was of minimal use – was correct. AFAIK nobody at CA was able to get the code to run.

        This does bring up an interesting question about the value of making scientific code available versus providing an accurate description of the algorithms used in the code (as GISS and also CRU have done; see the Oxburgh report for their experience here). I am of two minds on this. On one hand, a team of experienced software engineers can look at the code and find bugs in it. On the other hand, researchers can use openly supplied code without questioning whether it is correct (in the sense of implementing the described algorithms correctly).

        The experience of the genomics community is instructive here. A few years ago several papers had to be withdrawn after a major bug in a popular library was found. I’m sorry I don’t have links, it was covered in a Nature News article. Would the science in question have advanced more rapidly if everyone had implemented their own version of the algorithms? Certainly the embarrassment of having to withdraw multiple papers would have been avoided. There are both pluses and minuses to having complete access available.

      • sorry, spam filter has been acting up

      • Rattus Norvegicus

        Ron, I have to ask: did you look at the updates page? If you are not satisfied with the explanation of the "Y2K problem" there, perhaps the first couple of pages of this will satisfy you.

        It really was a data assimilation problem and not an algorithmic problem. And, as the 2007-08-07 entry in the change log I pointed to above notes, it has had no effect since USHCN provided updates. Of course, updates to the algorithm are described in the paper, which can be read here.

        There have been several blog reconstructions (Zeke and Tamino foremost among them, but JeffId did one too) which show that the UHI corrections lead to a small cooling when compared to the raw data. This is to be expected.

      • Rattus Norvegicus

        Brandon,

        Once again, from CCC:

        It is our opinion that the GISTEMP code performs substantially as documented in Hansen, J.E., and S. Lebedeff, 1987: Global trends of measured surface air temperature. J. Geophys. Res., 92, 13345-13372., the GISTEMP documentation, and other papers describing updates to the procedure.

        Once again, this started out as a response to Ron Cram, not you. There is no "culture of secrecy" in operation at GISS. They use only publicly available data, and they described the algorithms they implemented accurately. That is the judgment of the people who know more about the inner workings of GISTEMP than anyone else, with the exception of the scientists who did the original work.

        The one point which you did raise, I am prepared to concede. The gross measure of the results (annual anomalies for NH, SH and globe) are identical. Monthly anomalies are identical in all but 7 cases. So at the fine level of detail they are not identical in every case, just in better than 99% of the cases. This is easily seen in the graph I linked to.

      • Brandon Shollenberger

        Rattus Norvegicus, once again your response is non-responsive. You said the fact the graphs match indicates certain things. I pointed out that was untrue. Your response doesn’t address what I say, but instead, you offer a different reason to believe those certain things. This sort of thing has been systematic in your comments. I suspect if you start responding to the things I say, our discussion will be much better than if you continue to respond to things I’ve never said.

        As a side note, you keep saying this started with you responding to Ron Cram, not me. That is true, but it doesn’t really matter. In your very first response to me, you said, “Have fun, but both you and Cram are spreading lies.” You grouped me in with Ron Cram, and you’ve never retracted that. It’s kind of cheeky to group me in with someone then repeatedly insist you were responding to them, not me.

      • Rattus Norvegicus

        Once again, I am only representing what the people at CCC say. I am not about to retract what I said about Cram, and you came to his defense. There is “unscientific culture of secrecy” at GISS. If you admit this (and I think the case has been made, mostly by the folks at CCC) I will retract my charge.

      • Rattus Norvegicus

        That should read “there is no ‘unscientific culture of secrecy’…”

      • Brandon Shollenberger

        No you’re not. You said something wrong, and when I pointed it out, you changed the subject to what CCC said. As for retracting what you said about Ron Cram, I’ve never asked you to do so. I’ve never suggested you do so. I’ve never even mentioned the idea. The only thing I mentioned you retracting is your grouping me with Ron Cram.

        Also, I haven’t come to the defense of Ron Cram. The most I’ve done is say you should be specific with, and substantiate, your accusations. That would be true regardless of whether or not I agree with them.

      • Rattus Norvegicus

        So Brandon, what exactly is your position?

        My position is that there is no "unscientific culture of secrecy" operating at GISS. The results and conclusions of ccc-gistemp bear this out quite well.

        CRU has been slightly less exemplary, but then they hold data which are bound by IP restrictions. However, these data make up less than 10% of the data used in their analysis. Using the publicly available data and the algorithms described in the papers which cover CRUTEM, it is possible to reproduce their results (reproduce, not replicate) with a reasonable degree of accuracy. This was demonstrated by the Oxburgh commission. It doesn't seem like a huge problem to me, although their reaction to FOI requests from McIntyre and friends has been less than exemplary. I think the term best used here is "bunker mentality" rather than "culture of secrecy". A true "culture of secrecy" wouldn't have published an accurate description of their algorithms, which allowed a subsequent group to reproduce their results.

        On the basis of the well documented experience of the CCC folks I called Cram on his charge. I brought up CCC in my first comment and have stuck to discussing their findings.

        And Ron, sorry for missing your comment earlier in this thread. AFAIK, the "Y2K" problem had to do with incorrect data assimilation when USHCN changed from v1 to v2. Both before and after the fix, 1934 and 1998 were in a statistical tie for warmest on record in the US. 1998 has, until recently, always been the warmest year globally. The changes are discussed in the 2007-8-7 entry here. Some culture of secrecy.

        As far as code availability, yes they did not release the code until (what year was it? 2005? 2006? I don’t recall) but the stated reasons for not releasing the code proved to be true. It just was not portable and so was of minimal use. BTW, was anyone able to get that pile of shite to run? AFAIK the answer there is no.

        But this does bring up an interesting issue. Is open code availability always a good thing in science? I am of a mixed mind on this.

        On the one hand, determined software engineers can pore over the code and find bugs in it. On the other hand, it leads to other researchers using the code without questioning whether it is correct or not.

        The experience of the genomics community is illustrative here. Several years ago (and I don't have the links here, sorry) several papers had to be withdrawn after a severe bug was found in a popular library. It is to the credit of the researchers that they went "hmm?" when faced with an interesting question, but would science have progressed more rapidly if everyone had their own implementation of the algorithm? There is always a problem in trusting someone else's implementation of a complex algorithm. Just because it is available doesn't mean it is right.

        On a completely different track, ccc-gistemp seems to have become an interesting pedagogical tool. I am tempted to work on producing a UI which would make it easier to use so that people can explore the effects of various changes to the input data set and the algorithm itself. It might be fun.

      • Brandon Shollenberger

        Rattus Norvergicus, sorry for the delay in my response, but I don’t have much of a “position.” You said Ron Cram was repeating lies. I said you should be specific with what lies he was repeating, and you should substantiate your claims. That’s all.

        Ron Cram hadn’t given any specific information about GISS. If you’re going to say he’s repeating lies, you need to be explicit about what things he is repeating. You then need to show those things he is repeating are not just false, but known to be false. Being wrong is different than lying, so you are obliged to show this isn’t just a case of people being mistaken.

        Other than that, my only position is you consistently made things up. The most obvious example is you said I was repeating lies even though I never made any claims. That, and many other things you said in your responses to me, were completely nonsensical. Is it important? No. But it is worth highlighting when people say things which make no sense.

        Personally, I don’t think your representation of the situations is accurate. I also don’t think Ron Cram has been accurate either. I suspect the lack of specificity in both of your comments is partially to blame, and that’s part of why I said you should be specific.

        Then again, this probably isn’t the time or place for discussions of specifics.

    • Dan, I'd agree with you (and Steve McIntyre) that Emanuel gave disinformation on the hockey stick; he also presented "cherry-picked" data on past warming to emphasize the AGW effect (see earlier posts).

      But I’d agree with steven mosher that Muller simply reported (meaningless) preliminary non-results.

      More important will be what his committee will report when they have completed the study. Will they show some of the problems or will it simply be another whitewash?

      As I understand it, the committee will NOT investigate the impact of the UHI effect at this time.

      I think this is a pity, in view of the fact that some studies show UHI could account for around 50% or more of the observed 20th century warming (rather than less than 10%, or about 0.06C, as claimed by IPCC).

      Max

  37. Didn’t Willis have a few comments on this thread earlier?
    I don’t see them here anymore.

  38. Peter Wilson

    Judith

    I would like to take issue with your comments about Scott Armstrong’s testimony, which you called “bizarre” for some reason. You claim that he knows nothing about complex, dynamical climate models, but I disagree.

    What he knows, and demonstrated quite ably, is that they don’t work, at least not as predictive tools. The point of all his simple time series is that, despite their lack of sophistication, they work far better at actually predicting outcomes than the “complex system modeling” you apparently prefer.

    It is a fallacy that one needs to understand the internal details of a model in order to comment on its output – the output stands on its own, and must be compared to the natural system being modeled in order to assess the validity of the model. Dr Armstrong is highly qualified at such assessment, as his record and testimony demonstrate.

    Criticising Armstrong for his assumed lack of understanding of the internal workings of your favourite models is on a par with climate scientists who object to the involvement or criticism of statisticians who are not climate specialists. When one’s principal claim to fame relies on statistical interpretation of vast swathes of data, or on a claimed ability to meaningfully forecast the future evolution of the climate system, it is churlish (but, unfortunately, typical of the field) to reject the expertise of those specialising in statistics or forecasting, respectively, on the shallow grounds that they are not “climate scientists”.

  39. Judy, it never even occurred to me that Muller had asked to testify, I assumed he (and the others) was (were) selected and invited.

    I saw Willis’ post at WUWT first and I asked there if he was testifying (was asked to) because of BEST. Or if he was asked for some other reason and he himself chose to speak of BEST. Someone there said he had testified before, which is a perfectly acceptable reason. So theoretically he could have brought forth arguments such as in the past. Therefore, I’m still not clear whether he or the committee chose his subject matter.

    If the choice was BEST or nothing, why could he not choose nothing – i.e. not to appear, as his “findings” were preliminary, with no “evidence” as it were? I fear that Stan’s conjecture might be relevant. Or was he compelled to appear, in effect subpoenaed?

    I do find the affair a bit disturbing, although of course I agree we can only know the quality of his work when he publishes the whole shebang. That this event was IMO not handled properly is one issue; the study itself will be another. Unfortunately, the “take-home” this time has every chance of being telegraphed around the world free of caveat.

    I personally do not have a particularly good taste in my mouth; in the face of all the horribleness I’ve discovered in the past 3 years, a nagging little suspicion which I will be discreet enough to keep to myself has occurred to me.

  40. Here is the BAMS-published CERES data as presented by Kevin Trenberth.

    http://pielkeclimatesci.wordpress.com/2010/04/27/april-26-2010-reply-by-kevin-trenberth/

    It is quite evident from net radiative flux that the planet warmed in the past decade.

    Most of the relatively large change in flux was a result of cloud changes. Most of the warming seems to have been in the shortwave.

    This follows quite large changes around 2000 – http://isccp.giss.nasa.gov/zFD/an2020_TOTnet_toa.gif

    It seems that clouds change on an interannual to decadal timescale – eg Zhu et al 2007 – https://www.cfa.harvard.edu/~wsoon/EarlyEarth07-d/ZhuHackKiehlB07.pdf

    • “It seems that clouds change on an interannual to decadal timescale – eg Zhu et al 2007 –” what I meant to say there was: in a way associated with sea surface temperature.

      This global warming should show up in ocean heat content – as it seems to for von Schuckmann and colleagues in the deeper ocean to 2000m – http://archimer.ifremer.fr/doc/2009/publication-6802.pdf – which in itself raises an interesting question.

      It seems quite clear that the planet has warmed a little since 2000 – just need to figure out why.

      • Chief
        I hope I got this right.

        I am struck by the plot showing reflected SW radiation has decreased in the last ten years. The previous data that I saw suggested that it started to increase in 1998. If reflected SW is decreasing, presumably from fewer clouds/lower albedo, then we should be warming, yet OLR seems to have decreased indicating cooling. Is that your point? Is this the missing heat paradox? I presume your last link suggests that the missing heat is deep down in the ocean? Have I understood you correctly?

      • Heat can’t get to 2000 meters down on a decadal time scale. All of this heat-in-the-ocean theory needs a good GCM for the ocean, something we do not have. Nor do we have any long-term data to make one with. The meager long-term 3D ocean temperature data we have is much worse than the poor atmospheric data. Let’s not confuse wild speculation based on virtually no data with actual facts. We have enough of that already.

      • I am very careful not to toss mad ideas around with little reference to real-world data. The CERES data shows that the planet warmed. The von Schuckmann paper used ARGO data to 2000m. Four out of five ocean heat content data sources integrate to 700m and show little change; von Schuckmann and colleagues integrate to 2000m and show some warming.

        On just what would you base your wild claim about heat not getting to 2000m?

      • There is a comparison with CERES, ISCCP-FD and Project Earthshine data. – http://www.bbso.njit.edu/Research/EarthShine/

        There was a big change in the late 1990’s – with more cloud – and little change since.

        The net CERES data slopes up, showing the planet gaining energy. By convention, net = −LW − SW.

        There was a little warming in the LW – mostly towards the end of the period – and a little more in the SW. Less energy was leaving the planet in both LW and SW. That is indeed the so-called missing heat.

        The von Schuckmann et al (2009) paper seems to show – from ARGO data – changes of heat content in the deep ocean not found in the top 700m. This is a very puzzling result – but it seems that the data is what it is.

        If CERES is understood, it provides a reliable measure of changes in the heat content of the planet. Energy in minus energy out equals the change in global energy storage. This can be expressed as:

        Ein/s – Eout/s = d(GES)/dt. As energy out decreased in both LW and SW, the rate of change in global energy storage is positive and the planet warmed.
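        A back-of-the-envelope sketch of that bookkeeping, with made-up flux anomalies (in W/m2) purely to illustrate the sign convention:

            # Toy numbers only; the sign convention follows the text above:
            # a decrease in outgoing LW and SW means less energy leaving, so d(GES)/dt > 0.
            delta_e_in = 0.0     # assume incoming solar is unchanged
            delta_olr = -0.3     # hypothetical change in outgoing longwave, W/m2
            delta_rsw = -0.5     # hypothetical change in reflected shortwave, W/m2

            d_ges_dt = delta_e_in - (delta_olr + delta_rsw)
            trend = "gaining" if d_ges_dt > 0 else "losing"
            print(f"d(GES)/dt = {d_ges_dt:+.1f} W/m2 -> planet {trend} energy")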

        I think we should lay to rest the endless and quite pointless quibbling about surface warming or the lack thereof in the past decade.

      • Chief Hydrologist – I was looking at this just last night:

        http://www.agu.org/pubs/crossref/2011/2010JC006464.shtml

      • Interesting – it does confirm something doesn’t it.

      • Antarctic Sea Ice Extent data from recent years would suggest that the deep CircumAntarctic warming noted in JCH’s link isn’t coming from the surface.
        ===============

      • Joe Lalonde

        Chief,

        Look at the surface salt changes since 1967 and the atmospheric pressure changes since 1948.

        Science has missed a whole world of circular motion and the density changes that circular motion has generated. Interestingly enough, stored energy is compression by circular motion.

  41. Joe Lalonde

    Judith,

    The deeper I delve into the world of motion and science, the more I find that current science is in a whole world of hurt.

    And the funny thing is they do not want to know or change.

  42. ferd berple

    JC. You left out Armstrong’s punchline. I’ve read his presentation. He says:

    1. Current IPCC climate models do not meet the requirements for scientific forecasting.
    2. A simple linear model that predicts the climate will not change currently outperforms the IPCC models.

    The IPCC has itself recognized that the models lack predictive power, and has now taken to calling their outputs projections rather than predictions.

    Further he says:

    3. Predictions of doom and gloom are nothing new. Past predictions have routinely been shown to be wrong, and government action from the predictions has routinely made the problems worse. As such, we can have little confidence that climate change is any different.

    Armstrong’s punchline – why Climate Science hates Armstrong:

    “As we have noted, simple methods are more appropriate for forecasting climate change. Large budgets therefore are not required.”

    Climate Science has a conflict of interest. Any proposal suggesting that the solution to climate change is simply to reduce government spending on climate science is not going to sit well with climate scientists who rely on government funding.

    • Climate science regards Armstrong as irrelevant, with significant justification. Politically he is not irrelevant, since he was called to testify. Beyond his general common sense principles (which no one argues against), his dismissal of global climate models is unjustified and his characterization of climate model scenario simulations as forecasts is incorrect.

      • Peter Wilson

        Judith

        You regard Armstrong’s dismissal of Climate models as unjustified, but fail to provide any rationale. While they may be very intricate and sophisticated, they are also very clearly inadequate for the purpose of forecasting.

        And frankly, calling them scenarios is a bit lame. If they weren’t intended to give some indication as to the likely future evolution of the climate system, what on earth are they there for? Besides, it’s only ever when they are challenged on their results that they become “scenarios” – in just about every piece of “climate research” I have read in the past few years, they are treated as predictions, or at the least projections (the difference being..?).

        If you want to sell the idea that they are not intended to predict the future climate, you really need to sell this idea to the climate research community first, as they have obviously not got the memo.

        This is, I believe, just another case of climate scientists rejecting expertise from outside their narrow field, and thus making fundamental errors which are glaringly obvious to those without the special climate science blinkers on.

      • ferd berple

        Isn’t that really a symptom of the ongoing problems with Climate Science? The models are not performing. As you try to increase resolution to provide regional forecasts, they become unstable.

        Someone from outside of climate science, with considerable forecasting expertise, points out likely reasons why the models are not performing, and his testimony is labelled “rather bizarre”.

        Everyone else has their “punch line” quoted, except Armstrong. That demonstrates bias. You may not agree with his message – that is no reason to selectively censor. Armstrong is hardly alone in pointing out the problems with climate models, or the problems that result when governments try to pick winners and losers.

        Armstrong is irrelevant to Climate Science only to the extent that Climate Science believes it has all the answers and has nothing to learn from experts outside of Climate Science.

      • I think the problem is that scientists consider Armstrong’s principles of forecasting to be bunk

      • Wanna bet?
        =====

      • Armstrong’s punch line was that climate science should be defunded. Given that Congress is looking for big cuts the scientists might want to pay attention to that one. I am certainly prepared to argue that we don’t need any more AGW-based modeling.

      • I think Armstrong should be defunded

      • Dr. Curry,
        You regard Armstrong as irrelevant, but he is not. He’s right. Simple models are more reliable than more complex computerized models. Pilkey has also published in the peer-reviewed literature about the failings of computerized models. Climate science wants to ignore him too, but it cannot.

        If you are willing to take on the sky dragon ideas, why not take on the likes of Armstrong, Pilkey and Tennekes?

        Computer models are workable for short term forecasts, like weather forecasts, but have no predictive value longer term. There is no reason to have confidence in GCMs.

      • I do not regard Armstrong or Pilkey as experts on the epistemology of complex nonlinear numerical models. The issue of actual forecasting is irrelevant for the climate problem; no one can forecast the sun, volcanic eruptions, etc. on timescales of decades to centuries. That is why climate model simulations are referred to as scenario simulations or projections. Please refer to my previous post on “what can we learn from climate models” http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/. If you think the answer is “not much”, it is a heck of a lot more than we can learn from the simplistic modeling approaches of Armstrong.

      • If the modeling results were not forecasts they would have no policy implications. They are clearly being interpreted as saying that if we continue our emissions BAU then bad things will, or are likely to, happen. That is a forecast.

        That these forecasts may be contingent on certain natural events not happening does not make them any less forecasts. You can call it a contingent forecast but it is still a forecast. For example the claim that “If you drink this stuff it will probably kill you” is a forecast. It is in fact a strong reason not to drink.

      • Of course they’re forecasts. J, simple models trump complicated models even in complicated systems when the results of simple models forecast better than complicated models. Ockham knew; we’ll get it yet.
        ============

      • Simple models actually show high climate sensitivity. I am thinking of radiative convective models.

        It’s funny because skeptics were kind of arguing the models need to be more complicated (need to represent clouds in more detail). Now they are saying actually such a complicated model would be worse.

        Make yer minds up yeah?

      • Dr. Curry,
        I agree that we can learn from complex models and that we can learn things we cannot learn from simple models. However, I completely disagree that complex models are in any way predictive.

        Have you read the paper by Pilkey? Have you read the criticisms of modeling by Tennekes?

      • Yes, that is my point: people are not using the complex models for forecasting. They are doing scenario simulations, or projections.

      • Dr. Curry,
        This sounds like semantics to me. Your “scenario simulations” are predictions under certain circumstances (such as the level of CO2 emitted by mankind). In other words, projections. When I say we can learn from complex models, I mean we can learn that we do not understand climate yet. The models make it clear we do not understand clouds, water vapor, oceanic oscillations, internal climate variability, etc.

        And I still don’t know if you have ever read the paper by Pilkey? Or if you are familiar with the criticisms of Tennekes?

        Where does this faith in computer models come from? Scientists are supposed to be skeptical. Why do so many of them assume a computer can generate projections 100 years into the future when the models are so consistently wrong when trying to predict just a few months into the future?

      • Ron, read my article again on what can we learn from climate models.
        http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/

      • Dr. Curry,
        Thank you for asking me to reread your earlier post on climate models. Your more recent comments seem to indicate a much greater faith in models than this post indicates.

        Thankfully your post does indicate you are aware of some of the complexity issues raised by Tennekes (although perhaps they were brought to you by a different researcher). However, I did not see in your post any awareness of the studies by Armstrong or Pilkey.

        You conclude:
        This post is envisioned as the first in a series on climate modeling, and I hope to attract several guests to lead threads on this topic. Future topics that I am currently planning include:

        *How should we interpret simulations of 21st century climate?
        *Assessing climate model attribution of 20th century climate change
        *How should we assess and evaluate climate models?
        *The challenge of climate model parameterizations
        *The value of simple climate models
        *Seasonal climate forecasts
        *Complexity (guest post)

        I would suggest a post (perhaps a guest post by Armstrong or Pilkey) on the question of how to demonstrate that long-term forecasts by complex computer models have any predictive value. This has NEVER been demonstrated, only assumed.

      • Ron, this has never been assumed. No one (at least no climate modeler, anyway) is assuming that climate models make forecasts on timescales of a century. We cannot predict what the sun is doing or when a big volcano will erupt, for starters. We can make scenario simulations (or projections).

      • While the climate models are not expected to provide forecasts, the idea is certainly that they tell us about possible outcomes, and something about the probabilities of various possible outcomes, under different CO2 emission scenarios. Without this belief they would not have any role in the discussion of policy alternatives.

  43. The reason that simple models outperform complex models is at the heart of the problems with climate forecasting. Spectral (FFT) analysis of historical temperatures over millions of years shows peaks at regular intervals such as one day, one year, 11 years, 65 years, 1,100 years, … 40k years, 100k years, etc.

    The 100k-year peak is especially large. Similar to the IPCC rationale for attributing warming to CO2, the change in total solar irradiation resulting from the 100k-year orbital cycle is not sufficient to explain this. So, if we apply the IPCC rationale, as CO2 also shows a strong 100k-year cycle, it must be CO2 that is driving the 100k-year climate cycle.

    This is the logic that the IPCC and Climate Science use to attribute large 20th century warming to CO2. Since we don’t know what is causing the temperature rise, it must be human-produced CO2. Yet here we have a very large 100k-year cycle that dwarfs the 20th century temperature rise and is in lock step with CO2, yet we can be fairly certain that the 100k-year cycle is not a result of human-produced CO2.

    Yet, both CO2 and temperature peak very strongly with a 100k year cycle, and this cannot be explained by the change in TSI.
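    As a minimal illustration of that kind of spectral analysis – the series below is synthetic, with made-up cycles and amplitudes – an FFT recovers the dominant periods like this:

        import numpy as np

        dt_kyr = 1.0                                   # one sample per thousand years
        t = np.arange(0, 800, dt_kyr)                  # 800 kyr of synthetic "temperature"
        series = (1.0 * np.sin(2 * np.pi * t / 100)    # dominant 100 kyr cycle
                  + 0.4 * np.sin(2 * np.pi * t / 41)   # weaker 41 kyr cycle
                  + 0.2 * np.random.randn(t.size))     # noise

        spectrum = np.abs(np.fft.rfft(series - series.mean()))
        freqs = np.fft.rfftfreq(t.size, d=dt_kyr)      # cycles per kyr

        top = np.argsort(spectrum[1:])[::-1][:3] + 1   # strongest peaks, skipping frequency zero
        for i in top:
            print(f"period ~ {1 / freqs[i]:.0f} kyr, amplitude {spectrum[i]:.1f}")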

    Can the GCMs account for this? No. Why? Because the GCMs make a HUGE assumption. They assume that because the models are complex, and because climate is complex, the models must reflect reality.

    Climate Science modeling theory:

    Climate = Complex
    Models = Complex
    therefore:
    Climate = Models

    What the climate models do not account for is what is not known. Climate Science assumes the amount that is unknown must be small, and therefore has little impact, but this is only an assumption.

    What Climate Science modeling theory missed:

    Climate = Complexity + UNKNOWN
    Models = Complexity
    therefore:
    Climate ≠ Models

    Climate Science is like the “clockwork” physics of the 19th century, when it was largely believed that everything worth discovering had already been discovered; all that was left was to fill in the fine details. We now know that a “clockwork” universe is an illusion.

    The purpose in using simple models is to recognize that the most important part of Climate Science is not the Complexity, which is what the Models are trying to recreate.

    The single most important part of Climate Science is to discover what is yet UNKNOWN. This is the value of simple models, as has been demonstrated time and time again in other fields.

    Simple models allow you to tackle what you don’t know. The GCMs cannot do this. The complexity of the GCMs hides the UNKNOWN, buried under the complexity of the models themselves.

  44. Jeffrey Davis

    Emanuel’s remarks should be tattooed on the forehead of every denier.

  45. I am looking at the Armstrong testimony

    “Exhibit 4” in the document is a graph of his “no change model”, which is the one being touted by all the skeptics as a “simple model” that does just as well as the IPCC models.

    It’s shameful that skeptics jump to support something they don’t understand. If they did understand it shame on them for supporting junk science.

    The problem starts with this statement by Armstrong:

    “We conducted a validation test of the IPCC forecast of 0.03°C per-year increase in global mean temperatures. We did this starting roughly with the date used for the start of the Industrial Revolution, 1850.”

    Well, that’s quite shocking. Did Armstrong really create a linear 0.3C/decade warming trend since 1850 and call that the IPCC forecast?

    So I looked up the paper Armstrong published this in, and found that YES, that is what he had done. Rather than admit it was impossible to test long-range IPCC forecasts against HadCRUT data (the first IPCC forecast was in 1991), he instead decided to fabricate such an IPCC forecast. The hand-waving justification is given as:

    “It is not unreasonable, then, to suppose, for the purposes of our validation illustration, that scientists in 1850 had noticed that the increasing industrialization of the world was resulting in an exponential growth in “greenhouse gases”, and projected that this would lead to global warming of 0.03C per year.”

    So apparently, when you don’t have enough data, the 1001 (or whatever it is) “principles of forecasting” say to just fabricate something from your imagination and call that a forecast of the model you are evaluating.

    He defines the “IPCC forecast” in such a way as to say the IPCC in 1850 would have been predicting a steady 0.3C/decade warming (so 4.5C warmer by 2000). This is why his simple model does so much better. It’s because the IPCC forecast he’s testing against is not actually from any GCM output, but from a bizarre fantasy Armstrong has come up with.
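    The arithmetic behind that objection is straightforward (the ~0.8C observed figure below is an approximate HadCRUT-based value, used only for comparison):

        rate_per_year = 0.03        # the warming rate Armstrong attributes to "the IPCC", C/yr
        years = 2000 - 1850         # the back-cast period used in the testimony
        implied = rate_per_year * years
        observed = 0.8              # approximate observed 1850-2000 warming, C

        print(f"Implied warming by 2000: {implied:.1f} C")    # 4.5 C
        print(f"Observed warming:       ~{observed:.1f} C")
        print(f"Overshoot:              ~{implied - observed:.1f} C")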

    In hindsight it is rather funny that skeptics pored over Muller’s testimony for problems, which if any were rather small and petty (ooh, he said 1.2C warming on land!), when the real error in Armstrong’s testimony has gone completely unnoticed, yet it undermines his entire argument that simple models are better than IPCC forecasts. That conclusion has in fact been heavily promoted by skeptics.

    • Yes, I noticed this too. Incredible what kind of ‘expertise’ gets to testify in front of Congress these days. How can they take 0.03 C/year as a rate and apply it back to 1850? How was that even published? The International Journal of Forecasting is for economic forecasting, so I doubt if any climate experts saw this for review.

    • Rattus Norvegicus

      Actually, I just took a look at the GISS chart here, and the 1.2C number “since the early 1900’s” is correct if you take as a starting point the coolest year (1904?) and an ending point of 2010. So even this supposed error is not an error!

      • Brandon Shollenberger

        It’s almost as though you are trying to damn the very person you’re defending. This is what Richard Muller said:

        1.2 degree C land temperature rise from the early 1900s to the present. This 1.2 degree rise is what we call global warming.

        Your defense of Muller is that if we cherry-pick a starting point, we can get a rise of 1.2 degrees. However, Muller said the rise in question was what we call global warming. This means your claim is that Muller says global warming is the increase in temperature from a cherry-picked year to the present.

        I’d prefer to think Muller made a mistake than think he is an incompetent or dishonest person.

      • Brandon Shollenberger

        In case it wasn’t clear in my above comment, my issue with this defense isn’t just the fact it relies upon cherry-picking. Cherry-picking is what I referred to with “dishonest.” “Incompetent” was referring to the fact you cannot just compare single years to each other to determine how much the planet has warmed.

        The idea that an important member of BEST thinks we can use a single year’s value like this is insane. It would discredit everything he says on the topic.

      • “The idea that an important member of BEST thinks we can use a single year’s value like this is insane. It would discredit everything he says on the topic.”

        Well it doesn’t discredit everything.

        My eyebrow was first raised when, almost a year ago, I heard him state that it was somehow ethically/scientifically wrong for Hansen to both predict next year’s GISTEMP temperature anomaly and also maintain that record.

        I guess he will figure out stuff pretty quick, but until then he might say a few things that are wrong.

      • Brandon Shollenberger

        I think it does discredit everything he says on the topic. It’s an extremely basic issue, and if he has that messed up a view on it, how can we trust what he says on anything else? He may be right about them, but he wouldn’t be a credible source anymore.

        Of course, this is all based upon the idea he intentionally picked a single year as a starting point, and it wasn’t just a slip-up. I don’t believe that is true for a moment.

      • Rattus Norvegicus

        Brandon,

        I actually agree with you here. He shouldn’t have done it this way. He should have used decadal running means or decadal averages, which give a better picture of what is happening.

        Using the average for 1900-1909 (the first decade of the 20th century) and the average for 2000-2009 (the first decade of the 21st century), I come up with a warming of 0.824C.
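        A sketch of that decadal-average comparison; the anomaly file name and format here are hypothetical, and the GISS values themselves are not reproduced:

            import csv

            def load_anomalies(path):
                # Read rows of (year, anomaly_c) from a CSV file into a dict (hypothetical format).
                with open(path, newline="") as f:
                    return {int(row[0]): float(row[1]) for row in csv.reader(f)}

            def decade_mean(anoms, start_year):
                # Mean anomaly over start_year .. start_year + 9, skipping missing years.
                vals = [anoms[y] for y in range(start_year, start_year + 10) if y in anoms]
                return sum(vals) / len(vals)

            anoms = load_anomalies("giss_annual_anomalies.csv")   # hypothetical input file
            warming = decade_mean(anoms, 2000) - decade_mean(anoms, 1900)
            print(f"Warming, 1900s decade to 2000s decade: {warming:.3f} C")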

      • Brandon Shollenberger

        Oddly enough, I’m starting to disagree with myself. I downloaded the full testimony from Muller, and it disturbs me quite a bit. For example, he says:

        Human caused global warming is somewhat smaller. According to the most recent IPCC report (2007), the human component became apparent only after 1957, and it amounts to “most” of the 0.7 degree rise since then.

        Now then, I don’t know where he got the 1957 number from. There’s no real reference, and I haven’t gotten around to searching the AR4 for it. For now, I’ll accept it is in there, but I’m suspicious.

        Regardless, the value he gives is derived from using a single year for comparison. The problem is he specifically highlights the fact he did it. He outright said he was doing it. That’s ridiculous.

      • Brandon Shollenberger

        I don’t know how this comment got submitted. I don’t remember hitting the button, and the page never changed. Sorry it slipped in!

      • I would use the graph from the testimony itself, linked below. You can see that when Muller says “early 1900’s”, he means the period where the anomaly was in the −0.5 range, compared to the current +0.7 range. The case for 1.2 C is easily made from this.

        http://berkeleyearth.org/Resources/Muller_Testimony_31_March_2011

      • Brandon Shollenberger

        That graph is basically the same as the one Rattus Norvegicus offered as far as GISS is concerned. It does have other records in it, but the GISS record is clearly the same in both.

  46. John Carpenter

    Brandon,

    After listening and reading his testimony, I think he is referencing the 2007 IPCC AR4 for that date. The direct quote is:

    “According to the most recent IPCC report (2007), the human component became apparent only after 1957, and it amounts to “most” of the 0.7 degree rise since then. Let’s assume the human-caused warming is 0.6 degrees.”

    • Brandon Shollenberger

      I know he referenced the most recent IPCC report. The problem is the report is large, and there is nothing to help find it. He didn’t offer a quote, paraphrase, section number, page or anything. That makes it difficult to verify his claim, and considering the values he gave for the amount of warming, it is hard to believe. It’s an oddly specific value.

      As it happens, I’ve read all of the assessment report which deals with surface temperature records. I don’t remember anything like what Muller said being in the report. Google didn’t help. I decided to skim through chapter 3 of it again, and I didn’t see anything to support his claim. A text search for “1957” in it didn’t pull up any relevant results.

      Then again, it’s a big report, and I may have missed whatever Muller was thinking of.

      • John Carpenter

        I understand what you are after now… sorry, can’t help you there. Good luck.

      • Brandon Shollenberger

        No problem, and thanks. After doing more looking, I’m fairly convinced the IPCC didn’t say what he claimed it said. Instead, it probably said something about 1957, and that led Muller to conclude that an anthropogenic signal was only evident after 1957. This is not what he said, but it is the best interpretation I can come up with. It’s also very disheartening. Muller claiming the IPCC said something it never said is a very bad sign.

        For what it’s worth, a search of chapter 3 for 1957 brings up three results. All three are in discussions of how a change (different one each time) improves the quality of conclusions. That could be how Muller picked that particular year for his claim.

        I hope I’ve just missed something, but it doesn’t seem like it.

      • John and Brandon

        Any possibility that Muller had a typo in his presentation and 1957 was supposed to be 1975?

        IPCC refers to this date frequently (as the start of its “poster period” for AGW).

        Max

      • Brandon Shollenberger

        That’s a good thought. I didn’t think about the possibility that he was 20 years off in his testimony due to a typo, as that seems a remarkable mistake to make when testifying before Congress. However, it would make sense. The IPCC report refers to 1975 a number of times as the point at which warming resumed after the period of no warming (or possibly cooling) around the middle of the century.

        The IPCC report still doesn’t say what he claims it said, but at least it would be pretty clear what Muller was basing his idea on if it really was just a typo.

      • 0.7 C would apply to 1975 too from his graph, as it was quite flat between ’57 and ’75, so he mistakenly might have understated the case.

      • Brandon Shollenberger

        That’s a good, and funny point.

      • Rattus Norvegicus

        Seeing as the anomaly (from GISS, since it is the easiest table to read) for 1975 is −0.02 vs. 0.08 for 1957, that gives an increase of 0.85C since 1975 vs. 0.75C since 1957.

        There is a valid reason for choosing 1975 or 1976 as a breakpoint. Tamino (tamino.wordpress.com) did a good post doing a breakpoint analysis of the data, although you will probably have to look at the http://www.skepticalscience.com/ wayback machine index to find it. He came up with a year of 1977 (I think, don’t recall exactly) for a definite change in the behavior of the system, and just looking at the GISS table shows clearly that something happened in the mid-1970s.
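        A minimal sketch of one simple breakpoint search (not Tamino’s method): fit straight lines on either side of each candidate split and keep the split with the lowest total squared error. The series below is synthetic, built with a known break in 1975, purely for illustration:

            import numpy as np

            years = np.arange(1900, 2011)
            temps = (np.where(years < 1975, 0.0, 0.017 * (years - 1975))
                     + 0.1 * np.random.randn(years.size))   # synthetic anomalies with a 1975 break

            def split_error(x, y, k):
                # Total squared residual of separate linear fits to the two halves.
                err = 0.0
                for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
                    coeffs = np.polyfit(xs, ys, 1)
                    err += np.sum((ys - np.polyval(coeffs, xs)) ** 2)
                return err

            candidates = range(10, years.size - 10)          # keep at least 10 points per side
            best = min(candidates, key=lambda k: split_error(years, temps, k))
            print("estimated breakpoint year:", years[best])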

        So I vote for the transposition error. Sort of like 2035 for 2350, eh?

      • Jim D and Brandon

        The observed linear warming (HadCRUT) from 1957 to 2010 was around 0.7C.

        From 1975 to 2010 it was about the same.

        Muller will supposedly tell us whether these numbers are reliable or not.

        Let’s see what his committee comes up with.

        Unfortunately, I don’t think it will change many minds, since both sides already “know” the answer. Right?

        But, no matter how it comes out, the AGW proponents will lose: either it will be seen as a “whitewash” or it will demonstrate that the temperature record is flawed.

        Max

    • John Carpenter and Brandon Shollenberger

      Muller’s statement is logical (but at the same time worrying to me).

      According to the most recent IPCC report (2007), the human component became apparent only after 1957, and it amounts to “most” of the 0.7 degree rise since then. Let’s assume the human-caused warming is 0.6 degrees.

      IOW he says 0.6C of the observed 0.7C warming from 1957 to 2010 can be attributed to humans (i.e. principally CO2).

      HadCRUT does, indeed, show 0.68C linear warming 1957 through 2010 (linear trend of 0.124C per decade).

      Mauna Loa shows increase from 314 to 389 ppmv CO2 over same period.

      Using IPCC’s 2xCO2 CS = 3.2C, we should theoretically have seen 1.0C warming from 1957 through 2010 (including the portion “hidden in the pipeline”), so 0.6C warming from humans makes sense to Muller.
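      The arithmetic, using the numbers above and the standard logarithmic CO2 relationship (ignoring lags and non-CO2 forcings), works out like this:

          from math import log

          sensitivity_2xco2 = 3.2              # IPCC AR4 best-estimate equilibrium sensitivity, C
          co2_1957, co2_2010 = 314.0, 389.0    # Mauna Loa concentrations quoted above, ppmv

          warming = sensitivity_2xco2 * log(co2_2010 / co2_1957) / log(2)
          print(f"Theoretical equilibrium warming, 1957-2010: {warming:.2f} C")   # roughly 1.0 C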

      This is because he accepts a priori the model-based IPCC 2xCO2 estimate of 3.2C on average and the Hansen “hidden in the pipeline” hypothesis.

      IMO this is a bad sign for his objectivity in assessing the validity of the surface temperature record, because he already “knows” what “it should look like”.

      Max

      • Brandon Shollenberger

        I think you missed what the problem is (or at least, what one problem is). Muller (seemingly) picked a single year and said warming since then was global warming. That would be cherry-picking, and it would alter his results by a significant margin.

        That’s ridiculous for a person who is supposed to be working on a great temperature record in order to help resolve the dispute over surface temperatures.

  47. Judith, Armstrong’s point is basically that the IPCC reports make policy recommendations based on the ominous results of the various models, in essence and for all practical purposes treating the results as forecasts. The whole alarmist position advocating massively expensive government intervention depends on the model scenarios being perceived as likely outcomes of increasing concentrations of CO2 in our atmosphere. While you, and perhaps most climate scientists, understand the distinction between scientific forecasts and “scenarios”, I would suggest Congress and the general public do not. And I don’t see a huge effort from the AGW camp to reconcile the difference as it pertains to setting public policy.

    And isn’t Armstrong really attacking the argument of those who call for major and expensive government intervention without the benefit of scientific forecasts? It seems to me Armstrong is simply pointing out that we have been down this road before, with dubious results, on virtually all the other sky-is-falling issues of the last 150 years. Is Armstrong not truly an expert in forecasting science and methodology? I don’t feel you accurately summarized his points. Is the purpose of these hearings to explain the merits of climate models, or to help direct the policy of our government moving forward? I feel Armstrong’s report speaks to that point.

    • The IPCC does not make policy recommendations. The UNFCCC makes policy recommendations based upon the IPCC reports. How to incorporate uncertain scientific evidence effectively into policy recommendations is a huge challenge. The scientific evidence regarding climate change is a relatively minor player in all this at this point; politics, economics and energy policy are the key drivers.

      • The science is still the central issue because the claim is that “the science is settled” and disaster looms unless we change our policies and destroy our economy. Without the science there would be no effort to change our politics and destroy our economy.

      • True, but this is only because the science is so uncertain. Also, there are basic policy positions implicit in the IPCC WG 2 & 3 reports. They are exploring alternative actions to deal with dangerous AGW, hence they assume and imply that dangerous AGW is real and that action is needed, which is a policy position. Within this assumed position they do not make recommendations. It is like the climate models which assume AGW then explore alternatives from within that assumption. It is all in the framing.

      • Agreed, it’s an issue of overly narrow framing.

      • The point being that the IPCC Summary for Policymakers is the portion of the document most cited in calls for action. Would you not agree that this summary makes projections that are portrayed as, or at least understood (by the masses) to be, forecasts of the future climate? My original point re Armstrong was that he took issue with using model scenarios, as you would have it, to justify a case for government intervention, when such policies should rely on scientific forecasts, which do not now exist.

      • Rob Starkey

        Judith – what you have written above, “The IPCC does not make policy recommendations”, is IMO incorrect.

        The IPCC’s AR4 is written with a summary for policymakers, and includes multiple sections where potential policies are summarized. Much of the report outlines “sustainable development” policies which are advocated. Any unbiased reader looking at the report would conclude that it is necessary to reduce CO2 emissions or a problem for humanity will be the unavoidable result.

        I do not understand your position on this one

      • I don’t think that’s quite accurate. The IPCC AR is organized around policy making, and the respective WG questions are posed in such a way as to point to policy. If the answer to the WG1 question is anything but serious anthropogenic forcing, the WG2 question becomes moot. And if the answer to the WG2 question is anything but serious consequences, the WG3 question becomes moot. And if the WG3 answer is anything but “we must do something, and we must do it now”, the SPM becomes moot.

        And yet all groups work concurrently, which means that they all presume the outcome of the upstream group. Even more absurdly, the SPM is published three months before any of the WG reports.

        They most certainly are doing what they’re doing with outcomes and policy in mind, or they have a pretty amazing crystal ball.

      • I agree that framing and circular reasoning is a big problem for the IPCC.

      • I’d take that a step further. In retrospect, the current mess was inevitable from the way things were initially set up. The fatal flaw in the way it was constructed was that they were supposed to survey the literature and then recommend policy. A step in there of dotting “i”s and crossing “t”s was left out.

        Belatedly, BEST is supposed to do just that.

      • “The IPCC does not make policy recommendations.” I think that should read “The IPCC [should] not make policy recommendations.”

        Selected readings from the AR4 working group 3 Summary for Policy Makers:

        “Public RD & D investment in low emissions technologies have proven to be effective in all sectors.”

        “23. Policies that provide a real or implicit price of carbon could create incentives for producers and consumers to significantly invest in low-GHG products, technologies and processes. Such policies could include economic instruments, government funding and regulation (high agreement, much evidence).”

        “24. Government support through financial contributions, tax credits, standard setting and market creation is important for effective technology development, innovation and deployment. Transfer of technology to developing countries depends on enabling conditions and financing (high agreement, much evidence).”

        Then there is the nifty policies chart which lists “Selected sectoral policies, measures and instruments that have shown to be environmentally effective in the respective sector in at least a number of national cases.” I guess they are not recommending these policies, just sayin’ that they are “environmentally effective.”

        Somehow I missed the “taxation,” “government funding and regulation,” and “transfer of technology” chapters in my science books.

        The IPCC AR4 doesn’t contain policy recommendations just like it doesn’t contain numerous citations to advocacy group grey literature.

        The politics and science have been commingled by the consensus advocates, including many of the scientists, so thoroughly and for so long that you often can’t tell where one leaves off and the other begins.

      • Fair enough, WG3 definitely strays into policy recommendation land.

      • There’s kind of a fine line that they seem to cross over. It’s the mission of WG3 to, among other things, describe policy options. The actual recommendation part is supposed to be for others. But praise for a policy option can be seen as advocacy.

        A good example is, as Gary quoted, “Public RD & D investment in low emissions technologies have proven to be effective in all sectors.” It’s on the one hand vague and banal enough not to mean anything in particular, and at the same time it seems to be cheerleading. This is how they cross the line, while probably not even realizing it. That would be different from a more neutral and specific statement such as “we expect the price of solar panels to be $XX/M^2 by 2015” (with an appropriate citation). See the difference?

      • “Public RD & D investment in low emissions technologies have proven to be effective in all sectors.”

        Don’t you just love how they make these kinds of broad, sweeping statements absent concrete examples? I’m trying to think of an example that fits that description, and coming up short. Maybe it’s all in the definition of “effective”.

      • ChE

        ‘Maybe it’s all in the definition of “effective”.’

        As a shareholder, for example of GE, I’d say that the “public R&D investment in low emissions technologies” (i.e. taxpayer-funded payments to GE) was very “effective”.

        Without them, the share price development under Jeff Immelt would have been even worse (it dropped from a peak of over $40 three years ago to well below $20 last year and has now gradually crept up to around $20).

        And I’m sure the free US federal tax ride on $6 billion corporate earnings in 2010 also helped.

        And, hey, GE helped the current administration get elected – also very “effective”.

        So Immelt is now Obama’s “Job Czar” and I’m certain he will be “effective” in creating some GE jobs (somewhere in this world) with these extra taxpayer funded goodies.

        “Effectiveness” (like “beauty”) is in the eyes of the beholder, ChE.

        Max
