Atlantic vs Pacific vs AGW

by Judith Curry

I hope this will lead to a broader discussion about the contribution of natural variability to local climate trends and to the statistics of extreme events. – John Michael Wallace

The Chen and Tung paper continues to generate interesting discussion in the blogosphere, in particular regarding the role of multidecadal oscillations in climate change attribution.

RealClimate

There is a new post at RealClimate IPCC attribution statements redux:  A response to Judith Curry, which responds to my post The 50-50 attribution argument.  I am ‘honored’ to receive such attention.

In the comments,  Bart Verheggen refers to this as a ‘take down.’

Here is how Gavin’s post is being billed on twitter:

Chris Colose:  @ClimateOfGavin on why @curryja’s attribution statements for 5+ years now are demonstrably wrong/illogical/irrelevant

Climate 4 Revolution: I predict more @curryja drama: more cherry-picking, frustrated teeth-gnashing, & desperate name calling of @ClimateOfGavin.

Well, I’ve read Gavin’s piece twice (now thrice), and I can’t find a single point that he has scored with respect to my main arguments.  I understand what the IPCC etc. have done – I think their conclusion is wrong and that circular reasoning is involved (Gavin doesn’t get my issue with detection). Gavin closes with:

If Judith is interested in why her arguments are not convincing to others, perhaps this can give her some clues. 

I appreciate all the effort Gavin put into this, but he misses my main points. Certainly illuminates his own thinking, tho.

Gavin and I seem to live on different planets:  I live on planet Earth observations, and Gavin lives on planet climate model.  I would appreciate some discussion that points out anything significant in Gavin’s post that refutes my arguments. This tweet sums it up for me:

Shub Niggurath: This article shows how you cannot ‘debunk’ someone if you have not understood their point. 

DotEarth

Pursuant to the Chen and Tung paper, Andy Revkin has a very good post A Closer Look at Turbulent Oceans and Greenhouse Heating.  Revkin interviews  Joshua Willis, Andrew Dessler, John Michael Wallace, and Carl Wunsch.   Interesting choices, particularly since all 4 have an observational perspective (focus on data). Excerpts (JC bold):

Josh Willis (NASA JPL)

If you are wondering whether [the ‘slowdown’] is meaningful in terms of the public discourse about climate change, I would say the answer is no. The basic story of human caused global warming and its coming impacts is still the same: humans are causing it and the future will bring higher sea levels and warmer temperatures, the only questions are: how much and how fast?

But it is not clear to me, actually, that an accelerated warming of some sub-surface layer of the ocean (at least in the globally-averaged sense) is robustly supported by the data itself.

Andrew Dessler (Texas A&M)

I think it’s important to put the hiatus in context. This is not an existential threat to the mainstream theory of climate. We are not going to find out that, lo and behold, carbon dioxide is not a greenhouse gas and is not causing warming.

I do think that ocean variability may have played a role in the lack of warming in the middle of the 20th century, as well as the rapid warming of the 1980s and 1990s.

Mike Wallace (U. Washington)

Back in 2001 I served as a member of the committee that drafted the National Research Council report, “Climate Change Science: An Analysis of Some Key Questions.” The prevailing view at that time, to which I subscribed, was that the signal of human-induced global warming first clearly emerged from the background noise of natural variability starting in the 1970s and that the observed rate of increase from 1975 onward could be expected to continue into the 21st century. The Fourth Assessment Report of the IPCC, released in 2007, offered a similar perspective, both in the text and in the figures in its Summary for Policymakers.

By that time, I was beginning to have misgivings about this interpretation. It seemed to me that the hiatus in the warming, which by then was approaching ten years in length, should not be dismissed as a statistical fluke. It was as legitimate a part of the record as the rapid rises in global-mean temperature in the 1980s and 1990s.

In 2009 Zhaohua Wu contacted me about a paper that he, Norden Huang, and other colleagues were in the process of writing in which they attributed the stair-step behavior in the rate of global warming, including the current hiatus, to Atlantic multidecadal variability.  The paper (Wu et al.) encountered some tough sledding in the review process, but we persisted and the article finally appeared in Climate Dynamics three years ago. [See Judith Curry’s helpful discussion.]

The new paper by Tung and Chen goes much farther than we did in making the case that Atlantic multidecadal variability needs to be considered in the attribution of climate change. I’m glad to see that it is attracting attention in the scientific community, along with recent papers of Kosaka et al. and Meehl et al. emphasizing the role of ENSO-like variability. I hope this will lead to a broader discussion about the contribution of natural variability to local climate trends and to the statistics of extreme events.

Carl Wunsch (MIT and Harvard)

The system is noisy. Even if there were no anthropogenic forcing, one expects to see fluctuations including upward and downward trends, plateaus, spikes, etc. If the science is done right, the calculated uncertainty takes account of this background variation. But none of these papers, Tung, or Trenberth, does that. Overlain on top of this natural behavior is the small, and often shaky, observing systems, both atmosphere and ocean where the shifting places and times and technologies must also produce a change even if none actually occurred. 

The central problem of climate science is to ask what you do and say when your data are, by almost any standard, inadequate? If I spend three years analyzing my data, and the only defensible inference is that “the data are inadequate to answer the question,” how do you publish? How do you get your grant renewed? A common answer is to distort the calculation of the uncertainty, or ignore it all together, and proclaim an exciting story that the New York Times will pick up.

JC reflections

This discussion generated by the Chen and Tung paper represents the climate blogosphere at its best – my posts, the participation of Tung (at least via email), Andy Revkin’s post eliciting significant comments from 4 mainstream climate scientists, and Gavin’s response (there are other blogs discussing this as well).

Uncertain T. Monster is very pleased by the responses by Wallace, Wunsch, Dessler and Willis (but not so pleased with Gavin’s essay).

I take issue with the following statements by Dessler and Willis:

This is not an existential threat to the mainstream theory of climate. – Dessler

The basic story of human caused global warming and its coming impacts is still the same: humans are causing it and the future will bring higher sea levels and warmer temperatures, the only questions are: how much and how fast? – Willis

I do regard the emerging realization of the importance of natural variability as an existential threat to the mainstream theory of climate variations on decadal to century time scales.  The mainstream theory views climate change as externally forced, e.g. the CO2 control knob theory.  My take is that external forcing explains general variations on very long time scales, and equilibrium differences in planetary climates of relevance to comparative planetology.  But it does not explain the dominant variations of climate on decadal to century timescales, which are the time scales of relevance to policy makers and governments that are paying all this money for climate research.

On decadal to century timescales, climate dynamics – the complex interplay of multiple external forcings (rapid and slow), the spectrum of atmospheric and ocean circulation oscillations, and interactions with the biosphere – determines variations in climate.  Until the climate community gets serious about paying attention to natural variability, we aren’t going to make much progress in understanding or predicting climate variability/change on decadal to century timescales.

 

[Jo Nova cartoon]

390 responses to “Atlantic vs Pacific vs AGW”

  1. Heh, the blogosphere is showing Gavin’s best.
    ==============

  2. ‘The concatenation of cooling phases of the oceanic oscillations’. Where’s Arthur Smith when I need him to track down my first use of the phrase?
    ==================

    • Pangloss

      “There is a concatenation of all events in the best of possible worlds; for, in short, had {many apparently unrelated and accidental events not happened} you would not have been here to eat preserved citrons and pistachio nuts.”

      http://www.pnas.org/content/103/19/7203.full

      Schmidt 2014

      A combination of factors, by coincidence, conspired to dampen warming trends in the real world after about 1992.

      http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2105.html

      • I remember this line from Gavin. I particularly loved the use of the word “conspired”.

    • David L. Hagen

      Citing Ka-Kit Tung et al, 2014, Mark Steyn reports: Settled Science Catches Up with Steyn August 27, 2014

      Professor Mark Steyn just over five years ago: . . .
      “If you mean the argument on “global warming,” my general line is this: For the last century, we’ve had ever-so-slight warming trends and ever-so-slight cooling trends every 30 years or so, and I don’t think either are anything worth collapsing the global economy over. . .
      “Then from 1940 to 1970 there was a slight cooling trend. In its wake, Lowell Ponte (who I believe is an expert climatologist and, therefore, should have been heeded) wrote his bestseller, The Cooling: Has the new ice age already begun? Can we survive?
      From 1970 to 1998 there was a slight warming trend, and now there’s a slight cooling trend again. And I’m not fussed about it either way. . . .”

      A few months after my column appeared, Climategate broke, and among the leaked emails was this one from Dr Mann’s bestest buddy, Phil Jones, head of East Anglia’s Climatic Research Unit. July 5th 2005:
      “The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. Okay it has but it is only seven years of data and it isn’t statistically significant.”

    • http://www.newclimatemodel.com/the-real-link-between-solar-energy-ocean-cycles-and-global-temperature/

      from May 21, 2008

      “PDO/ENSO together with similar cyclic oscillations in all the other oceans combine to drive global temperature up or down regardless of the level of CO2 in the atmosphere”

      and:

      “it is necessary to disentangle the simultaneous overlapping positive and negative effects of solar variation, PDO/ENSO and the other oceanic cycles. Sometimes they work in unison, sometimes they work against each other and until a formula has been developed to work in a majority of situations all our guesses about climate change must come to nought.”

  3. Gavin and I seem to live on different planets: I live on planet Earth observations, and Gavin lives on planet climate model. I would appreciate some discussion that points out anything significant in Gavin’s post that refutes my arguments.

    I’m not sure I can supply what you want, but I can point out that the “circularity” is often a function of where you are relative to the paradigm.

    Paradigms are inherently circular, viewed from the outside. From the inside, that circularity vanishes, at least some of it. Gavin is clearly arguing from within the paradigm, you are observing it from outside.

    The “Stadium Wave” hypothesis, per se, is outside the paradigm Gavin is defending, because it fails to acknowledge the fundamental assumption that the “climate” is essentially an “equilibrium system” that only leaves its “equilibrium” when “forced” by some external factor. Question that assumption and you’ve left the paradigm.

    • AK,
      Interesting point and analysis, “The “Stadium Wave” hypothesis, per se, is outside the paradigm Gavin is defending, because it fails to acknowledge the fundamental assumption that the “climate” is essentially an “equilibrium system” that only leaves its “equilibrium” when “forced” by some external factor. Question that assumption and you’ve left the paradigm”.

      Thank you for the perspective!

    • nottawa rafter

      One reason I trust Judith’s judgment so much is that she repeatedly demonstrates this ability to show detached objectivity with a touch of pragmatic circumspection. I also sense that if she were confronted with compelling evidence she was wrong, there would be no compunction in reversing her position and admitting her errors. Based on reading the work of many others on both sides of the debate, I’m not sure I can say the same for anyone else.

    • AK: “Neither physical nor social science is inherently circular.” (Popper, 1972) see link below.

      The thesis that alternative explanatory theories are paradigms between which choice is a leap of faith has some problems, described at the link below.

      http://books.google.com/books?id=D7P3X99RWvkC&pg=PA125&lpg=PA125&dq=paradigms+are+inherently+circular&source=bl&ots=bx-qGQzWxa&sig=FdO7dFlA5ld8az5LIBcWYABnOlY&hl=en&sa=X&ei=p1P_U7WxEoHMggTytoDIAw&ved=0CCkQ6AEwBA#v=onepage&q=paradigms%20are%20inherently%20circular&f=false

      • @rls…

        First of all, I regard Popper’s approach as flawed. Therefore, offering a quoted (or linked) argument that, in turn, quotes Popper counts for nothing. Unless you’re “arguing from authority”, which I assume you’re not.

        Of course, similar could be said about Kuhn’s work, although I would rather regard it as somewhat analogous to Darwin’s early work on evolution. Darwin’s theories, as originally propounded, couldn’t stand up to the masses of evidence the relevant sciences have gathered since then. Modern evolutionary theory has left Darwin far behind.

        Similarly, any sociology of Science, to be effective, must leave Kuhn’s work behind. That doesn’t make him wrong, just early and simplistic.

        That said, there’s a big difference between “physical nor social science” (sensu Popper), and a Kuhnian paradigm. Kuhn divided science into two parts: “normal” and “revolutionary”. In “normal science” the basic semantic (and physical) assumptions of the paradigm are not questioned. When they are, it becomes “revolutionary science”.

        Assuming that “science” (sensu Popper) includes both “normal” and “revolutionary” science (sensu Kuhn), there’s no inherent contradiction. “Falsification” within the paradigm is “normal” science, “falsification” that applies to the paradigm itself is “revolutionary”.

        AFAIK the primary reason practitioners of “normal” science don’t question the fundamental assumptions of the paradigm is efficiency: if you don’t take some things for granted, you’ll never get anything done. Sort of like trying to walk on a skating rink: no traction.

        When, for any reason, the collection of assumptions that makes up a paradigm comes under appropriate question, we see a potential scientific “revolution”. Such as when the Michelson/Morley experiment showed the failure of assumptions around absolute velocity in Newtonian physics.

        Applying the foregoing to “Climate Science”, we have, IMO, an old paradigm based on classical thermodynamics that predicted warming in response to increased CO2. The chaotic nature of the turbulent air/ocean circulation system was assumed to “cancel out” over space and time, due to early-mid 20th century ignorance of how complex non-linear systems actually work.

        The entire system of “GCM”-type models is assumed to provide predictions of any use whatsoever due to that assumed “cancelling out”. This is the traditional paradigm within climate science.

        But studies of chaos theory and other aspects of non-linear dynamics have shown some of the fundamental assumptions behind the traditional paradigm to be incorrect. And now, as anybody familiar with chaos theory could have predicted, we’re seeing the models fail. There are probably political biases involved in how those models were originally set up, but even without those biases, models built around the fundamental linear assumptions of classical thermodynamics cannot be expected to provide useful results. Not scientifically useful, anyway.

        The inherent circularity, then, is shown in the way the models simply produce the results they were created to produce, because those unwarranted assumptions were built into them. They are part of the paradigm. If you remove those assumptions, then there is no reason to assume that greater detail (smaller cells), more factors (land use, ice type, etc.), or any other tweaks will produce greater fidelity to what they are attempting to model.

        Once you question those assumptions, going outside the paradigm, the circularity becomes obvious. But actually, that’s true of every paradigm (sensu Kuhn). Which was my point.

      • AK, a very nice parsing of Popper and Kuhn. Had not thought about it that way before. Illuminating.

      • Thanks, Rud.

        A while back I read an interesting little book: Popper vs. Kuhn: the Battle for Understanding How Science Works by Massimo Pigliucci. I didn’t really agree with everything, but it made me furiously to think.

        Dr Curry: The following is my effort to summarize some information on one of your previous posts; I’m doing so to contribute to the post and apologize if it distorts what you wrote:

        Dr Curry shows an example of IPCC circular reasoning in Overconfidence in IPCC’s detection and attribution: Part III. She explains that “circular reasoning is a logical fallacy whereby the proposition to be proved is assumed in one of the premises”. She says this occurs in AR4 as follows:
        a. It has a premise (#3) “Time series data to force climate models is available and adequate for the required forcing input…”
        b. The premise assumes the proposition that “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
        The premise assumes the proposition in this way:
        The premise’s claim of adequate data requires a derivation of aerosol forcing, and that derivation requires an assumption of the proposition.

      • Thanks for reminding us of Part III. Sometimes I feel like I’m reinventing wheels

      • AK: Thank you. Very informative and appreciated.

      • Matthew R Marler

        another good book on those themes is “Error and the Growth of Experimental Knowledge” by Deborah Mayo.

      • Judith

        I don’t think anyone else has pointed to this interesting phrase from Gavin

        “Further, by global warming I refer explicitly to the historical record of global average surface temperatures. Other data sets such as ocean heat content, sea ice extent, whatever, are not sufficiently mature or long-range (see Climate data records: maturity matrix). Further, the surface temperature is most relevant to climate change impacts, since humans and land ecosystems live on the surface. I acknowledge that temperature variations can vary over the earth’s surface, and that heat can be stored/released by vertical processes in the atmosphere and ocean. But the key issue of societal relevance (not to mention the focus of IPCC detection and attribution arguments) is the realization of this heat on the Earth’s surface. ”

        Our knowledge of ocean temperatures is very limited and the historical Global SST largely irrelevant. Thomas Stocker confirmed that we did not have the technology to measure the temperature of the (deep) ocean. So here we are back to admitting that the Historical land record is the only one that matters and that has serious flaws as we go back in time and try to create a global version.

        Based on all this uncertainty and the blind spot we have with putting things into a historical context, it seems difficult to understand how we can attribute warming to mostly man with such certainty when measured over such a short time scale.
        tonyb

      • @rls…

        You’re welcome.

        @Matthew R Marler…

        Thank you. For anybody else interested, I found chapters 1, 2, 3, and 5 linked here.

    • Theo Goodwin

      To say that Gavin is working within a paradigm is a huge mistake. A paradigm is something like Ptolemaic astronomy with its framework of epicycles. Ptolemaic astronomy supplied the complete framework for Copernicus and maybe for Galileo. Kepler was the first to break out of it.

      To compare Gavin’s models with a paradigm is to say that the models supply the complete framework for climate science at this time. That is the huge blunder. All of us know that the models do not supply such a framework. Those who are criticizing the models and pointing to mainstream climate science’s failure to address natural variability have not departed from the overall framework of climate science.

      I think what you have in mind would be better described as Gavin’s model obsession, not his model paradigm.

      • The foundation of the paradigm is the fundamental assumption(s) regarding how models relate to reality.

        A paradigm is something like Ptolemaic astronomy with its framework of epicycles. Ptolemaic astronomy supplied the complete framework for Copernicus and maybe for Galileo. Kepler was the first to break out of it.

        Kuhn certainly used “Ptolemaic astronomy” as an example. But he also used Newtonian physics.

        And he also pointed out that his explorations of the mechanisms were not intended to apply to fields with substantial input of, and control by, money from corporate or government sources.

        Those who are criticizing the models and pointing to mainstream climate science’s failure to address natural variability have not departed from the overall framework of climate science.

        They have departed from the traditional paradigm based on assumptions only valid in linear systems. (See my post above.)

      • Theo Goodwin

        Thanks for not addressing a word that I wrote. As known since at least 1968, Kuhn used the term ‘paradigm’ with at least 100 distinct meanings. Debating what Kuhn said is not the issue here. The issue is whether Gavin and his fellows can claim to have produced the complete framework for climate science at this time. There is no question that they have not.

        In fact, Gavin’s work is not even theoretical. It is altogether practical. He is trying to recreate the salient features of climate in a computer. The man does not even work in theory. If he did he would present his theory so that we could criticize it. Instead, he offers model run after model run as substitutes for theory. No model can substitute for theory.

      • In fact, Gavin’s work is not even theoretical. It is altogether practical. He is trying to recreate the salient features of climate in a computer. The man does not even work in theory. If he did he would present his theory so that we could criticize it. Instead, he offers model run after model run as substitutes for theory. No model can substitute for theory.

        You don’t understand. Gavin is so immersed in the paradigm, he doesn’t even understand how it’s falling apart under his feet. Arrhenius presented the theory, and every model based on “equilibrium” and “forcing” is part of that paradigm.

        Gavin is like the people in the ’70’s who refused to accept plate tectonics, such as Sir Harold Jeffreys. To anyone not immersed in the paradigm, he’s arguing in circles. To those who are, he’s simply trying to explain “elementary climate science.”

      • Steven Mosher

        Paradigms are always incomplete. The work under a paradigm is to complete it.

      • Well, if you want to be picky.

      • Actually, as best I can understand Kuhn, “extend” would be a better description than “complete”.

        But I don’t personally subscribe to Kuhn’s ideas entirely. Where he has a clear dichotomy between “normal” and “revolutionary” science, I see more of a gradient. But there’s a communication issue. It’s hard enough to speak in terms of Kuhn’s terms (heh), much less some improved paradigm that I’ve made up for myself and don’t share any symbols with my readers.

        But that doesn’t change the fact that some of the fundamental assumptions behind using “GCM”‘s to predict anything about the climate are invalid for the type of system they’re modeling. Which doesn’t mean they might not be right by accident, but does mean they can’t be assumed to have any relevance without proof. Which I can’t see any way of getting.

        Reading Mayo, I’m coming to realize my own work has been somewhat “reinventing the wheel”. But only somewhat. I’m trying to invent a wheel for a Mack Truck, she was creating one for a race car. Or perhaps vice versa.

      • Theo Goodwin

        AK | August 28, 2014 at 8:26 pm |

        “You don’t understand. Gavin is so immersed in the paradigm, he doesn’t even understand how it’s falling apart under his feet. Arrhenius presented the theory, and every model based on “equilibrium” and “forcing” is part of that paradigm.”

        Now you assume that the theory is the model and the model is the theory. That remains to be demonstrated. But that cannot be done because no model can substitute for theory. Theories consist of statements, however mathematical, that are true or false. Models do not consist of statements at all. Models attempt to reproduce (the key word) the salient features of climate. The standards for judgement in each case are entirely different.

        Assuming that Gavin is “working within a paradigm” gives him far too much credit as a theorist. He might be working within a modeling paradigm, if we care to use another version of Kuhn’s idea, but that paradigm is not even a subpart of climate science and is independent of climate science.

        You write “every model based on “equilibrium” and “forcing” is part of that paradigm.” Do you really think that you can define the “paradigm” of climate science by reference to two concepts? Your suggestion is a non-starter.

      • Theo Goodwin

        Steven Mosher | August 28, 2014 at 8:46 pm |

        Your point contains a small ambiguity. It is the task of the normal scientist to complete the paradigm by applying it to factual circumstances. The concepts of the paradigm are complete. The paradigmatic case is Ptolemy’s astronomy which contained everything you could want to know about epicycles. It is true that epicycles were used in some novel ways but they remained epicycles.

  4. Judith said;

    ‘I live on planet Earth observations, and Gavin lives on planet climate model. I would appreciate some discussion that points out anything significant in Gavin’s post that refutes my arguments. ‘

    Well Judith, you used to live on ‘planet climate model’ as well, with frequent excursions to ‘planet with very little factual data’. You seem to have caught a rocket to planet sanity, a.k.a. Earth.

    Perhaps you can remember your escape route from Planet climate model and plot Gavin a course to here, where the real world is carrying on as normal in his absence.

    tonyb

    • Interesting point Tony, my research prior to about 2004 was physical process studies motivated by improving climate models (combination of small scale observations, theory, modeling). My research the past decade has been observationally based, spiced with theory (I’ve increasingly abandoned the big climate models).

      • Judith

        and when you switched from one type to the other you landed on planet sanity. Welcome!

        Tonyb

      • ” …spiced with theory”.

        Theory!? You mean, cognitive models!

        Oh noes!!

      • The escape route from planet climate model is to begin your existence on planet Earth observations, and only after that start exploring planet climate model (or at least start your life co-existing in both). I’ve always lived on both planets. Observations are the basis, the very reason of what we do and why we exist, but they are difficult, ugly, incomplete, noisy, and sometimes hard to come by. Planet climate model, on the other hand, looks deceivingly real (honestly: model results most of the time look sooooo good compared to real world observations), always smooth, always there for you, no queues, no delays, lacking the difficulties, ugliness, incompleteness and noisiness of planet Earth observations. Even when it turns out that planet climate model is totally wrong, it still looks good! That is the lure of planet climate model, it can turn into a make-believe world that often looks too good to be true (and often is too good to be true). It is hard to resist, living on the smooth make-believe planet climate model.

    • Gavin is from Mars.

    • With your escape reference I am reminded of the idea that we need to advance space programs so that in the event the Earth is threatened by an asteroid, we will have some other options such as inhabiting Mars. Call it hedging our bets. Taking two independent approaches. A lukewarmist position might be compared to an insurance policy in the works.

  5. Interesting paper relating to this post: http://link.springer.com/article/10.1007%2Fs00382-014-2168-7
    (h/t @ImaBannedd)

    • Thanks for this link

      • After a decrease of SST by about 1 °C during 1964–1975, most apparent in the northern tropical region, the entire tropical basin warmed up. That warming was the most substantial (>1 °C) in the eastern tropical ocean and in the longitudinal band of the intertropical convergence zone. Surprisingly, the trade wind system also strengthened over the period 1964–2012. Complementary information extracted from other observational data sources confirms the simultaneity of SST warming and the strengthening of the surface winds.

      I’m not sure why this is so “Surprising”. Greater SST’s in the deep tropics, without a corresponding warming close to the poles (where melting ice can absorb the heat), would presumably lead to a greater temperature difference, driving higher wind speeds.

      Of course, that’s a view based on linear models, which aren’t reliable. It shouldn’t be surprising either way: we have no way of knowing enough about the system to have confident expectations how it would react to those SST differences.

    • Matthew R Marler

      Andrew: Interesting paper relating to this post: http://link.springer.com/article/10.1007%2Fs00382-014-2168-7
      (h/t @ImaBannedd)

      Thank you for that most interesting link. If it ever comes out from behind the paywall, please let us know.

  6. LightningCamel

    Humans are causing human caused global warming.
    Who would have thought it! Did someone who calls themselves a scientist actually write that and expect it to mean something?

  7. The hiatus just shows that co2 effect is weak & pathetic in comparison to natural variability – oceans ENSO & such drive warming it seems

    • So it seems. So what drives the oceans? Themselves? Have they licenses?

      Sure, they’re bump-em cars on a vast, tame, circular track, but when they need to get down the road to an important new attractor, they need a chauffeur.
      =================

    • James Hansen told us climate sensitivity to a doubling of CO2 was 6 degrees C. IPCC AR5 can’t say what climate sensitivity is but they are 95% certain it is all my fault. Over 130 climate models can’t account for the “pause”. Existential threats indeed.

  8. Maybe what is needed is more name-calling? How about a battle between Atlantic hidey-heat “deniers” and Pacific hidey-heat “deniers”?

  9. The point is that anyone that denies that 100% of the modest warming that has occurred in the past 130 years is not due to man-made warming is simply misguided.

    Though we may be amateurs in the climate science field, some of us actually know how to analyze and interpret the physical data
    http://imageshack.com/a/img23/8418/qvv.gif

    • BTW, in the above graph, the “fluctuation” curve shown is the noise that was removed in the global warming trend. This noise consists of ENSO, volcanic events, stadium wave, TSI, wind kinetic energy, and a few strong periodic factors that align with lunar/solar motions. The real signal that is left has the unmistakeable fingerprint of the logarithmic sensitivity of temperature rise to CO2.

      And before you say that the 1910 to 1940 warming is not understood, look into the way that a logarithmic sensitivity works in relation to an increase in value. Math fail anyone?

      Take it to the bank … swish .. nothing but net.

      Gavin and the scientists before him called it right all along.

      • Whistling past the graveyard.

      • You have to predict to distinguish an air ball from net and nothing but. Sorry, it’s in the rules.
        =================

      • Sure Ole Webby – we believe ya. After all, your credibility is beyond question. sarc

      • Keep on making the own-goals … Stadium Wave, ENSO, ItsTheSun, etc.

        We are here to patiently clean up the mess.

      • Bernd Palmer

        Looks exponential rather than logarithmic to me. Upside down?

      • Steven Mosher

        “And before you say that the 1910 to 1940 warming is not understood, look into the way that a logarithmic sensitivity works in relation to an increase in value. Math fail anyone?”

        I know that in simpler fits this has been apparent as well.
        Which leads to the question: why can’t GCMs get the rise correct?

      • Unbelievable correlation!

        And I mean that…literally. It is unbelievably good. These thought experiments would be more successful if the results didn’t just absolutely reek of over-fitting.

        I suppose there was no repetitive feedback in place in all these forcing estimates until the answer was just so? One might even suggest that one started with the desired answer and worked backwards, but I wouldn’t be so bold as to suggest this.

      • delta F (1850-1950) = 5.35 * ln(310/280) = 0.54 W/m2
        delta F (1950-2010) = 5.35 * ln(400/310) = 1.36 W/m2

        It is called a sanity test in engineering – webbly fails yet again.
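
The delta F figures in the comment above come from the standard simplified CO2 forcing expression, delta F = 5.35 * ln(C/C0) W/m2 (Myhre et al. 1998). For readers who want to check the arithmetic, here is a minimal Python sketch; it is illustrative only and not part of the original exchange, and the ppm values are simply the ones quoted in the comment:

```python
# Simplified CO2 radiative forcing: delta F = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998).
# Concentration values (ppm) are the ones quoted in the comment above.
import math

def co2_forcing(c_new_ppm, c_old_ppm):
    """Radiative forcing (W/m^2) from a change in CO2 concentration."""
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

print(f"1850-1950 (280 -> 310 ppm): {co2_forcing(310, 280):.2f} W/m^2")  # ~0.54
print(f"1950-2010 (310 -> 400 ppm): {co2_forcing(400, 310):.2f} W/m^2")  # ~1.36
```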

      • Yes, it is an “unbelievable” correlation that the Length-of-Day (LOD) Stadium Wave follows the global temperature with a lag of a few years (after the aCO2 contribution is detrended).

        A team of geophysicists at JPL were the first to identify this correlation.
        [Dickey, Jean O., Steven L. Marcus, and Olivier de Viron. “Air temperature and anthropogenic forcing: insights from the solid earth.” Journal of Climate 24.2 (2011): 569-574.]

        Now one can debate the validity of this correlation or what causes it, but you can’t debate that the Stadium Wave crew want to include this factor as a natural variation.

        The moral of the story is that you can’t have it both ways, i.e. Make up the Stadium Wave story but then refuse to apply it to detrend the global warming signal.

        If you make your bed, then you must lie in it.

      • ‘Oscillations in global temperatures with periods in the 65–70-yr range were originally reported by Schlesinger and Ramankutty (1994). Subsequent observational studies and simulations with coupled atmosphere–ocean models have found similar multidecadal climatic modes, typically originating in the North Atlantic Ocean; however, the excitation source or sources of these oscillations have not been unambiguously identified (Knight 2009). Our work suggests that the same core processes that are known to affect Earth’s rotation and magnetic field (Roberts et al. 2007) may also contribute to the excitation of such modes, possibly through geomagnetic modulation of near-Earth charged-particle fluxes that may influence cloud nucleation processes, and hence the planetary albedo, on regional as well as global scales (Usoskin et al. 2008).’

        They are suggesting that geomagnetics cause multi-decadal temperature fluctuation – not that anthropogenic warming causes geomagnetic variability.

        I’d suggest it is an own goal – but the evidence suggests random potshots rather than any rational goal oriented activity.

      • Rob Ellison | August 28, 2014 at 6:39 pm |

        delta F (1850-1950) = 5.35 * ln(310/280) = 0.54 W/m2
        delta F (1950-2010) = 5.35 * ln(400/310) = 1.36 W/m2

        It is called a sanity test in engineering – webbly fails yet again.

        Aussie,
        Thanks for the OWN GOAL.

        The warming from 1880 to 1950 is about 1/3 as much as the warming after, just as the model in the above chart shows.

        Get a ruler and measure for U-self.


      • They are suggesting that geomagnetics cause multi-decadal temperature fluctuation – not that anthropogenic warming causes geomagnetic variability.

        I’d suggest it is an own goal – but the evidence suggests random potshots rather than any rational goal oriented activity.

        Aussie,
        Thanks for another OWN GOAL.

        The LOD effect is indeed subtle and correlates to approximately +/- 0.1C in temperature amplitude swings. And yes, they do think it is due to some natural variability happening somewhere in the earth’s core.

        The point is, do we think that this value will grow beyond this effective +/- 0.1 C modulation?
        And does anyone think that this exhibits a long-term trend, which would indicate a more permanent gain or loss in angular momentum?

        They already can correlate shorter-term variations in LOD very accurately to ENSO variations, so it will just be a matter of time before someone will figure out the cause and effect.

      • The regimes transitioned in 1911, 1944, 1976 and 1998.

        http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/update_diagnostics/global_n+s.gif

        But hell – go with random years if you like. It would be par for the course.

      • Don’t have much time – have to go get my cast changed. Yellow this time.

        Yes – centennial to millennial change is anticipated in line with the cosmogenic isotope record. This, it is suggested, is a good proxy for solar UV – which modulates both ENSO and the PDO from the top down. This in turn results in the biggest changes in LOD and wobble.

        http://iopscience.iop.org/1748-9326/5/3/034008

        If he ever understood where the goals were it would be a miracle.

      • Steven Mosher,

        ‘Which leads to the question: why can’t GCMs get the rise correct?’

        My pet theory is that some/many CMIP historical runs are poorly initialised with respect to real world conditions as of 1850, in particular with respect to volcanic forcing. This tends to lead to ocean cooling in the early section of the realisations, which probably didn’t happen in reality, and a consequent damping of surface temperature warming, particularly up to about 1950.

        An interesting test case is provided by the MPI-ESM-P historical run contributions to CMIP5. One historical run is initialised by their picontrol (as is the case for almost all other CMIP5 historical runs) and the other is initialised by a full simulation of the past thousand years, including volcanic (and other) forcings. The difference is particularly evident when you look at thermosteric sea level outputs. In the picontrol-initialised run thermosteric sea level drops over the first half of the run, hence ocean cooling, whereas the past1000yr-initialised run produces a slight upward trend. For surface temperatures the 1910-1945 trends are about 0degC/Decade for the former and about 0.1degC/Decade for the latter. Unfortunately there is only one realisation of each type so the surface temperature trends could just be due to difference in internal variability rather than indicative of something systemic.


      • The regimes transitioned in 1911, 1944, 1976 and 1998.

        Aussie, That is an OWN GOAL as the LOD stadium wave peaks and valleys coincide with those times. These may in fact not be regime changes but a slowly varying geophysical process causing the LOD modulation and which has an impact on the global temperature.

        Or are you against the Stadium Wave theory?

      • Matthew R Marler

        Rob Ellison: http://iopscience.iop.org/1748-9326/5/3/034008

        thank you for the link. Studies like that keep the “solar hypothesis” interesting. I sometimes feel as though I am watching a patient on life support, wondering whether she will ever be healthy, or will she continue to need life support. I can’t give up on her, but I do wish there were greater signs of strength and health than what we have had to date. Once in a while I feel a surge of hope, and then I read Leif Svalgaard’s posts over at WUWT and think to myself: “Nope, she’s not there yet.”

        I sort of hope that the sun’s low activity continues for a while and clarifies the picture; but if that causes cooling, human and animal suffering will increase.

      • Matthew R Marler

        WebHubTelescope: Quoting Rob Ellison: ”
        The regimes transitioned in 1911, 1944, 1976 and 1998.

        Aussie, That is an OWN GOAL as the LOD stadium wave peaks and valleys coincide with those times. These may in fact not be regime changes but a slowly varying geophysical process causing the LOD modulation and which has an impact on the global temperature.

        You and Rob Ellison have some pretty good debates, when you avoid ad homs and other distractions. That is definitely a point in favor of the csalt model.

      • Marler,
        They aren’t “pretty good debates” yet they may be entertaining.

        Entertaining in the way that the Harlem Globetrotters rout the hapless Washington Generals whenever the two play a basketball game.

        I basically dribble circles around the Aussie, and some find that entertaining.

    • Right, uh huh, you betcha,

      https://lh6.googleusercontent.com/-chSdgzrNv_c/U_8oPA1cvJI/AAAAAAAALXs/yRRt19muWrw/w807-h523-no/1.1%2Bversus%2B2.2%2Bco2%2Bref.png

      You are assuming that the 1900 conditions are “normal” but based on SST, 1900 was about 0.3C lower than pre 1900 conditions. That is about 0.3C of uncertainty with about 0.9 C of total temperature variation which would be about 1/3. Can you say tercile?

      • Cappy, do you dispute the Stadium Wave?

        And why must you deceive by plotting an ocean-only curve?

        Do you have a problem with the truth?

        I know you make a living by throwing fish chum from your dinghy, but the majority of humans don’t live on water, doncha know.

      • Webster, “And why must you deceive by plotting an ocean-only curve?”

        That ocean-only curve represents the majority of the Earth’s surface and the vast majority of the Earth’s surface heat capacity. Since the natural variability is most commonly related to ocean (pseudocyclic) oscillations, why not use the oceans as a reference? The land “surface” temperatures btw tend to amplify the ocean fluctuations, and that would be in both directions.

      • Cappy,
        Thanks for owning up to an OWN GOAL.

    • You are truly a genius. You remind me a lot of Ignatius J. Reilly.

      When a true genius appears, you can know him by this sign: that all the dunces are in a confederacy against him.
      Jonathan Swift

      Thanks so much for keeping us all on the straight and narrow path to wisdom.

    • Matthew R Marler

      WebHubTelescope: Though we may be amateurs in the climate science field, some of us actually know how to analyze and interpret the physical data

      I think it is fair to say that a lot of readers agree that you have a model with a high correlation to the data from which the parameters have been estimated.

    • John Vonderlin

      WHT,
      Upthread you wrote: “The point is that anyone that denies that 100% of the modest warming that has occurred in the past 130 years is not due to man-made warming is simply misguided.”
      Was this a Freudian Slip? It seems to say that anyone that denies… (the proposition)… is simply misguided. And the proposition seems to be “100% of the modest warming that has occurred in the past 130 years is not due to man-made warming.” If 100% of the warming is not man-made as the proposition posits, then none is man-made, all must be natural. And that would leave you claiming anyone who denies that all warming is natural is simply misguided.
      I usually find your postings well-written, whether I agree with your assertions or not. Did I misunderstand this one or were you violating the admonition, “Never use no double negatives, no how, no way?”

    • Matthew R Marler

      WebHubTelescope: The point is that anyone that denies that 100% of the modest warming that has occurred in the past 130 years is not due to man-made warming is simply misguided.

      There’s that word [deny] again. I question the science behind a claim that all of the warming of the past 130 years is due to anthropogenic CO2. It is not “simply misguided” to note the liabilities in the scientific case.

      Here is a copy of an interchange between us that was on Real Climate:


      Part of the surface warming of the period 1978-1998 was due to decreased cloud cover.

      Comment by Matthew R Marler — 28 Aug 2014 @ 5:30 PM

      Note how M.Marler makes a pure assertion with absolutely nothing to back it up. He conveniently ignores that decreased cloud cover could be a result of the warming, executing a cause/effect bait-and-switch on us.

      Comment by WebHubTelescope — 28 Aug 2014 @ 7:48 PM

      The change in cloud cover that I referred to has been published. You did not dispute it, but if you like I can retrieve it. I am happy to agree with you that either increased or decreased cloud cover could be a result of warming, or of CO2. It is another feature of the unpredictable changes in the water cycle and cloud cover that make prediction of the future of climate so uncertain.

      My reply at Real Climate went into moderation and seems to have disappeared. Hence my posting it here.


      • My reply at Real Climate went into moderation and seems to have disappeared. Hence my posting it here.

        Every comment is moderated by a human before it appears at RC, yet people have to sleep. It’s not like the klown show here where every krazed theory is given an immediate pass.

      • Matthew R Marler

        update: my comment to WebHubTelescope has emerged from moderation and now appears at the Real Climate thread initiated by Gavin Schmidt’s critique of Judith Curry.

      • It is instructive to read the Real Climate comment thread in the context of scientists that actually know the discipline inside-and-out, unlike the low-information skeptics who reside here.

      • vibes from the peanut gallery

        WHUT’s ongoing indigestion continues:

        “It is instructive to read the Real Climate comment thread in the context of scientists that actually know the discipline inside-and-out, unlike the low-information skeptics who reside here.”

        WHUTku

        …and yet you linger.
        Must be a secret desire–
        ‘Oh, help me convert!’

      • I linger here cuz I collect the OWN GOALS from the busy beavers who think they are trying to do actual debunkking.

        lots of stuff found here, LOD, stadium waves, ENSO, etc

        The more they try, the harder they fall.

        keep em coming, I know you can do it.

    • “The point is that anyone that denies that 100% of the modest warming that has occurred in the past 130 years is not due to man-made warming is simply misguided.”

      Well, I don’t know about “denying” anything, but I certainly tend towards the belief that 100% of the warming over the last 130 years is not due to man-made warming.

      Glad to know that WebHubbleTelescope doesn’t think I’m simply misguided. Wait a minute…I guess I’m not too worried about that after all.

  10. Gavin often describes Judith’s remarks as “confused”, but I am struggling to understand some of his. What does he mean by these:

    The statements that ended up in the IPCC SPMs are descriptions of what was found in the main chapters and in the papers they were assessing, not questions that were independently thought about and then answered. Thus while this dichotomy might represent Judith’s problem right now, it has nothing to do with what IPCC concluded.

    ??? Well, if they were thought about and answered, why were they not in the SPM? I don’t understand why there should be a dichotomy at all. I am very confused by this statement and I have read it several times. I think I must be missing something.

    assuming that a statement about where the bulk of the pdf lies is a statement about where it’s mean is and that it must be cut off at some value (whether it is 99% or 100%). Neither of those things follow.

    I don’t understand why he thinks Judith thinks this, or why it is unreasonable. My understanding of PDFs is that they give a visual idea of where the greatest likelihood lies within a given probability range. Why does he think Judith thinks it should be “cut off”? I can’t work out what he thinks Judith thinks, and what the problem is with it.

    Is expert judgement about the structural uncertainties in a statistical procedure associated with various assumptions that need to be made different from ‘making things up’? Actually, yes – it is.

    Well it doesn’t sound like it to me! “Expert Judgement” being required to assess “structural uncertainties” (like what?) associated with “various assumptions“! That sounds a lot to me like “making things up”. Which, I hasten to add, I really don’t mind provided it is in the context of something acknowledged to be deeply uncertain. I do want to know what “experts” think but I don’t want to be fooled by their overconfidence.

    I understand his point about greater attribution uncertainty over longer time periods, because as you go back in time the data are less reliable, but the corollary is true too: the IPCC statements were from a relatively long period (i.e. 1950 to 2005/2010). Judith jumps to assessing shorter trends (i.e. from 1980), and shorter periods obviously have the potential to have a higher component of internal variability (see the illustrative sketch just after this comment).

    I have more questions about his essay I would like to clarify, but I’ll put them in a different thread.
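
On the point above about shorter periods carrying a larger component of internal variability, here is a quick illustrative simulation (my own construction, with arbitrary AR(1) noise parameters; nothing in it is taken from Gavin’s post or the IPCC). It fits least-squares trends to synthetic noise-only series and shows how much wider the spread of fitted trends becomes as the window shrinks:

```python
# Illustrative only: spread of fitted linear trends in synthetic AR(1) "internal
# variability" series, for a long (60-yr) vs short (15-yr) window. The AR(1)
# coefficient and noise amplitude below are arbitrary choices for the sketch.
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi=0.6, sigma=0.1):
    """Generate an AR(1) series of length n (annual steps, arbitrary units)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def trend_per_decade(x):
    """Least-squares trend of x, expressed per decade (annual data assumed)."""
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]
    return slope * 10.0

for window in (60, 15):
    trends = [trend_per_decade(ar1_series(window)) for _ in range(2000)]
    print(f"{window}-yr window: std of noise-only trends = {np.std(trends):.3f} per decade")
```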

    • He should send his thinking to the EPA. They are in need of expert opinion.
      ============

    • Agnostic,

      ‘Well if they were thought about and answered why were they not in the SPM?’

      It wasn’t a question addressed in the SPM nor the report, that’s his point. He goes on to state why it’s really not a good question to ask.

      ‘I don’t understand why there should be a dichotomy at all.’

      You simply have to read the statement to which it was responding. The dichotomy referenced is the one Judith set up: warming mostly anthro/warming mostly natural.

      ‘I don’t understand why he thinks Judith thinks this, or why it is unreasonable.’

      The IPCC statement says that 95% of the PDF lies above 50% warming contribution. Judith has repeatedly made statements which show she understands the IPCC statement to mean 95% likelihood that anthropogenic warming contribution is 51-100% (or 51-99%) of observed. This is incorrect, since there’s no reason to set an upper bound for contribution at 99 or 100% of warming. And it’s clearly incorrect because the IPCC state their best estimate for contribution is about 100%. In other words the mean/median of the pdf lies at (actually, just above) 100% of warming. It’s an error of interpretation on Judith’s part.

      • Ahh . . . the IPCC needs to rewrite the dictionary on the meaning of ‘more than half’. Half is 50% of a whole (100%). More than half does not imply something > 100% outside of the IPCC dictionary.

        How would you define ALL of the warming? A normal dictionary would say 100%, but that would make “all” less than ‘more than half’.

      • C’mon Judith,

        It says ‘half of the observed warming’.

      • The higher the sensitivity of temperature to CO2, the colder we would now be without man’s contribution.
        ================

        Paul, thank you very much for your reply and attempt at clarification.

        I think I understand the point you (or rather Gavin) are making but feel that the difference is specious. Presumably, by >100% anthro, we are talking about gross contribution from CO2 and its associated feedbacks. Therefore if the contribution to the warming is greater than 100%, the remainder are natural factors that reduce the warming to give us the net.

        But this implies that the other more intuitive understanding at the heart of attribution – the extent of anthro warming in the late 20th C expressed as a percentage – is 100% anyway.

        And that’s how it’s interpreted by virtually everyone; the word “most” meaning any value over 51%. And regardless, even allowing for this technicality, it does not address Judith’s main point which is that in light of the hiatus and the divergence from models used to determine the PDFs, it’s pretty hard to see what the justification for increased confidence for 100% anthro warming in the late 20th C is, whether it is the classical ordinary interpretation or a technical one based on reading a PDF.

        But you have furthered my understanding of this issue – thanks!

      • Is this meant to be a literary critique or a real scientific analysis? Even if you believe ‘more than half’ to be ambiguous they clearly state in the next sentence a best estimate of about 100%, which should remove any doubt that >100% is possible here.

        Is your confusion due to the multiple usage of percentages? Is it easier to understand to say: ‘Based on observed warming of 0.65ºC from 1951-2010, there is 95% likelihood that anthropogenic warming contribution is > 0.325ºC, with a best estimate of 0.7ºC’? Does it make things clearer to say there is also 95% likelihood that anthropogenic warming contribution is < 1.075ºC (assuming normal distribution)?
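
To make the numbers in the comment above concrete: assuming, as the comment does, a normal distribution for the anthropogenic contribution with its median at the 0.7 ºC best estimate and its 5th percentile at half of the observed 0.65 ºC warming, the quoted percentiles follow directly. A minimal sketch (illustrative only; the normality assumption is the commenter’s, the code is not part of the original thread):

```python
# Illustrative reconstruction of the percentiles quoted above, assuming a normal
# distribution for the anthropogenic contribution to 1951-2010 warming.
from scipy.stats import norm

observed = 0.65       # degC, observed warming 1951-2010 (as quoted above)
best_estimate = 0.70  # degC, median anthropogenic contribution (as quoted above)

# Choose sigma so the 5th percentile sits at half the observed warming (0.325 degC),
# matching the "more than half, with 95% likelihood" statement.
sigma = (best_estimate - 0.5 * observed) / norm.ppf(0.95)
dist = norm(loc=best_estimate, scale=sigma)

print(f"5th percentile:  {dist.ppf(0.05):.3f} degC")   # ~0.325
print(f"95th percentile: {dist.ppf(0.95):.3f} degC")   # ~1.075
print(f"P(anthro contribution > observed warming) = {dist.sf(observed):.2f}")  # just above 0.5
```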

      • This whole subthread expects precision that is impossible. You can’t argue these kinds of numbers when you really can’t decide between all or none.

        Personally, I’m willing to attribute some to man, and be grateful for it. Chilling is ridiculously more socially damaging than warming. And there is a radiative effect; I could demonstrate it in the lab.

        Well, I might need a little help. Let me see if Bill Nye is available.
        ==================

      • Agnostic,

        I think the simplest way to understand the IPCC statements is to look at the first figure in Gavin’s post. You can see almost all the area under the curve is to the right of 0.5 (50% warming contribution), hence 95% likelihood of >50%. You can also see the peak of the curve is at about 1 (100% warming contribution) and the median is about 1.1, hence best estimate of ~ 100% of the warming.

        The best estimate and >50% statements are simply two expressions of the same pdf. This is why Judith’s statement…

        ‘So, I interpret this as scything that the IPCC’s best estimate is that 100% of the warming since 1950 is attributable to humans, and they then down weight this to ‘more than half’ to account for various uncertainties. And then assign an ‘extremely likely’ confidence level to all this.’

        …clearly misunderstands the IPCC statements (even without the ‘scything’ :) ).

        ‘it does not address Judith’s main point which is that in light of the hiatus and the divergence from models used to determine the PDFs, it’s pretty hard to see what the justification for increased confidence for 100% anthro warming in the late 20th C is’

        Attribution exercises have been run for time periods of 1950-2000 and 1950-2010. I could be remembering wrong but I don’t think it made much difference. I think there are some potentially legitimate issues with attribution, or at least some forms of it. For example, the use of a single scaling for all other anthropogenic forcers doesn’t sit well with me given the combination of spatially-heterogeneous large (>1W/m2) positive and negative factors – but really it needs to be shown why such a thing is an issue, and what effect it might have. There’s little more than conjecture being offered here.

      • PaulS:

        warming. And it’s clearly incorrect because the IPCC state their best estimate for contribution is about 100%.

        You admit that the predominantly subjective 99-100% number is a “best estimate” (in plain English, they pulled it out of thin air, so it carries almost as much weight as thin air). You also allow that there is considerably more uncertainty in the underlying objective data, which are also provided in their report.

        So it’s rather curious that you argue we should attach more value to a subjective number rather than to the underlying objective numbers on which it was based, and that it would be “clearly” incorrect to give more weight to the objective numbers.

        Since the report contains both pieces of information—the objective range and a subjective estimate—the normal practice in science would be to focus on what is actually shown objectively to be true, and this seems to be what Judith is doing. I’d say there’s a lot more room for criticizing you and Gavin for quoting a statement of finding, and putting so much weight on it, when it is not objectively supported by the actual data provided in the document.

      • Carrick,

        I’m genuinely struggling to understand what you’re talking about here, but at a guess I think you’re suggesting the ‘best estimate’ of ~100% was arrived at in a separate process from the >50% statement. This is not the case. They are simply two properties of the same probability distribution, produced via the same process: 95% of the distribution lies above 50% of observed warming, the median of the distribution lies around 100% of observed warming. In chapter 10 they also state:

        ‘Based on the well-constrained attributable anthropogenic trends shown in Figure 10.4 we assess that anthropogenic forcings likely contributed 0.6°C to 0.8°C to the observed warming over the 1951–2010 period (Figure 10.5).’

      • Paul S – thank you for your clear articulation. I seem to be hearing a conflation of semantic and methodological issues here.

        On the semantic side, you are arguing (persuasively) that Judith is misunderstanding the language describing the percentages under the curve, especially in treating 100% as an upper limit when it is not. Anthropogenic warming could be +150% of observed warming, with natural variability contributing -50%.

        On the methodological side, the assumptions that go into the PDF graph/plot are given percentages of likelihood which are based on statistical methods that are being challenged as inappropriate to the data.

        When you conflate the two, you can (seemingly legitimately) argue against Judith’s semantics without dealing with the methodological underpinnings of her critique.

        Do you make this distinction? Do you address the statistical methodology critiques that have been leveled at it anywhere that I can access to read your (The Team’s) response?

      • Thanks for the explanation, Paul S. Yes indeed, I was confused.

        “Estimates” have a specific meaning to me… they are based on objective data but also include subjective judgements.

        If I understand what you are saying, the 100% you are describing as a “best estimate” is just the median value of the distribution converted to a percentage. Whether this is a plausible result is another question (outside of my expertise to judge), but I think I understand your point now.

        I didn’t want to discuss the fact that the range can extend beyond 100% because I was afraid it would be a point of further confusion. (And yes, the use of percentages does lead to confusion on this thread.) Of course I understand that anthropogenic warming might be masked by natural variability or forcings, so expressed as a ratio of AGW warming to measured warming, the percentage can be greater than 100%.

      • > Is this meant to be a literary critique or a real scientific analysis?

        Something in the middle of what can be called auditing. Attribution of the anthropogenic factor to auditing is an open problem. Preliminary analysis reveals it’s about 100%, but that excludes the algorithmic processes that may be hidden in the clouds.

        So I’d say it’s 50-50, by which I mean 66-33.

  11. I agree that the comments by Willis and Dessler that the consensus on AGW/CAGW does not need to be modified don’t really follow from their other comments. Perhaps they can’t bring themselves to admit it yet or perhaps they are just being politically cautious so they don’t get a nasty tweet from Mann that might impact their funding. In a few years, much of the nastiness and worry about one’s grants if one speaks their mind will fade, much for the better.

    I am always amazed by two types of statements from what seem to be people with a scientific background. The first is that a 20-year pause doesn’t buy any more time (maybe 20 seconds, it seems, for some). This makes no sense, as it really gives you 20 years. And if some of the arguments to explain the pause involve natural cycles, then you get the half-cycle time as a pause in each and every cycle. And the half-cycle is 25 to 40 years historically.

    The second is the arguments related to ocean heat. If the argument is that the heat is going into the deep ocean, you may have hundreds or thousands of years, as there is a lot of water in which to disperse a few hundredths of a degree. And you can’t say the danger from the heat is still as bad, since the atmosphere and surface temperatures did not go up – only the deep ocean, by a tiny amount. It makes no sense to argue this dispersed heat will eventually come out – at least not on time scales that matter to humans. If the surface waters increase by 0.5 C that can matter, but not deep water by 0.01 degrees.

    If the sun plays a bigger role and aerosols a smaller one, obviously this impacts the effect of CO2 and the models will need to change.

    • Regardless of the reason(s) for a model not matching observed conditions: if a model fails to match observed conditions reasonably well, it requires revision.

  12. Hundreds OR thousands I meant to type

  13. So the models are whacko. It would be highly useful to figure out why and how that happened, but I suspect we’ll never know. Eventually they’ll probably get better as researchers with eyes on a better main chance gradually purge the nonsense. I very much doubt that we’ll eventually understand that they are limited to the kind of results that Callendar’s model had. What a wasteful farce.

    Yeah, it’s tragic, too. I’m laughing through the tears.
    ============

  14. Global Warming is Dead

    God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?

                — Nietzsche, The Gay Science, Section 125, tr. Walter Kaufmann

  15. I got bored. Did Gavin ever say it’s only a flesh wound? I suppose I’ll have to read it all to see if that bridge was crossed.

    • It’s worth reading. Some of it I can’t make out, but he at least makes some points that help illuminate his thinking. I can’t see that they really “take down” anything Judith has said, but I am going to give it a red hot go and try and understand it better.

      • I appreciate all the effort Gavin put into this, but he misses my main points. Certainly illuminates his own thinking, tho

      • Glad I am not the only one who found some of Gavin’s off point thinking unintelligible. Must have had a lot of homogenization disclosure concerns on his plate at the same time as this ‘climate team reply’. That does tend to muddle up stuff.

    • If you love what you are reading about, it will take you where you want to go.
      Each time.

    • “Did Gavin ever say it’s only a flesh wound.” Even flesh wounds have to heal or eventually they metastasize and overcome.

    • I think the Monty Python fans got the joke although I should have said it’s just a flesh wound.

      • Sure did anyway. The youtube version is amongst my permanent archives, already used multiple times in business discussions (albeit subtly).

  16. First of all, Tung and Zhou assumed that all multi-decadal variability was associated with the Atlantic Multi-decadal Oscillation (AMO) and did not assess whether anthropogenic forcings could project onto this variability. It is circular reasoning to then use this paper to conclude that all multi-decadal variability is associated with the AMO.

    Is it the case that they assumed all multi-decadal variability was associated with the AMO? Or is it the case that the variability could have been AMO alone? That would then speak to the magnitude of its effect, not to whether it explains the observations and thus invokes circular reasoning.

    This point is not an argument for any particular attribution level. As is well known, using an argument of total ignorance to assume that the choice between two arbitrary alternatives must be 50/50 is a fallacy.

    I don’t understand why this reasoning is fallacious. If you say “well, we think we have a good handle on most of the forcings and feedbacks, but there are probably some we haven’t accounted for…” then within your estimate and its associated uncertainty, you have room for further improvement and advancing science. It seems to me that what Judith is advocating is allowing the climatic factors unaccounted for in the overall budget to be considered in the attribution level. It would be easier to understand if Gavin were to assert (as he sort of partially does throughout) that all significant forcings are accounted for and anything left over can’t make much of a difference. Then I wouldn’t be confused by his characterization of Judith’s view as “fallacious”. Is this what the real argument is about: Gavin, on the one hand, thinking we know enough, and Judith thinking that we don’t?

    However, none of these issues really affect the attribution argument because a) differences in magnitude of forcing over time are assessed by way of the scales in the attribution process, and b) errors in the spatial pattern will end up in the residuals, which are not large enough to change the overall assessment.

    I have read and re-read this, but I can’t get a handle on it. What is “way of the scales in the attribution process”? And what “errors” in what “spatial pattern” and why would they not be large enough to change the assessment?

    It is worth pointing out that there can be no assumption that natural contributions must be positive – indeed for any random time period of any length, one would expect natural contributions to be cooling half the time.

    I am not sure this is phrased in the way Gavin really intended. “Any” random period of time is far too imprecise. But again the corollary is true: there should be no assumption that natural contributions are negative either. Furthermore, if those natural variations affect trends over centennial time-scales, then they will appear to us as trends on the decadal time scale. This is the point regarding the MWP and coming out of the LIA, so it is not at all clear why we should “expect natural contributions to be cooling half the time”. I would simply like a more convincing explanation of the processes involved at those time-scales in order to understand what proportion of the recent trend is likely to be anthropogenic. I think that’s Judith’s point as well.

  17. Forgive the question from a non-specialist, but as I understand it the overall global climate has more or less averaged out over thousands of years; that is to say, setting aside local changes that we made, places where the neolithics grew wheat would also be suitable for growing wheat today. Glaciers that early Aztecs saw on their mountains would still have been in more or less the same place around 1900. At least science popularizers like Jared Diamond and others leave you with that impression.

    It seems like, whether climate has ‘paused’ or not, the natural variation has trended in a specific direction in the last hundred years, which I take from things that integrate over a long time, like those Aztecs’ glaciers or where you can or can’t grow wheat. California is looking like a bad bet lately (not that wheat specifically is a major crop there, of course) and formerly frozen parts of Canada are looking increasingly good.

    Any corrections to what I am getting wrong very much appreciated.

    • OK, I see your problem. Climate constantly changes, sometimes dramatically.

      So, now you can go on about your business.
      =================

    • There is a spectrum of time scales for natural variability. Looking at the Holocene (current interglacial), we have seen some major fluctuations. The key issue of interest is climate variations over the past 2000 years; this is the heart of the so-called hockey stick debate. There have been huge regional climate fluctuations; it is not clear to what extent these are global. Natural fluctuations don’t quite average out (e.g. solar, ocean circulation regimes) because the system is nonlinear and chaotic and can be ‘poked’ into shifting through an interaction of external forcing (natural or anthropogenic) and the circulations of atmospheres and oceans. Sea ice and ice sheets have short and longer term feedbacks on all this.

      My point is that we don’t sufficiently understand all this; I’m not buying the simple story of the CO2 control knob on climate on decadal to century timescales.

      • The Holocene, with its substantive excursions, is both a significant constraint on the models (it is important to at least get the sign correct), which suggests bias, and a reminder of the problems with proxy reconstructions.

        http://www.pnas.org/content/111/34/E3501.abstract

      • The problems with the proxy reconstructions are legion.

      • You really have to look at the proxy reconstructions of ENSO.

        [1]S. McGregor, A. Timmermann, and O. Timm, “A unified proxy for ENSO and PDO variability since 1650,” Clim. Past, vol. 6, no. 1, pp. 1–17, Jan. 2010.

        These proxies are remarkable in their ability to align with the modern-day ENSO measurements. And since ENSO is a large factor in the natural variability in temperature, you may want to point out what exactly is wrong with these proxies.

      • Thank you Judith. If I had to distill/summarize, we are not sure whether the last hundred years is unusual because we cannot reconstruct the last several thousand years with enough confidence. That makes sense. I guess the big question is how much we can ‘poke’ or have ‘poked’ recently. I understand people disagree about that.

        In trying to reframe the last sentence, are you saying that the CO2 control knob does act on longer time scales?

      • Yes, the CO2 control knob does act on longer time scales; it is slow acting. CO2 is also a big factor as to why Earth’s climate is different from Mars’ (well, of course there is that planet-sun distance thing).

        This is my whole point about ‘detection’. Without understanding multidecadal to millennial variability, we can’t tell how much of what we have been seeing since 1980 is caused by humans (even the IPCC and Gavin Schmidt say that the signal was not detected above the noise of natural variability prior to 1980).

        All this seems like common sense, but I get labeled as a heretic, denier, etc. for bringing up these topics for discussion.

      • CO2 is also a big factor as to why earth’s climate is different from mars (well of course there is that planet-sun distance thing).

        Don’t forget the lighter gravity and lower escape velocity.

      • On long time scales (avg 800 years) CO2 level follows temperature rise. On even longer time scales, temperature drop follows CO2 rise. When will they ever learn? When will they ever learn?
        =================

      • –AK | August 28, 2014 at 10:31 am |

        CO2 is also a big factor as to why earth’s climate is different from mars (well of course there is that planet-sun distance thing).

        Don’t forget the lighter gravity and lower escape velocity.–

        I don’t see why Mars is different from Earth as it relates to CO2 – other than that Mars has far more CO2 in its atmosphere compared to Earth. Earth has a few trillion tonnes and Mars has about 25 trillion tonnes of CO2 in its atmosphere.

        In terms of climate, I think the most significant difference between Mars and Earth is that Mars lacks surface water.
        And a major reason Mars lacks water at the surface is that Mars is volcanically dead compared to Earth. If Earth had been as geologically dead as Mars for as long as Mars has been dead, then Earth would also not have oceans of water at the surface.

        Another major difference between Earth and Mars in terms of climate is Earth’s tectonic plate movement.
        This is fairly relevant considering that plate tectonic theory is rather new compared to the rather old greenhouse effect theory. One could say theories about the significance of CO2 [once thought to be related to ice ages] are almost as old as the “social theory” of Marxism.

      • @gbaikie…

        Mars can’t keep a permanent atmosphere due to its low escape velocity. Anything you see is in transit to interplanetary space.

      • –@gbaikie…

        Mars can’t keep a permanent atmosphere due to its low escape velocity. Anything you see is in transit to interplanetary space.–

        It is probably true that Mars loses more atmosphere than it would if it had the same gravity as Earth. Both Earth and Mars lose some of their atmosphere to space, and Mars’ lower gravity would cause more loss.
        [Though I would add that both Earth and Mars also gain material/atmosphere from the space environment.]
        Some theorize that if Mars had a magnetosphere it would retain more of its atmosphere. After Mars became geologically dead [a couple of billion years ago], it lost its magnetosphere, and this was a factor in the greater loss of Mars’ atmosphere.

        What is fairly obvious is that the currently thin Mars atmosphere does not lose much.
        And because Earth has more atmosphere, Earth may lose as much [in tons per year] as Mars currently does. If Mars and Earth had the same amount of atmosphere, then it seems Mars would indeed lose more than Earth – as a guess, if Mars had much more atmosphere, it might lose 5 or even 10 times more than Earth. But as far as I know there are no very precise measurements of either Mars’ or Earth’s losses [or gains] of atmosphere.
        Also in terms of guesses, if Mars had 100 times more atmosphere, it would retain most of it for a million years – though how much leaves in a billion years could amount to half or more of that atmosphere; if it loses 1% of its atmosphere in 1 million years, it loses a lot in 1 billion years.
        Though such an idea of losing 1% per million years at some constant rate is probably unrelated to reality – the loss [or gain] is probably related to varying unusual events rather than some kind of slow and constant leakage.
        Fortunately we are in the process of sending a spacecraft to Mars which may provide more accurate measurements – and it will be at Mars when a comet passes near it, thereby providing an event related to atmospheric losses which might be measured. That’s the MAVEN spacecraft, which arrives in September 2014.

      • If you look at where the signal has been detected – at least, detected if the blog post by Ed Hawkins on this topic depicts it correctly – and compare with the literature on the effects of the AMO, it is quite possible that the only thing they detected was the AMO going positive and all the rest is still just at noise level.

  18. I looked at Gavin’s post and have now managed to stop laughing.
    He starts with a re-iteration of the claim and then says
    “The basis of the AR5 calculation is summarised in figure 10.5”. So we are supposed to look at that graph, look at the error bars on it, and accept it!
    Clive Best and I have pointed out that fig 10.5 makes no sense, since the uncertainty bars for total anthro are much smaller than those of its components (GHG and other anthro) – unless the IPCC is following circular reasoning, starting from the assumption it is trying to prove.

    • A priori, it could potentially be an interesting point, but doesn’t really have anything to do with Judith’s critique or Gavin’s critique of Judith’s critique.

      As to the error bars, consider the reason for the variance in the GHG and OA temperature responses. Probably the biggest is sensitivity, but sensitivity would apply nearly equally to cooling and warming anthropogenic factors. That means a model with a high-end GHG response (about 1.3ºC according to Figure 10.5) is also likely to have a high-end OA response (about -0.6ºC). The net total is 0.7ºC. Does that make more sense to you?
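      A toy Monte Carlo illustration of that cancellation (my own sketch, with made-up central values of 0.9ºC GHG warming and -0.25ºC OA cooling and an assumed spread in a shared sensitivity factor; it is not the AR5 calculation):

      ```python
      # If GHG and other-anthro (OA) responses scale with a shared sensitivity,
      # their errors are anticorrelated and the total anthro spread can be
      # narrower than the GHG-only spread.  All numbers here are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      sensitivity = rng.normal(1.0, 0.3, 100_000)  # common scaling factor (assumed spread)

      ghg = 0.9 * sensitivity    # GHG-only warming, degC
      oa = -0.25 * sensitivity   # other-anthro cooling, degC
      total = ghg + oa           # total anthropogenic warming

      for name, x in [("GHG", ghg), ("OA", oa), ("Total", total)]:
          spread = np.percentile(x, 95) - np.percentile(x, 5)
          print(f"{name:6s} mean {x.mean():+.2f} degC, 5-95% spread {spread:.2f} degC")
      ```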

    • I am rather convinced that Fig 10.5 is simply wrong – not just its error analysis. It is based on warming just during the period 1950 to 2010 – ‘observed’ as 0.65±0.05 C with natural variation 0.0±0.1 C. The natural variation should instead have been 0.2±0.05 C, leaving a remnant ‘observed’ component of 0.45±0.05 C, based instead on the 1940-2000 period. This is because 1940 and 2000 are peaks in the cycle, and only then does the natural component cancel.

      Fig 10.5 should really have looked as follows

      http://clivebest.com/blog/wp-content/uploads/2014/08/new-Fig10-3.png

      CMIP5 models are running about 50% too warm, limiting transient climate sensitivity to about 1.5 C.

  19. The problem with looking at global temperature records is that doing so removes all of the regional variations. I don’t know how many images I can include in one reply, so I’m going to do them one at a time.
    These are day-to-day changes in min or max temp, averaged together by year for the labeled areas. What they show is that max temps average near zero change, while min temps change a lot regionally. The only thing that can do this is changes in ocean temperatures influencing land temps. This is definitive proof CO2 is not doing anything globally to surface temps, nothing.
    Polar regions
    http://www.science20.com/files/images/N-S%20Poles_0.png
    The interesting thing here is it shows the ping pong between poles.

    • North America and Eurasia, again the oceans were cold at different times
      http://www.science20.com/files/images/Northern%20Continents_2.png

      • Mi Cro, you did the scientific equivalent of throwing your garbage into the street.

        Are you that incompetent that you can’t even put scales or reference points on your axes? What does min and max even mean in this context?

        Who knows what other crimes against data analysis are committed in making these charts.

        But then again, no one really cares.

    • Southern Hemisphere, different continents, same flat max temp, Min temps in South America and Africa both dropped during the 80’s and part of the 90’s, same as the South Pole, different from the Northern Hemisphere.
      .http://www.science20.com/files/images/Southern%20Continents_1.png

    • All together.
      http://www.science20.com/files/images/Continents.png

      I think there’s a . in front of the southern hemi image, maybe that’s why it didn’t show up.

    • WebHubTelescope (@WHUT) commented on Atlantic vs Pacific vs AGW.
      in response to Mi Cro:

      Are you that incompetent that you can’t even put scales or reference points on your axes?

      The charts are marked in Years and Degree’s F

      What does min and max even mean in this context?

      I’ve mentioned this enough times, I thought you already knew. I subtract yesterday’s min (or max) temp from today’s min (or max) temp, and then average them in this case by year.

      Who knows what other crimes against data analysis are committed in making these charts.

      I averaged these mn or mx day over day differences together by area. I do filter out 6-8 million records out of 122 million because they just don’t have enough samples per year.
      Follow the URL in my name, all of my source code is there, plus the actual data.
      Here is the Excel file with these charts and the data used to produce them.

      http://sourceforge.net/projects/gsod-rpts/files/Reports/Charts/Combined.xls/download

      (sorry it ended up in the wrong place).


      • Mi Cro | August 28, 2014 at 10:35 am |

        The charts are marked in Years and Degree’s F

        These charts are not “marked” “in Degree’s F”

        You had to tell us that, because otherwise we would have assumed that the numbers were 9/5 as big as they are.

        And we still don’t know whether any of what you are doing has any connection to reality.

        The lines on the chart are broken up and disconnected which gives it the appearance that you are trying to hide something else.

        See what I mean by incompetence?

      • They are broken because there are no samples for that area and period of time.

      • So you are confirming the fact that these are garbage graphs?

        If the sample space is so limited that you have gaps, no wonder the noise dominates the time series profile.
        Yet you also say that you are using greater than a million records. That is quite a disconnect.

        What exactly are you trying to show except to sow FUD?

      • The surface record has poor coverage in some periods in some areas. I had a copy of HadCRUT3, I believe, and it was made from the same stations. This is NCDC’s data set. It also has 122 million station records. People should get a look at what the temperature series are made from.
        I generate station record counts for each report, which are used to make these graphs.

  20. It is fascinating to me that Andy Revkin would write a summary about these complexities re global warming shortly after the NYT article announcing that Obama was going to do a UN climate treaty so he could bypass the US Senate.

    I had previously done a NYT search for “Chen and Tung” and found nothing. Maybe since the BBC and Economist had already reported on the paper, they didn’t want to be too far behind the times.

  21. WebHubTelescope (@WHUT) commented on Atlantic vs Pacific vs AGW.
    in response to Mi Cro:

    Are you that incompetent that you can’t even put scales or reference points on your axes?

    The charts are marked in Years and Degree’s F

    What does min and max even mean in this context?

    I’ve mentioned this enough times, I thought you already knew. I subtract yesterday’s min (or max) temp from today’s min (or max) temp, and then average them in this case by year.

    Who knows what other crimes against data analysis are committed in making these charts.

    I averaged these mn or mx day over day differences together by area. I do filter out 6-8 million records out of 122 million because they just don’t have enough samples per year.
    Follow the URL in my name, all of my source code is there, plus the actual data.
    Here is the Excel file with these charts and the data used to produce them.
    http://sourceforge.net/projects/gsod-rpts/files/Reports/Charts/Combined.xls/download

    • Correct me if I’m wrong: your charts show the yearly average value of daily changes in Tmax or Tmin at land stations only.

      • That is mostly correct. NCDC’s Global Summary of the Day has some buoys in it; as long as a record has a set of coordinates and the selection includes those coordinates, it would be included.

    • Mi Cro:

      In that case, the cumulative sum of the yearly average changes should produce a time-series expressing the total departure from the initial value. I’m quite surprised that Tmin changes are an order of magnitude larger than Tmax changes and are quite sporadic in your results. Have you any explanation for this?

      • John S. commented on Atlantic vs Pacific vs AGW.
        in response to Mi Cro:

        In that case, the cumulative sum of the yearly average changes should produce a time-series expressing the total departure from the initial value.

        Yes it does. In fact I was thinking that might be useful, so I added a sum(mndiff) and sum(mxdiff), and started a report on some stations in the desert. You can get the same value by multiplying mndiff/mxdiff by the sample size for that value (in both of the reports I generate).

        I’m quite surprised that Tmin changes are an order of magnitude larger than Tmax changes and are quite sporadic in your results. Have you any explanation for this?

        I don’t include the temp data for a station-day if it doesn’t have a record for yesterday, because I look at the rising temp for yesterday so I can subtract last night’s falling temp. I do this during the import of the raw NCDC files; I get about 118 million station records out of ~124 million that meet this criterion.
        So every record has both min and max values, so that’s not the problem.
        I work with yesterday’s min/max and today’s min/max: rise, fall, day-to-day min difference, and day-to-day max difference, so all of the numbers are just various differences, which then get averaged (usually).
        So for a station that has a full year of data (365 or 366 days), if you sum min or max diff from Jan 1st 12:01 AM to Dec 31st 11:59 PM, the sum of the differences should be equal to the difference in temp between those two dates. If they are both 32 F, the diff will be 0.
        The difference in max temp is very small* with no clear positive or negative trend.
        * This only seems to be “larger” if there are missing daily records. If there are 3-4 stations and no days without a value, max diff is remarkably flat! Because max is so stable, the large variance in min is due to the data. But when you look at it regionally it’s obviously not global. As you go back into the 50’s, this is pretty noticeable.

        When you take the data pretty much as it is (no making stuff up) and look at it like I do, you see max temps being stable year to year. When looking at it yearly, though, you can’t tell how large the variance is, nor what the distribution of values is, just that year over year they’re not changing much either way.
        Min temps operate independently; they too stay sort of flat, with large dips. The min temp profile for the US does show below normal during the 70’s. The other continents all have cold min temps at various times. I haven’t been able to find an SST surface map to try and match them up, but that is the only thing I can think of.
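        For concreteness, here is a minimal pandas sketch of the statistic described above (column names and details are assumed for illustration; this is not the actual GSOD import code linked earlier):

        ```python
        # Day-over-day Tmin/Tmax differences per station, averaged by year; days with
        # no previous-day record are dropped.  Cumulating (mean diff * sample count)
        # approximates the net departure from the start of the record, per the
        # exchange with John S. above.
        import pandas as pd

        def yearly_diff_report(df: pd.DataFrame) -> pd.DataFrame:
            """df columns (assumed): station, date (datetime64), tmin, tmax in degrees F."""
            df = df.sort_values(["station", "date"]).copy()
            grp = df.groupby("station")
            df["mndiff"] = grp["tmin"].diff()   # today's min minus yesterday's min
            df["mxdiff"] = grp["tmax"].diff()   # today's max minus yesterday's max
            no_yesterday = grp["date"].diff() != pd.Timedelta("1D")
            df.loc[no_yesterday, ["mndiff", "mxdiff"]] = float("nan")
            yearly = df.groupby(df["date"].dt.year).agg(
                mndiff=("mndiff", "mean"), mxdiff=("mxdiff", "mean"), n=("mndiff", "count"))
            yearly["cum_mn"] = (yearly["mndiff"] * yearly["n"]).cumsum()
            yearly["cum_mx"] = (yearly["mxdiff"] * yearly["n"]).cumsum()
            return yearly
        ```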

  22. Posted at RC (hence written in the second person; apologies to Judith for that) without a response to date; views from here would be appreciated.

    There are a few things I can’t understand at all about Judith’s position:

    1) The assumption she makes at the start that because warming is judged extremely likely >50% anthropogenic, it is therefore asserted by IPCC to be between 50-100%. I’ve no idea where she gets this from, and she’s done it before and Bart V called her out on this.

    2) The significance of attribution in the 1910-1940 warming. Presumably if this is internal variability, then it makes it less likely, not more, that later warming was also internal variability, as reversion to the mean by cooling would be expected. If forced, then it’s irrelevant to the argument.

    3) The failure to work through to a logical conclusion. Judith states “i think the most likely split between natural and anthropogenic causes to recent global warming is about 50-50”. A PDF (unless lopsided) centered at 50% anthro makes it equally likely that anthropogenic forcing is 0% and 100%. Does Judith really believe that 0.65degC and zero degC rises are equally likely results of the changes in anthro forcing over the period 1950-2010? This seems inconceivable!

    4) There are a number of places where Judith makes statements based on inability to attribute in shorter time periods than the IPCC choose to consider. Regardless as to whether she’s right in this, it’s an apples vs oranges comparison.

    Have I misunderstood any of this or is she really clearly wrong on all these? I’m very wary of assuming I know better than someone as well qualified as she is.

    • “Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either. So climate mitigation policy is a political judgement”

      Tall guy- do you disagree?

      • I agree that great accuracy is not possible but disagree that it could be small – even the low-end estimates of impact are large. I agree that all policy (by definition) is a political judgement.

        I’m not sure what that has to do with any of my points or the subject of the post though – I’m trying to challenge my own reading of Judith’s analysis of 50:50.

      • Heh, vtg, they got the sign wrong. I’ve no confidence in the magnitude of the projected impacts. When will they ever learn? When will they ever learn?
        ===========

      • Rob Starkey says “everyone agrees that we can’t predict the long-term response with great accuracy”… actually, we can’t predict with any accuracy. Why does the IPCC never show results of backcast testing of the models? Answer: they do not explain history. And then, how can you be confident in using them in any kind of argument about the future, let alone in them predicting the future?

    • 1) The word “most” is used in the SPM, which is what the layman will interpret as > 50% at 95% confidence. Whether her “interpretation” is incorrect or not is unimportant, because either way the IPCC position looks very hard to justify.

      2) Why should reversion to the mean be expected? Or to put it more precisely, reversion to what mean? Over what time scale? If there are centennial internal variations – and looking back over the holocene it looks to me there are – then “reversions to mean” over a centennial scale will look like a trend on which shorter decadal length variations will be superimposed. This is the whole point about “coming out of” the LIA. It seems to me that the real argument is about the amount we know about the climate. Gavin on the one hand thinks we know a lot, and sufficient at least to be able to confidently say things about the 20th C and beyond, and Judith on the other thinks we don’t know enough to justify those claims.

      3) This seems to be a mischaracterization of Judith’s position. I take it to mean that of the 0.65 rise, about 0.325 is man-made and 0.325 is natural. Without man’s influence, in other words, we should only have seen about a 0.325 deg rise. I don’t see what is so hard to understand about this, or why it should be necessary to infer that Judith thinks rises of 0 and 0.65 are equally likely.

      4) Gavin makes the point that you need to consider longer time periods in order to detect a trend from the noise. But ignoring smaller time scales means you miss out on seeing what the climate is actually doing since it changes over smaller time scales as well, and this is why this post refers to the AMO and the PDO. It’s also why it’s important to say when you think you should be able to detect an anthropogenic signal.

      I sign as “Agnostic” because I am really trying to understand both sides of this debate. But if there is an argument to Judith’s position as outlined in the 50/50 post, it’s really not at all clear to me.

    • VTG, thanks for your thoughtful questions. Some quick responses:

      1) The assumption she makes at the start that because warming is judged extremely likely >50% anthropogenic, it is therefore asserted by IPCC to be between 50-100%. I’ve no idea where she gets this from, and she’s done it before and Bart V called her out on this.

      This is a semantic quibble (not a scientific one). The common interpretation of ‘half’ and ‘more than half’ refers to half of a 100% whole. I find the >100% thing to be highly misleading in the context of ‘more than half’; their main attribution statement should not have been phrased this way.

      2) The significance of attribution in the 1910-1940 warming. Presumably if this is internal variability, then it makes it less likely, not more, that later warming was also internal variability, as reversion to the mean by cooling would be expected. If forced, then it’s irrelevant to the argument.

      1910-1940 may be forced (e.g. solar, volcanoes); if so, our understanding of forcing (particularly solar) is way inadequate. If it is internal variability, then internal variability could also cause the 1980-2000 warming. Reversion to the mean makes no sense to me; there are oscillations on many time scales, climate shifts, etc.

      3) The failure to work through to a logical conclusion. Judith states “i think the most likely split between natural and anthropogenic causes to recent global warming is about 50-50”. A PDF (unless lopsided) centered at 50% anthro makes it equally likely that anthropogenic forcing is 0% and 100%. Does Judith really believe that 0.65degC and zero degC rises are equally likely results of the changes in anthro forcing over the period 1950-2010? This seems inconceivable!

      I make no assumptions about a normal distribution. I think both 0% and 100% are extremely unlikely.

      4) There are a number of places where Judith makes statements based on inability to attribute in shorter time periods than the IPCC choose to consider. Regardless as to whether she’s right in this, it’s an apples vs oranges comparison.

      As per the IPCC and Gavin’s post, the IPCC has detected anthro warming only since 1980. What is the rationale for considering the period back to 1950? The period since 1980 is only 34 years, and for the last 15 or so years there has been no warming. Periods of 30-40 years MATTER in this debate, given that the time period over which warming is attributable to humans is itself only that long.

      • The crux of the disagreement with the IPCC is: why is 100% extremely unlikely? Observations suggest that even a moderate 2 C transient sensitivity accounts for all the warming without having to invoke a net natural variation over the last 60 years. Even most skeptics have 2 C within their uncertainty range.

      • Judith,

        your reply is much appreciated, thank you. It would be interesting to go into all of your responses in much more detail, but I’ll restrict myself to a couple.

        1) I don’t think this is semantics, and I do think it’s central. Your critique assumes <100% anthro, yet the IPCC states “The best estimate of the human induced contribution to warming is similar to the observed warming over this period”, with which your “common interpretation” is in flat contradiction. Your critique would, to an interested layman like myself, be more powerful if it addressed what the IPCC says rather than a “common interpretation” of it.

        3) So you do indeed appear to find 0% and 100% anthro equally likely. I still find that incredible! I’ll try and explain why. Let’s look at GHGs only for simplicity. IPCC (AR5 box 12.2) gives a range of 1-2.5 for TCR with a midpoint of 1.8, and even Nic Lewis in his GWPF paper gives a range of 1-2 with a most likely value of 1.35. With CO2 at 307 ppm in 1950 and 390 ppm in 2010, that gives an expected response of
        Lower bound 0.35 degC anthro (54%)
        Lewis midpoint 0.47 degC anthro
        IPCC midpoint 0.62 degC anthro
        Upper bound 0.86 degC anthro (133%)
        Even taking Lewis’ numbers, this seems completely irreconcilable with your position. Are you now arguing TCR is constrained below 1?
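        (For what it’s worth, a quick sketch of that arithmetic, using the ppm values and TCR figures quoted above and the usual TCR × log2(concentration ratio) approximation; this is my own check, not VTG’s calculation:)

        ```python
        # Reproduce the four GHG-only warming figures quoted above.
        import math

        c_1950, c_2010 = 307.0, 390.0            # ppm CO2, as quoted in the comment
        doublings = math.log2(c_2010 / c_1950)   # ~0.35 of a CO2 doubling

        for label, tcr in [("Lower bound", 1.0), ("Lewis midpoint", 1.35),
                           ("IPCC midpoint", 1.8), ("Upper bound", 2.5)]:
            dT = tcr * doublings
            print(f"{label:15s} TCR {tcr:.2f} -> {dT:.2f} degC ({dT / 0.65:.0%} of observed 0.65 degC)")
        ```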

        Further, what is your rationale for a natural contribution centred at +0.32 degrees for 1950-2010?
        Just for instance, why is -0.32 degrees not equally credible?

        A couple of other posters have suggested a structured response to Gavin would be useful. I’d add to this request, and specifically suggest your PDF for the attribution with justification for midpoint and range (perhaps your version of fig 10.5?) would be a great way to illustrate your thinking clearly enough for a numpty like me to understand.

        Thanks again for responding, hope to hear more.

      • “I think both 0% and 100% are extremely unlikely.” VTG, this is entirely different from your “So you do indeed appear to find 0% and 100% anthro equally likely.” She just didn’t say that.
        I would urge moving away from the semantic issues (1) and (3); even if you agree on what she meant, they are not central points. I think Mosher’s comment below (http://judithcurry.com/2014/08/28/atlantic-vs-pacific-vs-agw/#comment-622130), which Curry seems to agree with, is the clearest expression of the difference between her and Gavin Schmidt: (a) We can tell from historical data that natural variation will frequently be more than half of the climate anomaly. (b) Admitted by all: no current climate models can predict natural variation. Therefore: (c) climate models are not usable for attribution.
        Approached properly, it isn’t hard, and all of Schmidt’s post disappears at once.

      • VTG
        semantically if IPCC meant

        “The best estimate of the human induced contribution to warming is similar to the observed warming over this period”

        They mean that there is no room for any natural variation in temperature over that time span of 34 years. None at all.
        They are asserting that in their opinion the world temperature would have stayed inviolate, i.e. changed not one iota, in that time frame.
        Now you know that this is not true.

        Over 34 years, without any human influence whatever, there would have to have been some variance, up or down.
        But the IPCC boldly asserts that all the warming could only be due to the human induced contribution.
        The second statement is obviously wrong. Give it up.

      • Miker613,

        I strongly disagree these are semantic points. The first appears to be a misunderstanding/disagreement on what the IPCC position actually is, and the second, far from semantic, is what appears to me to be a conclusion of Judith’s logic which generates a physically implausible result. But I’d be interested in Judith’s take on it.

        On natural variability replication in models, note PaulS’s comments below Mosh, which seem to show that natural variability in models does indeed follow observations for Mosher’s case. There’s a long discussion in AR5 on this point, section 9.5.3 and fig 33. Mosher doesn’t seem to like it (“weak”) – you’d have to ask him why not.

        I’ve posted a couple of follow on questions at RC, yet to appear. Let’s see if Gavin responds to them.

      • From Curry’s reply it is clear that (1) at least is semantic: she is doing nothing more than complaining about their wording. Personally, I don’t care how well they worded it.

        Mosher is the data guy for the BEST temperature project. I think that makes him one of the top experts in the world on instrumental temperature data for the last century and a half, which is what we’re discussing. I don’t know PaulS, but accept no substitutes.
        As for models and variability, I don’t think anyone (including AR5) claims that the models can do what Mosher is demanding. Power spectrums (Fig 33) aren’t enough. Doesn’t mean the climate models are bad or aren’t useful; it does mean they aren’t up to this particular job.

      • “specifically suggest your PDF for the attribution with justification for midpoint and range” Judging from most of Curry’s comments, and her entire earlier post about the “terciles”, I would imagine that her pdf probably is centered somewhere between 33% and 67%, anyhow has some non-trivial probability of <50%, and (from her "extremely unlikely") has tails that essentially vanish well before 0% and 100%. Which is why your comment about how 0% is ruled out by Nic Lewis etc. makes no sense to me as a complaint: Curry said that too. You only need to find out why she thinks 100% is also unlikely. I would agree with you that it seems far more likely than 0%, but of course she might believe that as well, while still thinking it's extremely unlikely.

      • miker613,

        Mosher says that more than 25% of 60-year trends for start dates from 1770-1890 (or 1753-1890) produce temperature changes in excess of 0.67ºC. I’ve looked at the current Berkeley data at their website and found the 75th percentile for the same is 0.6ºC. I’m not sure exactly the source of this small discrepancy, possibly a definition thing, but it’s close enough to say I’m on the right track.

        I’ve performed exactly the same analysis on the MPI-ESM-P data and found the 75th percentile lies at 0.53ºC. Note that we should expect the MPI-ESM-P data to have a smaller amplitude due to being land+ocean, whereas the Berkeley data is land-only.

        In case you didn’t see it, this plot shows the trends time series for both model and obs.

        Mosher hasn’t said how he’s determined the models don’t do this, but I’ve found model and obs. track quite closely over this period.
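        For anyone wanting to reproduce this kind of check, here is a minimal sketch of the 60-year-trend statistic (my own illustration on a made-up annual series; the actual inputs in the discussion above are the Berkeley and MPI-ESM-P records):

        ```python
        # Slide a 60-year window along an annual temperature series, take the OLS slope
        # in each window, convert it to a total change, and look at the 75th percentile.
        import numpy as np

        def sixty_year_changes(years, temps, window=60):
            changes = []
            for start in range(len(years) - window + 1):
                slope = np.polyfit(years[start:start + window], temps[start:start + window], 1)[0]
                changes.append(slope * (window - 1))   # degC change across the window
            return np.array(changes)

        # usage with a synthetic series (trend plus a ~65-year oscillation), purely illustrative:
        years = np.arange(1753, 2014)
        temps = 0.005 * (years - 1753) + 0.2 * np.sin(2 * np.pi * (years - 1753) / 65.0)
        print(np.round(np.percentile(sixty_year_changes(years, temps), 75), 2), "degC")
        ```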

      • Lars Karlsson

        JC: “I think both 0% and 100% are extremely unlikely.”

        This implies that JC thinks that it is extremely likely that the natural contribution is positive since 1950? Where does this certainty come from? Does JC really have such reliable information about natural factors (forcings and internal variability)? And how can this certainty be reconciled with the warming of the oceans?

      • Paul S, your link doesn’t work.

      • miker613,

        Strange, it works for me. Can’t see anything unusual about it. Does this work?

      • Nope, not in Chrome, Firefox, nor IE. Frustrating.

      • Miker

        I can see your link to a graph perfectly on my iPad
        Tonyb

      • That one looks nice! Are we looking at anomaly? Mosher, any comments?

      • Ah, yes, it’s slightly devoid of context after this long string of comments. What it shows are time series of 60-year trends through these records, where the first point relates to the 1770-1829 trend. The y-axis values relate to the 60-year warming/cooling rate indicated by the slope.

      • Ah. I do not think that would at all fulfill Mosher’s conditions, though perhaps I should leave that to others.

      • miker613,

        Not sure I understand? Mosher’s test was about matching the amplitude of 60-year trends in the Berkeley observations. I’ve shown that the model does match the amplitude, and also substantially matches the phasing of those larger trends.

      • I do not think so, though perhaps others will correct me. He only mentioned and calculated 60-year chunks to demonstrate that a large part of the anomaly from 1950 is likely to be matched by natural variability. Then the challenge becomes, can the models actually predict the natural variability or is it “noise”, outside their ken? To do that, the models have to predict the variability itself, not its 60-year trend.
        But again it would be better if Mosher explained himself; I’m just trying to follow.

  23. ” I live on planet Earth observations, and Gavin lives on planet climate model”

    Really excellent writing, vivid, precise, and gets right to the heart of the issue. Observations trump models.

    • stevekoch

      On planet climate model, models are a technicolour exciting reality and observations are always merely grey, anecdotal and boring. If they are older than 20 years they become ‘historical anecdotes’ and can be safely discarded as extremely tedious and irrelevant

      tonyb

      • Plus, Tony, on planet climate model you get paid to play supercomputer powered video games all day rather than having to go outside and actually measure something. In Gavin’s case, stuff like microclimate siting issues in the GISS raw observational inputs he then homogenizes into global warming trends.

  24. Jan-Erik Solheim et al. tell us – e.g., water is transported and spread throughout the Atlantic and exported to the Indian and Pacific oceans before upwelling in Antarctic waters. The return flow of warm water from the Pacific through the Indian ocean and the Caribbean to the North Atlantic, a distance of 40,000 km, takes from 13 to 130 years… Why do our Earthly model-makers believe they can deal with lag times like the example above? Consider that we are not even aware of all the natural phenomena — as we look back in time at past weather to tease out trends — that are involved in climate change. Who is to say that the effects of the polarity reversal of the Sun’s magnetic field (marking Solar Cycle 24’s midpoint) will not act in concert with other natural forces to amplify the effects of the Sun on the Earth’s future climate in ways we cannot comprehend? What has Western academia ever done in the past that would give us confidence in their belief that they can foresee the future and know how to change it?

  25. An increase of 0.8 C per century is not scary.

    “The underlying anthropogenic warming trend, even with the zero rate of warming during the current hiatus, is 0.08 C per decade.* [That’s 0.08 degrees Celsius, or 0.144 degrees Fahrenheit.] However, the flip side of this is that the anthropogenically forced trend is also 0.08 C per decade during the last two decades of the twentieth century when we backed out the positive contribution from the cycle….”

    http://dotearth.blogs.nytimes.com/2014/08/26/a-closer-look-at-turbulent-oceans-and-greenhouse-heating/

  26. Everyone understands your points- they arej ust wrong.

    • *are just

      • Yes to ‘are just’. Why do you have the extraneous ‘wrong’ at the end of the sentence.

        You and Gavin both assert. That’s not the same as refute. It’s in the rulebook. You could look it up.
        ==============

      • No, you have this backwards. There is an extensive literature on attribution describing the methodology in detail. Gavin is just summarizing some of the basic (and frankly, elementary) points that JC simply doesn’t understand. If she would consult an attribution expert this would be much less embarrassing and less of a waste of time for everyone involved, not to mention more informative for her readers. On the contrary, it is JC who has ‘asserted’ for 5+ years now, hand-waving about uncertainty with no compelling attempt at quantifying anything she is concerned about. Strings of out-of-context quotes and more uncertainty waving actually don’t change that.

      • “Elementary”, but wrong.

      • Attribution, she’s a bitch;
        Don’t know how just scratch that itch.
        Puff the Magic Climate
        Lived by the CO2.
        Nature turned and bit him, someplace rich.
        ===============

      • Ah, so ‘experts’ assert, and might you be expert?
        ==========

      • I’m no expert but I know that attribution is far from pinned down. Furthermore, the more you attribute warming to man, the colder we would now be without man’s input.

        Far better for us if the warming has been predominantly natural.

        You should be careful what you wish for us.
        ==============

      • Matthew R Marler

        Chris Colose: There is an extensive literature on attribution describing the methodology in detail. Gavin is just summarizing some of the basic (and frankly, elementary) points that JC simply doesn’t understand. If she would consult an attribution expert this would be much less embarrassing and less of a waste of time for everyone involved, not to mention more informative for her readers.

        One of the points on which Prof Curry and Dr. Schmidt agree is that attribution requires a comparison of the actual data (global mean temp, global mean rainfall, etc) to a model of what would have happened absent human alteration of the Earth surface and atmosphere. At this point, belief in one of those models over others is tantamount to a religious belief because none of them has been subjected to stringent testing of any kind (comparisons of model predictions to out of sample data being the best.) What you called Gavin’s summarizing of elementary points is actually Gavin’s selection of a few of the models that he happens to credit.

        That is not JC simply not understanding. It is your misunderstanding of the holes in the science behind some of the propositions in Dr Schmidt’s post.

      • Matthew,

        The first issue here has nothing to do with models- it’s simply that JC doesn’t understand how attribution methods work. Her comments about the probabilistic statements in IPCC, circular reasoning, sensitivity, etc are all written down and easy to examine without beliefs, conflicting worldviews, or confidence in models being relevant in any form. This is where I generally recommend consulting actual experts in the subject, but instead she refuses to listen to anybody who knows what they are talking about because it might be “groupthink” or “ideology.” Because JC is high-profile in these discussions, her statements get a lot of attention, and so unfortunately demand time from other people when she gets it wrong. Most frustrating, these misconceptions seem to require monthly to annual updated corrections, which seems odd for someone claiming to be actively researching a topic.

        As for your comment, the fundamental point here is that there can be no “planet climate model” and “planet observations” when talking about attribution. The two have to be married in some way. This idea of simply looking at data sounds sciency to the uninitiated, but without a coherent theory to link aspects of observed OHC, stratospheric cooling, vertical T profiles, warming patterns, etc, then you’re left with no story to tell and just a bunch of numbers on a computer that are only mildly interesting. In fact, there are several studies that do try to think about attribution from an observations-only perspective; in general, what they gain from avoiding assumptions about the model’s veracity in simulating the shape and timing of the expected responses, they lose more from making more substantial assumptions, many of which are crucial to JC’s concerns, such as how to separate forced+internal components or the timescale of response. Relaxing some of these assumptions can be done, but then a ‘model’ is being constructed implicitly.

        The other point is that attribution studies evaluate the extent to which patterns of model response to external forcing (i.e., fingerprints) explain climate change in *observations.* Indeed, possible errors in the amplitudes of the external forcing and a model’s response are accounted for by scaling the signal patterns to best match observations, and thus the robustness of the IPCC conclusion is not slaved to uncertainties in aerosol forcing or sensitivity being off. The possibility of observation-model mismatch due to internal variability must also be accounted for… so in fact, attribution studies sample the range of possible forcings/responses even more completely than a climate model does. Fundamentally, a number of physically plausible hypotheses about what else might be causing the 1950-present warming signal are being evaluated. Appeals to unknown unknowns to create the spatio-temporal patterns of GHGs, and to cancel out the radiative effects of what we know to be important, seem like much more of a stretch than the robustness of current methods.
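        To make the ‘scaling’ idea concrete, here is a minimal sketch (ordinary least squares on synthetic series, standing in for the optimal-fingerprinting regressions used in the attribution literature; nothing here is real model output or data):

        ```python
        # Regress 'observations' on model fingerprint patterns; the fitted coefficients
        # are the scaling factors that absorb errors in forcing amplitude or sensitivity.
        import numpy as np

        rng = np.random.default_rng(1)
        n_years = 60

        x_ghg = np.linspace(0.0, 0.9, n_years)    # hypothetical GHG fingerprint, degC
        x_oa = np.linspace(0.0, -0.25, n_years)   # hypothetical other-anthro fingerprint, degC

        # Synthetic observations: true scalings 1.1 and 0.8 plus internal variability.
        obs = 1.1 * x_ghg + 0.8 * x_oa + rng.normal(0.0, 0.05, n_years)

        X = np.column_stack([x_ghg, x_oa])
        beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
        print("estimated scaling factors (GHG, OA):", np.round(beta, 2))
        ```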

        Of course, the models might be doing everything wrong – aerosols might not locally cool and have a distinct pattern in space/time, CO2 might not cool the stratosphere, and solar might have a completely different fingerprint on vertical temperature profiles. There are a number of reasons to doubt that the models are useless in this respect, but since we’re making things up, there’s no point in defending what models seem to do well. Phlogiston might be the unifying theory we need. It is easy to make claims that the model internal variability is way off and can simultaneously lead to observed patterns (e.g., upper ocean heat content anomalies, tropospheric warming, etc.), but this is not a serious criticism until something more has been demonstrated. It is also easy to hand-wave about “multi-decadal variability” (which is fully acknowledged to exist in the real world). Regardless, the question of why every scientist doesn’t think the way she does seems self-evident at this point.

      • Matthew R Marler

        Chris Colose: As for your comment, the fundamental point here is that there can be no “planet climate model” and “planet observations” when talking about attribution. The two have to be married in some way. This idea of simply looking at data sounds sciency to the uninitiated, but without a coherent theory to link aspects of observed OHC, stratospheric cooling, vertical T profiles, warming patterns, etc, then you’re left with no story to tell and just a bunch of numbers on a computer that are only mildly interesting.

        I think I already agreed to that. Attribution and estimation of CO2 effects depend on models of the natural variation that would have occurred without the CO2. Those models (and there are a lot of them out there) might be strictly empirical curve fitting or really systematic and rich aggregations of multiple sources of evidence such as the GCMs. The GCMs to date are clearly inadequate; some other simpler models are at least close to the recent past.

        In fact, there are several studies that do try to think about attribution from an observations-only perspective; in general, what they gain from avoiding assumptions about the model’s veracity in simulating the shape and timing of the expected responses, they lose more from making more substantial assumptions, many of which are crucial to JC’s concerns, such as how to separate forced+internal components or the timescale of response.

        I think some examples of that would be helpful, especially a demonstration that "they lose more" by making substantial assumptions crucial to JC's concerns.

        Appeals to unknown unknowns to create the spatio-temporal patterns of GHGs, and to cancel out the radiative effects of what we know to be important, seems like much more of a stretch than the robustness of current methods.

        (1) I focus my attention on known unknowns such as the underestimation of effects of changes in surface water vaporization and the uncertainty of cloud cover changes; (2) Current methods have not been shown to be accurate, whether they are robust or not. GCMs, again, have consistently over-predicted subsequent warming, as did Hansen. None of the methods "robustly" predicted the "pause" until it was well underway.

      • Matthew R Marler

        Chris Colose: It is also easy to hand-wave about “multi-decadal variability” (that is fully acknowledged to exist in the real world).

        Besides the hand-waving there are serious attempts to study it and quantify it. I think it is fair to say that a reasonably thorough and accurate model does not exist yet. The Tsonis et al. model might eventually be shown to be sufficiently accurate; the recent paper by K.K. Tung is informative; the stadium wave paper is informative. Your statement about "hand waving" is off-target. It isn't enough to fully acknowledge that they exist: they have not been adequately quantified, and that ought to be acknowledged.

      • “If she would consult an attribution expert”

        Coming to a University near you soon, post-graduate degrees in Attribution; become a Dr in Attribution and join the fastest growing field in science.

      • Alas, the “extensive literature on attribution” is quite peculiar to climate science, which indulges in singular claims on evidentiary bases that would not be acceptable in any serious science.

    • Matthew R Marler

      Chris Colose: Everyone understands your points- they are just wrong.

      I think that Gavin Schmidt showed that Prof Curry's statements can be contradicted by other statements, but his statements are at least as dubious as hers. His assertion that the models will be more correct over longer time periods, for example, has not been shown to be true, so there is no good reason to give it a lot of weight. Should it prove true in the future, then we'll be able to say that Prof Curry has been wrong. If the future shows otherwise, then we'll be able to say that Gavin Schmidt has been wrong. On evidence to date, I do not see that Prof Curry has been proven wrong.

      Over time I have come to think that Prof Curry is closer to the truth, especially through her emphasis on the sources and sizes of uncertainty.

      I should mention at least once I appreciate the moderators at RealClimate for posting my comment. I stopped posting when they suppressed some comments that I thought were reasonable and on point.

      • Don’t be deceived by their temporary ‘generosity’, they even allowed my stuff which many times before ended in the BoreHole.

    • Chris

      Thanks for your insightful analysis.

      Is that it or would you like to be more specific?

      Tonyb

    • Matthew R Marler

      Chris Colose: This is where I generally recommend consulting actual experts in the subject, but instead she refuses to listen to anybody who knows what they are talking about because it might be “groupthink” or “ideology.” Because JC is high-profile in these discussions, her statements get a lot of attention, and so unfortunately demand time from other people when she gets it wrong.

      This is the place where links and citations would be helpful. I frequently find that references to scholarship elsewhere do not support the claims made about them. I buy a bunch of books and download a few dozen papers each year. I read many links. Whatever you have, I'll read. I usually find that what are claimed to be "debunkings" and such are based, like Gavin Schmidt's remarks on Judith Curry's post, on alternative questionable assumptions. I find that there are a lot of things that are not known very precisely or completely. Most commonly I find the assumption that the radiative balance and equilibrium calculations (ignoring non-radiative transfer) are directly relevant to the dynamic climate, to at least the first significant figure and at a short time frame.

  27. David L. Hagen

    “It's warming” or not “it's warming”, that is the question
    (with apologies to Shakespeare)
    Curryja
    Congratulations on clearly exposing the difference:

    I live on planet Earth observations, and
    Gavin lives on planet climate model.

    William Briggs expertly addresses this issue in:
    We Know The Climate Is Warming Because It Isn’t

    What do you call the mental process which allows a man to say “What’s firmly established is that the climate is warming” while also holding that “There’s been a burst of worthy research aimed at figuring out what causes the stutter-steps in the process—including the current hiatus/pause/plateau [in warming]”?
    Which is it? The climate is warming or it isn’t? . . .
    If the statistical model that said the “hiatus” was “statistically significant” was any good, it would be able to skillfully predict future temperatures. Can it? . . .
    The reason good scientists do not believe in apocalyptic global warming theory is because that theory has failed consistently (and outrageously, given its hype) to produce skillful predictions.

    Therein lies the rub!

  28. Judith –

    Are you going to take the time to make a detailed explanation of how you think that Gavin’s response missed your main points? He took the time to make a point-by-point explanation of his position; perhaps readers would benefit from you doing the same?

    I mean perhaps you’re saying that to respond to his post would be to simply repeat “this misses the point” over and over – but maybe if you offer something more of a detailed explanation for why his responses miss the point, it would be helpful?

    • Particularly given that one complaint offered by “skeptics” is that “realist” climate scientists refuse to debate the science (with the resulting conclusion drawn by “skeptics” that the refusal to debate is some kind of protective measure against having to acknowledge error).

      Seems to me that this is a great opportunity for you to engage in a discussion of the science, free from name-calling. I think that both of you have made extraneous and counterproductive remarks that have nothing to do with discussion of the science (e.g., that you're making stuff up, that you and he live on different planets [paraphrasing]), but at least the dialog is somewhat higher quality than what is usually found in the blogosphere.

      Maybe you could make the most out of this opportunity?

      • Easy, is attribution settled or not? Every ongoing day of models failing unsettles.
        ==========

      • AR5 science backed down on settled ’til the SPM. Then the politicians settled it. Ain’t science great?
        ============

      • Joshua

        I agree. It would be nice to see Gavin's piece thoroughly examined point by point and either refuted or acknowledged as having some merit.

        tonyb

    • Well Joshua, Judith typically mentions the IPCC statement that it is “extremely likely (>95%)” that “most” of the warming since 1950 is anthropogenically caused.

      http://www.realclimate.org/images/attribution.jpg

      Then Gavin's pdf should have 95% probability above the 50% line, not approximately 99.25%, which would put his maximum probability at 100%, not 110%. Gavin also shows attribution as normally distributed, meaning there could be up to about 170% anthropogenic, when the same probability should be at 50% and 150% for a normal distribution. Based on what I have heard Pekka say, we all can stop reading right there at the first screw-up.
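
      For illustration, here is that shape argument checked with an off-the-shelf normal distribution in Python (the mean and spread below are rough values implied by the figure, purely an assumption for illustration, not Gavin's actual numbers):

      from scipy.stats import norm

      # Illustrative normal pdf for the anthropogenic fraction of warming;
      # mean and sigma are assumed values chosen to roughly match the figure.
      mean = 1.10   # mode/mean of the attribution fraction (110%)
      sigma = 0.25  # spread; purely illustrative

      # Probability that the anthropogenic fraction exceeds 50%
      p_gt_50 = 1 - norm.cdf(0.50, loc=mean, scale=sigma)

      # By symmetry of the normal, the chance of exceeding the mirror point
      # (mean + (mean - 0.50)) equals the chance of falling below 50%
      upper_mirror = mean + (mean - 0.50)
      p_gt_mirror = 1 - norm.cdf(upper_mirror, loc=mean, scale=sigma)

      print(f"P(fraction > 50%) = {p_gt_50:.4f}")                    # roughly 0.99 for these numbers
      print(f"P(fraction > {upper_mirror:.0%}) = {p_gt_mirror:.4f}")  # same as P(fraction < 50%)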

    • I am thinking I might need to respond to Gavin's essay point by point, but I really don't feel like doing this; I am leaving for travel and won't have much time this weekend. Plus I have a lot of really interesting posts in the pipeline. So I probably will respond in more detail, but it might be up to a week before I get to it.

      • From what I read on twitter, most serious scientists seem to think Gavin makes points that make you look foolish.

      • yeah, scientists like Michael Mann and Chris Colose. Almost as scientific as a Cook/Lewandowsky survey. On the other side, lots of people are retweeting my posts. Would be interesting to do a twit analysis on this. Bottom line is that few climate scientists are on twitter (a relatively large population in the UK tho).

      • curryja: Would be interesting to do a twit analysis on this.

        I’m sure MM would rate highly!

      • I just love it; assertion by twitter scan. Ain’t science great?
        ===========

      • World Serious Scientists.

        OK, I stole ‘World Serious’. H/t Ring Lardner.
        ====================

      • curryja: I am thinking I might need to respond to Gavin's essay

        Although tit-for-tat might be entertaining and good spectator sport, it may be detrimental to the influence you currently have. Gavin is an intelligent scientist; due to the ever longer 'pause', he must see that the writing is on the wall, but his current position doesn't offer him an acceptable alternative. I would think that at this point in time it would be much wiser to stay away from further confrontation.

      • Matthew R Marler

        curryja: Plus I have a lot of really interesting posts in the pipeline.

        My vote (I know you did not ask, but…) is that you finish those posts first and then come back to Gavin Schmidt's post later. I think you said it correctly above: he gives much more weight to models (and a selection of all models) than you do.

      • I’m intrigued by Matt Skaggs comments (here and at RC). The climate attribution problem needs a reframe.

      • “From what I read on twitter, most serious scientists seem to think Gavin makes points that make you look foolish.”

        Twit.

      • What about the post on Salby? That’s attribution too. And some of the thoughts that come up may be influential in the warming attribution.

      • Coming next week, too much breaking news this week.

      • Let’s hope so because this initial response was pretty thin gruel.

      • Bwwwwwaaaaaaaa!!!!

  29. I don't know how this will shake out, but reading the tenor on RC and here is mind-boggling to me and makes me think about the "science court" concept.
    It appears that, depending on which site you prefer, I'm supposed to conclude that either Judith Curry or Gavin Schmidt is a doddering fool who couldn't open a door without intellectual guidance. I don't believe that's true for either.
    It really is a shame that there isn't an objective outlet that can translate this into: here are the good points Judy makes, here are the good points Gavin makes.
    One thing struck me at RC: Schmidt says that absent AGW it would be cold and getting colder. How cold? What damage would the cold do? Is there an international effort to develop policies to make the world cold?

    • Yes. If the warming has been predominantly natural, we've bounced off the low of the Holocene at mid precession cycle. If it's predominantly man-made, then without man we'd be cooler than ever before in the Holocene and on time (heh) for the next glaciation.

      We don’t have enough fossil fuel to stop the next glaciation, unless sensitivity is astronomical, and we’d have seen that by now.

      Do we have enough to delay glaciation, or get us over the hump of the half cycle?

      To be continued, same time, same place, next week.
      =============

      • We don’t have enough fossil fuel to stop the next glaciation, unless sensitivity is astronomical, and we’d have seen that by now.

        Orbiting mirrors.

      • Solar panels and microwave transmission to Earth. Got ya’ covered, Boo.
        ==========

    • And thanks Jeff for the clue to Gavin. He’s finally snapped to the fact that demonization of warming has been wrong from the gitgo. When will they ever learn? When will they ever learn?
      ============

      • I’m guessing he means a temporary cooling phase, like the ’60s and ’70s. Which means the 80s-90s were a warming phase and Curry is right that the definition of “most” isn’t settled.

        It stumped at least one of his followers tho: http://www.realclimate.org/index.php/archives/2014/08/ipcc-attribution-statements-redux-a-response-to-judith-curry/comment-page-1/#comment-586823

        That could be read to mean the overall long-term trend is supposed to be cooling but we humans mucked it all up and made it warm. Is there a chart showing what someone thinks the temperature trend would be if human ancestors had gone extinct?

      • “Is there a chart showing what someone thinks the temperature trend would be if human ancestors had gone extinct?”

        No, there wouldn't be such a thing, as all it would do is risk exposing the fraud. But one is given a strong clue if you believe in Mann's hockey stick. One is supposed to believe that the blade of the hockey stick is all caused by human activity- that is the whole reason the hockey stick was fabricated: the amazing rise in temperature could only be explained as the result of human activity- industrial activity [in accordance with Luddite fantasy].

        So if one believes the hockey stick is based upon reality, then the glaciers would still be advancing globally- or the Little Ice Age would not have ended in 1850.

    • Jeffn, you are experiencing the 'science court'. It's just that we could use more well-trained advocates. I expect the internet (not invented by Gore) will sort that out all by itself. See Judith's blog statistics below for a bit of evidence.

  30. Gavin and the rest of the inhabitants of "Planet Climate Model" have too much invested, literally and figuratively, to even consider the possibility that the models fail in myriad ways, even though there have been dozens of peer-reviewed publications that describe those failings in detail.

  31. “Gavin and I seem to live on different planets: I live on planet Earth observations, and Gavin lives on planet climate model.”

    Thanks for the chuckle. I don’t know how some of these guys sleep at night.

  32. Judith
    There are obvious long-term natural periodicities in the temperature data.
    Both Gavin and you are simply not seeing the wood for the trees – you both refuse to take into account the very evident 960-80 year quasi-periodicity in the record; see Figs 5-9 in the latest post at
    http://climatesense-norpag.blogspot.com
    Over time the 60 year periodicity essentially zeroes out. The underlying 20th century rise is part of the 960 year periodicity. There is no room left for anthropogenic warming of other than minor significance.
    The current hiatus represents a peak in both the 60 and 960 +/- year periodicities.
    The linked post also forecasts the timing and amplitude of a possible coming cooling.
    Forecasts and even discussions which do not account for the timing and amplitude of this natural quasi-millennial variation are pretty much irrelevant.

    • Matthew R Marler

      Dr. Norman Page: The underlying 20th century rise is part of the 960 year periodicity.

      That is a possibility, but it is based on post-hoc model fitting. Unless there is some way to test the model predictions against out-of-sample data (perhaps extensive data on a hypothetical underlying cause of the 960 year approximate periodicity), then it is just one of the many models out there.

      There are models that give large CO2 effects, models that give moderate CO2 effects, and models that give small CO2 effects as outputs; but there are no models that have made confirmed accurate predictions against out-of-sample data.

      • In the real world there is no out-of-sample data. Forecasts can only be tested against future temperatures over time scales sufficiently long to be outside the range of shorter-term variability. In this case the key periodicity of interest is the natural quasi-millennial periodicity – see Figs 5-9 at
        http://climatesense-norpag.blogspot.com
        Note that temperatures in Fig 9 can still exceed the 50 year moving average peak 100 years and more after the moving average peak.
        Nothing can be proved by models, and GCMs are inherently worthless.
        The way to go is simply to state clearly what the working hypothesis is and what reasonable assumptions went into it – in my case the basic assumptions are that the current warming peak is a synchronous peak in the 60 and 960 year periodicities and that the 10Be and neutron count records are the best proxy for solar activity.
        The 960 year carrier wave variability can then be modulated – i.e. shorter-term forecasts can then be made by looking at and projecting forward the shorter-term periodicities in the PDO, AMO, etc.

      • Matthew R Marler

        Dr Norman Page: In the real world there is no out-of-sample data. Forecasts can only be tested against future temperatures over time scales sufficiently long to be outside the range of shorter-term variability.

        I agree with you there. That is why it cannot now reasonably be concluded that your proposition about the 960 year period is accurate. It is one of many models.

  33. “Gavin and I seem to live on different planets: I live on planet Earth observations, and Gavin lives on planet climate model.” So is this the basic disagreement: Gavin thinks that the climate models are good enough that they can be used for attribution, and Curry thinks that they clearly are not good enough, since the issue that we’re discussing, CO2 vs. natural variation, is exactly what the models are failing to capture, both early in the last century, and during the “pause” early in this century.

  34. Steven Mosher

    Judith
    The discussion about percentages is entirely confusing.

    Starting from 1950, the anomaly in Land is around 0. The Anomaly in Land plus ocean is around 0.

    60 years later we have

    Land anomaly : 0.9C
    Land + Ocean Anomaly: 0.5C

    For simplicity's sake I will just look at the land.

    The RIGHT question to ask is this: How many C of warming between
    1950 and 2014 could be caused by natural variability?

    Why ask the question that way? Well, when you ask the question THAT WAY
    you see that GCM modelling can only get you the answer IF it knows how
    to model natural variability with the right phasing and amplitude. As Gavin
    admits, getting the phasing correct is impossible.

    Then we get to ask the question: based on observation, what is the amplitude of natural variability? For the land I took all the years from 1753 to 1890
    and simply computed the rise or fall in the anomaly. Crude, but an adequate first whack.

    I emailed you the results. Regardless of how you slice the data you come up with an answer that amounts to this.

    Before you start your GCM experiment you can reasonably assume based on observation that there is a substantial probability (>25%)
    that natural variability can contribute over 0.45C of the observed 0.9C rise in land temperatures.

    When you start the question the OTHER way, when you start by trying to quantify the human contribution, the assumption that models get natural variability right is buried. When you ask a different question, how many C
    of the 0.9C observed is due to natural variability, the question of getting
    both the amplitude and phasing of that variability is put front and center.
    That is how the question you ask frames and "biases" the result.

    And the clearest way to test whether GCMs are fit for that purpose is to compare historical GCM performance
    (from 1750 to say 1890, or some other early period, say 1760 to 1880) with the observations during that period. They don't get the amplitudes right; they can't get the phasing right. The former is doable.

    Finally, it's interesting to note that Gavin plays the same explanatory game
    as sun nuts. Looking at the record, the sun nut will appeal to the X factor:
    some unknown, unmeasured thing about the sun.
    In Gavin's version of this, he argues for the possibility of anthro forcing projecting onto natural variation. Yes, it could be unicorns.

    • Matthew R Marler

      Steven Mosher: As Gavin
      admits, getting the phasing correct is impossible.

      You caught that as well. He cited a result from spectral analysis, but his use of it required phase information on the oscillations as well; his comment on the lack of phase information came in a different paragraph.

      On that and some other points I thought he might not be consistent, in addition to relying too heavily on the models.

    • Post this at RealClimate?

    • Agree or disagree, I find this approach much more reasonable and clear than Gavin's. I am not quite sure I understand your point about Gavin using the same explanatory game as sun nuts. Are you saying his "X factor" is ACO2?

      • Steven Mosher

        No, his X factor is the projection of anthro onto natural.
        He is arguing that you can't rule that out, and that Judith rules it out.

        STRUCTURALLY this is a skeptical argument: you can't rule out some
        X factor from the sun.

        It's an appeal to ignorance to cause doubt in your opponent's argument.

        Strip away the details of the argument and look at the structure.

      • Steven Mosher

        Yes, of course it's more clear. You avoid the whole question of 110%;
        you even avoid the whole question of anthro forcing. You never use
        anthro forcing. You simply do this.

        1. Establish the empirical distribution of 60 year changes over the
        1770-1890 period (start dates).
        2. You run your models with only natural forcing over that same time
        period.
        3. You ensure that your models can AT LEAST capture the broad
        strokes of this distribution.
        4. If they can, you simulate the 1950 to 2014 period a bunch of times.
        5. Those runs will give you a distribution of the warming/cooling
        you can expect from natural forcing.
        6. You subtract that from observations and conclude that the NET
        is caused by humans.

        A totally different approach to the "same" question. Except you probably
        have to stop after step 3, because you fail at that point.
        This failure is then demonstrated.
        In Gavin's world, success in capturing natural variability is ASSUMED
        and then bolstered weakly with some argument, rather than tested
        directly and rigorously.

        In short, I'm arguing that in attribution, the best way is to NOT run
        the anthro runs. Today's observation is 0.9C of warming.
        That is the sum of all effects.
        Estimate the natural using models that excel at natural.
        Subtract.

        Here the assumption is that what can't be explained by natural forcing
        is anthro.
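
        A bare-bones sketch of steps 1-6 above in Python; every array below is a made-up placeholder standing in for the real Berkeley series and real natural-only GCM runs, so only the bookkeeping matters, not the numbers:

        import numpy as np

        rng = np.random.default_rng(0)

        # 1. Empirical distribution of 60-year temperature changes (early start dates)
        obs_early = rng.normal(0.0, 0.3, size=180)   # placeholder annual anomalies
        changes_60yr = np.array([obs_early[i + 60] - obs_early[i] for i in range(120)])

        # 2-3. Natural-only model runs over the same window; check they at least
        #      span the observed distribution before trusting them any further
        model_changes = rng.normal(0.0, 0.15, size=(50, 120))   # placeholder runs
        covers_obs = (model_changes.min() <= changes_60yr.min()) and \
                     (model_changes.max() >= changes_60yr.max())

        if not covers_obs:
            # Stop at step 3: the tool cannot reproduce the observed natural range
            print("Model fails step 3; do not proceed to attribution.")
        else:
            # 4-5. Simulate 1950-2014 many times with natural forcing only
            natural_1950_2014 = rng.normal(0.0, 0.2, size=1000)  # placeholder runs

            # 6. Subtract from the observed change; the residual is attributed to humans
            observed_change = 0.9
            human_residual = observed_change - natural_1950_2014
            print("median human contribution:", np.median(human_residual))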

      • Mosher, please learn to use grammar and punctuation so that reading your comments isn’t such hard work.

      • Steven Mosher

        me.

        on a phone. deal with it

      • “On a phone” – me too. Use the grammar and spell checking functions. Don’t be a dumb ass.

      • Mosh

        Can I recommend the asus phone pad, a seven inch tablet with a phone that won’t be your main one but is very useful in the right circumstances. I can’t see the screen of a smartphone well enough to use it and don’t know how or why people follow sport on it. you do well to use your phone but your posts could be EVEN better with the right tool. ;)

        Tonyb

      • Mosher’s spelling and grammar does not improve when he uses a larger keyboard.

        Trust me on this.

      • Tom

        I thought he was an English Major? Then again I thought he was a temperature expert. (just joking Mosh)

        tonyb

      • Mosh

        Stay with the phone. The net result is very often a very high signal to noise ratio…very useful.

      • Steven Mosher

        Me.
        I will pay you to edit. 2.50 per hour.
        That’s all u can do
        And that’s all you’re worth.
        Esad

    • Then we get to ask the question.. based on observation what is the amplitude of natural variability. For the land I took all the years from 1753 to 1890.

      For natural variability, take all the years for ten thousand years. We are well inside the bounds of the most recent ten thousand years.
      There is no data that shows anything other than CO2 is out of bounds. CO2 only matters in how green things grow.

    • Mosh

      Surely this is making an assumption that today’s temperatures are unprecedented and therefore man must have had something to do with making a substantial contribution since 1950?

      If natural variability is much greater than scientists such as Phil Jones realised (although it took him until 2006 to admit this), why cannot the observed warming since 1950 be solely attributed to this?

      Can we be certain that co2 did not lose the ability to warm after around 280ppm?

      tonyb

      • Steven Mosher

        It has NOTHING to do with unprecedented.

        I look at all 60 year periods from 1753 to 1890.
        I create an empirical distribution of the change in temp over 60 years.
        I note that theory tells me these periods will have smallish anthro forcing.
        I assume the distribution shows me the true probability of large
        natural 60 year swings. 25% of the time these swings EXCEED
        0.67C.

        So, before I start my GCM experiment I have a prior belief.

        1. Natural 60 year swings can exceed 0.67C 25% of the time.
        2. To capture the natural contribution between 1950 and 2014
        I need a TOOL that can reproduce that distribution, or come close.

        3. I would then test the GCMs on this early period. Do they have
        the gross skill necessary to estimate the natural warming or cooling
        in the period of interest, 1950-2014?

        I would agree with Judith that the GCMs don't have the capability of getting the natural warming correct, and one would simply NOT do the attribution experiment.

        Gavin reverses this: they do the experiment to quantify the anthro portion
        with a prior that the anthro portion is large and with an assumption that the models are adequate to capture natural variability.
        This assumption is "validated" weakly by looking at spectra.

        I'm flipping the script and saying that what you want to measure is the natural component, and to do that you foreground the validation of the models' ability to capture natural variation. And you start with a different expectation: history shows you that natural variation can be quite large.
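
        As a toy illustration of that empirical prior: slide a 60-year window over a pre-1890 annual land series and ask how often the swing exceeds 0.67C. The series below is synthetic; the real exercise would use the Berkeley land annual averages.

        import numpy as np

        rng = np.random.default_rng(1)
        start_years = np.arange(1753, 1891)            # 1753-1890 start dates
        # Synthetic random-walk stand-in for the annual land anomaly series
        anoms = np.cumsum(rng.normal(0, 0.08, size=start_years.size + 60))

        swings = np.array([anoms[i + 60] - anoms[i] for i in range(start_years.size)])
        frac_large = np.mean(np.abs(swings) > 0.67)

        print(f"fraction of 60-year swings exceeding 0.67C: {frac_large:.2f}")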

      • I think the problem is with what is assumed to be the full set of natural forcings. If you are over-confident that you have accounted for them correctly, or you have simply missed some out, you can be drawn to a conclusion that the remainder MUST be human.

        I believe most "mainstream" skeptics believe that a portion of the warming (or, in the case of the hiatus, the lack of cooling) MUST be anthro, but the extent of it is what has implications for responding to it. If, as Judith and many others suspect, natural variability is under-accounted for, then that explains why temperature rises have been over-predicted, since the ACO2 can be most readily quantified.

      • You don't even need models. You can do as Lovejoy (2014, Clim Dyn) did, which is to derive a natural variability of global surface temperature from solar and volcanic forcing which is 0.2 C for decades to century time scales. Then what has happened since 1850 is a rise of about 4 standard deviations, which is unlikely at less than 0.1% to be part of that previous variability. That is, natural variability is rejected with 99.9% certainty, just from the shape of the temperature curve, which is unprecedented.
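
        A quick check of the arithmetic behind that claim (the numbers are approximate and purely illustrative): with natural variability of about 0.2 C, a rise of roughly 0.8 C is 4 standard deviations, and the one-sided normal tail probability is far below 0.1%.

        from scipy.stats import norm

        sigma_natural = 0.2    # Lovejoy-style natural variability estimate (C)
        observed_rise = 0.8    # approximate rise since 1850 (C); illustrative
        z = observed_rise / sigma_natural
        print(f"z = {z:.1f}, one-sided tail probability = {norm.sf(z):.1e}")   # about 3e-5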

      • Matthew R Marler

        Jim D: You don’t even need models. You can do as Lovejoy (2014, Clim Dyn) did, which is to derive a natural variability of global surface temperature from solar and volcanic forcing which is 0.2 C for decades to century time scales.

        that is a particular model of natural variation, and it leads to a conclusion about anthropogenic effects.

        Alternatively, you can take an estimate of anthropogenic effects (e.g. the calculated change in equilibrium climate mean temp), and from that you can derive a conclusion about the natural variation.

        If you are confident in your model of natural variation (as Dr Norman Page and Nicola Scafetta seem to be), then you can be confident of your human attribution. If you are confident of your human attribution, then you can be confident of your estimate of the natural variation.

        Either way you get results that are consistent with the extant data. Which way, or if any way now known, will predict the next 20-50 years with sufficient accuracy for policy decisions is not now known.

      • The problem is that 1753-1890 variability in the global average Berkeley data is:

        1) Affected by very large volcanic forcing which didn’t occur to the same extent since 1950, so the likelihood of natural trends on the order of 0.67degC is substantially reduced.
        2) Almost certainly wrong, and biased towards greater amplitude, due to poor geographical coverage. The variability in the Berkeley 20th Century clearly has a different structure, and in particular a lower amplitude, than that seen between 1770 and 1890.

        The problem is that 1770-1890 variability in the global average Berkeley data is:

        1) Affected by very large volcanic forcing which didn’t occur to the same extent since 1950, so the likelihood of natural trends on the order of 0.67degC is substantially reduced.
        2) Probably biased towards greater amplitude due to poor geographical coverage. The variability in the Berkeley 20th Century clearly has a different structure, and in particular a lower amplitude, at least at high frequencies, than that seen between 1770 and 1890.

        To illustrate the point this plot shows 60-year trends from MPI-ESM-P past1000yr+historical simulation against Berkeley Earth 60-year trends from 1770-1890 (start dates). Note that the MPI-ESM-P data is land+ocean SAT, while Berkeley is Land-only. If you look at this you’ll see that all the periods which feature 0.6+degC warming are also periods in which the model produces large warming trends. This means these periods are very likely not examples of internal variability, but rather forced response.

      • That looked alright when I posted. Anyway you can see how I modulated the language in the second version ;)

        Actually, I think we can use this model-obs comparison to crudely estimate possible magnitude of internal variability by taking the standard deviation of the difference between modelled and observed trends. For all start dates from 1770-1890 the standard deviation comes to 0.21. However, there is a clear disconnect between standard deviation in the first half of this period compared to the second, 0.2 versus 0.07. As mentioned above, the second half of this observational data is almost certainly more reliable so I would weight the true number to be closer to 0.07. Taking things up to the present (60-year trends ending in 2005) the full 1770-1946 standard deviation is 0.18. Again, if we only look at start dates from 1830 it is 0.09.

        This agrees with the IPCC likely estimate for internal variability over a 60-year period of +/-0.1ºC.
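
        A sketch of that residual calculation in Python; both series below are synthetic stand-ins for the Berkeley Earth observations and the MPI-ESM-P run, so the printed number is illustrative only:

        import numpy as np

        def trends_60yr(series, start_idx):
            """Least-squares trend (C per 60 years) for each 60-year window."""
            t = np.arange(60)
            return np.array([np.polyfit(t, series[i:i + 60], 1)[0] * 60 for i in start_idx])

        rng = np.random.default_rng(2)
        n_years = 240                                  # e.g. 1770 onwards, synthetic
        forced = np.linspace(0.0, 0.8, n_years)        # shared forced response (toy)
        obs = forced + rng.normal(0, 0.15, n_years)    # stand-in "observations"
        model = forced + rng.normal(0, 0.15, n_years)  # stand-in "model historical run"

        starts = np.arange(n_years - 60)
        resid = trends_60yr(obs, starts) - trends_60yr(model, starts)
        print(f"std of (obs - model) 60-year trends: {resid.std():.2f} C")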

      • Paul S: For all start dates from 1770-1890 the standard deviation comes to 0.21. … Taking things up to the present (60-year trends ending in 2005) the full 1770-1946 …

        I think counting on the sampling being sufficient to understand global temps prior to 1950 is really foolish. There are just so few weather stations.

      • Beautiful. So where’s the credibility for more than half of the 1950-2010 temp rise being anthropogenic?
        =============

      • Doc, you just nailed the whole shebang. Wish I had thought of that.
        Congratulations on the most succinct explanation ever.

    • Thanks, good post.

  35. Here are some blog stats:

    Overall, it's a pretty average day for hits at CE: about 7000 so far.

    The post 50-50 attribution has about 600 hits

    The post Atlantic vs Pacific vs AGW has about 1700 hits.

    Of these, 76 hits have come from RC and 154 from Twitter

    There have been a total of 59 links to RC coming from CE

    Overall this exchange seems to be a relative non event, of interest to a few inside baseball players.

    Not sure how many hits RC post is getting, but overall their blog traffic is a lot lighter than at CE as per Alexa.

    • Translation: please do not waste your valuable time on a reply to a non-rejoinder from a groupie CAGW believer who has already drunk the Kool-Aid.
      As to the 'could not so did not' elementary-school rejoinder, my humble advice would be the same as that of the great country singer Kenny Rogers – 'know when to walk away, know when to run… that's the secret to surviving'.

      But ultimately it is your call, as I am just an ardent fan of your fully chosen yet difficult choices.

      • And know when to hold ’em.

        Attribution ain’t settled, nor all RC crowing to the wind. The question will continue to recur, and I’m loading up the pot on observations.

        I can’t figure out if models are marked cards or cards up the sleeve, but they ain’t according to Hoyle. The slingers are holding fire, but sure do like a clean game.
        ===================

  36. I’m wondering also how this plays into the apparent disagreement between Curry and the rest of BEST. They seem to be publishing papers that say that now we can say that CO2 is 100% responsible, and Curry is demurry (surry about that). Mosher, Dr. Curry – what are you arguing about?

    • Actually I think Mosh and I agree on this one; I agree with Mosher's comment (and the email he sent me). I disagreed publicly with Muller's 100% attribution paper.

    • Well, what does Richard Muller think? How are they doing this? Mosher’s comment seems to make a lot of sense to me.

      • Steven Mosher

        Muller encourages me to think and write for myself. Of course, I can also argue quite well for HIS position.

        The question of attribution is tricky. Depending on where you start, depending on the question you ask, depending on your method, you
        get different answers.

        What's that tell me? Well, that goes to the structural uncertainty that Gavin says we need to articulate, or more broadly the methodological
        uncertainty. When different REASONABLE methods come to radically different results, you had better stop and understand that it is your analytical
        choices that may be driving the answer.

        On the other hand, when different methods (GISS, HadCRUT, NOAA, BEST)
        using different data come to the same answer, you can be reasonably certain that your choice of method isn't driving the answer.

      • Steven Mosher: … or more broadly the methodological uncertainty. When different REASONABLE methods come to radically different results.

        Other than dismiss it out of hand because you don’t like it?

        On the other hand, when different methods (GISS, HadCRUT, NOAA, BEST) using different data come to the same answer, you can be reasonably certain that your choice of method isn't driving the answer.

        Maybe there is a common method that is fatally flawed, i.e. your "methodological uncertainty"?

      • Wait, I know, you come up with a method that is so complicated, blends in data from multiple places, and then makes a hypothetical field that you can't actually measure, and the only way to test it is if you know the magic handshake to translate any measurement you have through the same methodology, and that sounds reasonable!

      • Steven Mosher

        Those methods are not common to each other.

        1. NOAA uses EOF
        2. GISS uses RSM
        3. Hadley used CAM
        4. BEST uses kriging

        #1 and #4 both have support in testing with synthetic data.
        #1 and #4 are both widely used across disciplines.
        #1 and #4 are both published on as methods; that is, they
        have both been evaluated AS METHODS.

        #2 and #3 were dreamed up but never tested as methods.

        With respect to your work: untested, undocumented, unpublished,
        unreviewed.

        You are not even #4.02.

      • Steven Mosher: With respect to your work: untested, undocumented, unpublished, unreviewed.

        I did not realize averaging a group of differences was so hard to understand.

      • Steven Mosher

        If you can't see the problem, then you'll never see the problem.

        If you want someone to show you the error, then try to publish.

      • Steve, I would have far more faith in BEST if the monthly 'raw' data was the same as the PDFs of the dated and signed documents of the historical site records.
        As the BEST 'raw' and certified 'raw' datasets are different, I have little faith in the result.

      • Hi realclimate

        This disappears. What could possibly be the reason?

        Cheers

        ‘Using a new measure of coupling strength, this update shows that these climate modes have recently synchronized, with synchronization peaking in the year 2001/02. This synchronization has been followed by an increase in coupling. This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature.’
        http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/abstract

        There are many ways that can and have been used to calculate a maximum residual rate of warming in the order of 0.1 degree C/decade. If you divide the 1950-2010 increase of some 0.65 degrees by 6 decades – you get a little over 0.1 degree C/decade. Even more rationally you could divide the increase between 1994 and 1998 – and assume the cooler and warmer regimes net out – by the elapsed time and you get 0.07 degrees C/decade.

        Here’s one using models – http://www.pnas.org/content/106/38/16120/F3.expansion.html

        Here’s one from realclimate – http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        Here's one subtracting ENSO – http://watertechbyrie.files.wordpress.com/2014/06/ensosubtractedfromtemperaturetrend.gif

        And of course there is Tung and Zhou.

        The question has been and will be asked – especially if the 'hiatus' persists for another decade or two (as seems more likely than not) – just how serious is this? The question is moot – climate is a kaleidoscope. Shake it up and a new and unpredictable pattern – bearing an ineluctable risk of climate instability – spontaneously emerges. Climate is wild, as Wally has said.

        Which if you think about it means that the late 20th century cooler and warmer regimes are overwhelmingly unlikely to net out. Indeed a long term ENSO proxy based on Law Dome ice core salt content suggests a 1000 year peak in El Nino frequency and intensity in the 20th century.

        http://watertechbyrie.files.wordpress.com/2014/06/vance2012-antartica-law-dome-ice-core-salt-content.png

        http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00003.1

        However – let’s put that in the imponderables basket for the time being.

        The answer to the moot question – btw – is 12 phenomenal ways to save the world – http://watertechbyrie.com/

        Looks like my experiment at realclimate is done and dusted.

      • Looks like I spoke too soon. Apologies to realclimate.

  37. Chris Colose wrote:
    “If she [Judith] would consult an attribution expert this would be much less embarrassing and less of a waste of time for everyone involved, not to mention more informative for her readers.”
    Now you have really stepped in it, Chris. If there is anything I have expertise in after working in the field for 30+ years, it is attribution. We call it root cause analysis, and yes, there are formal and well-documented approaches. Contrary to your comment, it is the claims that Gavin has made in his posts about attribution going back to 2010 that don't look like any form of established attribution approach that I have seen. Gavin's statement that attribution – which only deals with phenomena in the past or present – is necessarily "model based" is exactly the sort of revelatory statement that means that he spent exactly zero time trying to benchmark his attribution approach against the accumulated wisdom of the experts. Instead, the approach appears to be cut from whole cloth. While Judith has not clearly demonstrated that she is particularly well grounded in attribution approaches either, I would judge that she hews much closer to established protocols. Of course you won't believe a word that I say (your respect for expertise will have to be set aside or my expertise will have to be disparaged, I am sure), but here is a point to ponder: in any attribution study, there will be one correct root cause and a long list of incorrect root causes. The vast majority of the work will be in refutation, not affirmation. How much of AGW-based climate science is direct refutation of alternatives as opposed to affirmation of GHG forcings? That is a telltale sign right there.

    • Matt thank you for this. Email me if you would be interested in providing a guest post on this topic.

    • […] here is a point to ponder: in any attribution study, there will be one correct root cause and a long list of incorrect root causes.

      Why does there have to be a root cause? In a hyper-complex non-linear system? What is the “root cause” of mitosis?

    • there will be one correct root cause and a long list of incorrect root causes. This sounds like Occam's razor.

      Every Major Climate Theory, or Most Every Climate Theory uses Ice Extent as a Major and Necessary Feedback.

      Consider that the Polar Ice Cycles are the correct root cause.
      http://popesclimatetheory.com/page56.html

    • nottawa rafter

      Matt
      I did not realize there was so much to the process of attribution. Such a post as Judith mentions would be very much appreciated. It may help with people talking past each other.
      I went into this discussion thinking it was fairly straightforward. The nuances are getting things murky.

    • //”How much of AGW-based climate science is direct refutation of alternatives as opposed to affirmation of GHG forcings? That is a telltale sign right there.”//

      Quite a bit. See my comment to Matthew for starters.

    • Matthew R Marler

      Matt Skaggs: in any attribution study, there will be one correct root cause and a long list of incorrect root causes. The vast majority of the work will be in refutation, not affirmation.

      A problem that arises in the context of attributing any effect to CO2 is that since the end of the Little Ice Age, a natural warming has possibly increased the temperature monotonically, anthropogenic CO2 has increased monotonically, and deforestation and urbanization have increased monotonically. Estimating any one of those therefore requires information on the other two. There does not seem to be enough information to refute or bound the effect of any of them.

      It may be possible in the future, if solar activity as measured continues to decline and if global mean temperature declines while CO2 continues to increase.

    • David L. Hagen

      Matt Skaggs
      Re: ” in any attribution study, there will be one correct root cause and a long list of incorrect root causes.”
      What if you have multiple, independently varying, non-linear, coupled, possibly chaotic causes?
      E.g. anthropogenic fossil fuel combustion and solar/cosmic ray variation / earth precession impacting insolation and clouds, and consequently land and ocean circulation and temperature, driving temperature-dependent microbial decay driving CO2 emissions, and CO2/temperature-driven biomass growth?
      e.g., see Pehr Bjornbom’s article & comments on Salby at:
      A comparison of Gösta Pettersson’s carbon cycle model with observations
      or papers by Beenstock et al. and responses on co-integration etc.?

      PS Curry may address some of these in her Salby post.

      • Steven Mosher

        Beenstock is wrong.
        bad choice of an unphysical model.

      • David L. Hagen

        Steven Mosher
        Re: ” Beenstock is wrong”
        Spoken from authority!
        Any evidence and references?
        Have you addressed the issues raised by Beenstock et al in their supplementary:
        Reply to Hendry and Pretis: Michael Beenstock, Yaniv Reingewertz, Nathan Paldor
        Or
        Hendry, D. F., & Pretis, F. (2013a). Anthropogenic influences on atmospheric co2.

        How about the diffusion in ice core etc as identified by Murry Salby and Pehr Bjornbom?
        How about accounting for the 1/2 of biomass that grows in the ocean?
        How about the lag in ocean temperatures after solar cycle forcing?

        Consequence: Arguments “Unproven”!

      • David L. Hagen

        Steve Mosher
        Note further:
        James McCown, Problems with Statistical Tests of Pre-1958 Atmospheric CO2 Concentration Data, August 29, 2014

        A cointegration test can only be valid if the data series have a high degree of temporal accuracy and are matched up properly. The temperature data likely have good temporal accuracy but the pre- 1958 Etheridge CO2 concentration data, from which part of the radiative forcing data are derived, are 20 year or greater moving averages, of unknown length and distribution. They cannot be properly tested for cointegration with annual temperature data without achieving spurious results. . . .
        The other papers cited in this essay, except Beenstock et al (2012), come to similar conclusions. Due to the low level of temporal accuracy of the CO2 data pre-1958, their results for that period cannot be valid. . . .
        Unless and until a source of pre-1958 CO2 concentration data is found that has better temporal accuracy, there is no point in conducting cointegration tests with temperature data for that period.

  38. If Judith is interested in why her arguments are not convincing to others, perhaps this can give her some clues.

    Judith’s arguments are not convincing to others because once people make up their minds about anything, it is really difficult for them to change.

    Most people are not skeptical enough about their own thoughts and theories. I struggle with this but I do make an effort to read and consider opinions that I do not agree with. I do like to look at actual data and how it matches with various theories. There are not two sides, there are many theories.

    Show data, explain your theory, Show data, explain your theory. Repeat. Repeat. Repeat.

    Listen to people you disagree with. Look at the data they present. Think about it. Repeat. Repeat. Repeat.

    I have been involved in processes that required people to change their mind on important issues. It can take many multiple sessions and it works better if there is supporting data. It should not work if there is no supporting data.

    Read any Theory. Look at Supporting data. Think! Repeat. Repeat.

    Consensus Climate Theory has worked without supporting data.
    That cannot continue. Actual Data will Make or Break their CO2 Alarmist Theory and, so far, Mother Nature is not helping them.

    CO2 level changes and Temperature and Sea Level changes have some time periods with correlation but there is much more time with no correlation. CO2 levels do follow Ocean Temperature, when it can, but it never leads and more of the data shows them to be not connected.

    Read my Climate Theory and tell me what you think.
    http://popesclimatetheory.com/page56.html

  39. John Smith (it's my real name)

    Dr. JC
    Must very humbly disagree with Tonyb. Don't waste your time responding further to Gavin.
    This 50-50 argument gets us nowhere.
    Suppose it's 70-30 or 30-70?
    There's lots of people on the planet.
    Our existence affects the biosphere.
    If we all commit suicide our rotting corpses will alter the environment.
    Enjoy your trip. Be happy.
    (Doesn't mean I didn't enjoy and learn from the "50-50" post, or that I think we shouldn't be cautious about CO2 emissions.)
    I read Climate Etc. (and SkS) almost every day because I hate sports.
    I really appreciate you and ALL the people who comment.
    … oh, and stay strong, heretics are my favorite people :)

    • “If we all commit suicide our rotting corpses will alter the environment.”

      This is a useless generalization. How about something more scientific?

      Andrew

    • John

      I assume you mean over responding to Gavin’s post Point by point?

      I agree that she has other things to do at present that are more pressing, but it will remain unfinished business and lead some to claim that she was unable to respond to Gavin as she knew she was wrong. So at some point I think the delayed battle must be resumed.

      Tonyb

      • My post 50-50 has 13,000 hits so far (which is somewhat above avg for my recent posts). My current thinking is that this exchange with Gavin is a tempest in a small teapot. At this point today: 99 hits from RC, with 300 from WUWT (a link from a post by Tisdale), twitter hits up to 172. Time is better spent on rethinking attribution than responding to Gavin's points.

      • Dr Curry – Seems to me that you aren’t able to respond to Gavin’s points. He states science backed up with evidence, you talk about ‘belief’ with no evidential support.

        I’m not surprised that you’re backing off from responding.

      • My point is the underlying logic of the IPCC/Gavin’s approach is wrong, that is my argument. I argue the climate models aren’t fit for this purpose; if they aren’t then game over.

      • Having read Schmidt’s post, I think it would be an error to try to decode his tortured ‘logic’ point by point.

        It is a problem that comes up all too often in responding to the legal briefs of an opponent who 1) doesn't understand your arguments, and 2) sucks at logic. It takes at least three times as long to explain how the other guy is mistaken, and then explain the logical flaws in his critique. At some point you realize it is better to just write "Opposing counsel does not understand either the law or the facts," and then just restate your original position.

        I thought about trying to deconstruct the first paragraph of Schmidt’s critique:

        ” This is not a good start. The statements that ended up in the IPCC SPMs are descriptions of what was found in the main chapters and in the papers they were assessing, not questions that were independently thought about and then answered. Thus while this dichotomy might represent Judith’s problem right now, it has nothing to do with what IPCC concluded….”

        I think I understand Dr. Curry’s argument on this point; Schmidt’s response; the error in his understanding of Dr. Curry’s position; and the flaws in his argument. But it would take a mini-dissertation to unravel it all and explain it coherently. So I will spare both myself, and anyone unlucky enough to consider reading such a thing, the effort.

        This is precisely why political activist scientists use terms like 'most' and 'more than half.' Deconstructing these simplistic terms requires discussion at length (see e.g. this comment) which lulls readers, and often the writer, to sleep.

        An easier example – Calling someone a racist takes three words. “He’s a racist.” Refuting that charge requires a complete personal history. Which is one of the reasons propaganda works so well. Not to mention that responding to your opponent on such charges distracts you from arguing your own positions.

        My advice, worth exactly what is paid for it here – ignore this confused harangue, unless it later appears it is gaining traction among people whose opinions matter.

      • Garym

        I don't disagree with your logic but the perception will be that Judith did not refute Gavin's post. This will eventually be seen as being that Judith was unable to refute Gavin's post and the story will grow and deepen with those that don't realise Gavin's basic flaws in his understanding of Judith's original piece.

        History is full of examples of unfinished business that would have been better taken to a conclusion.

        Tonyb

      • Tonyb,

        “This will eventually be seen as being that Judith was unable to refute Gavin's post.”

        The question is, ‘seen’ by whom?

        Consensus advocates and drones alike already see Dr. Curry as an apostate. Schmidt in particular will not be convinced, for whatever reasons.

        Readers here, at WUWT and other skeptical and lukewarmer sites can read Dr. Curry’s posts, Schmidt’s rebuttal, and decide for themselves.

        If Schmidt’s critique had any merit, it would be worth the time and effort to rebut. I just read through his post and saw it would be exhaustive to unravel each of his misperceptions and correct each of his errors.

        Not to mention, it will never address the underlying dispute. He and Dr. Curry look at the exact same data, the exact same arguments, and come to different positions. Dr. Curry understands Schmidt’s positions because she understands the subjective nature of much of the science. She has abandoned the tribal belief that there is only one way to view data.

        Schmidt has not. Which is why I think he really does not understand her points. I am reminded again of their exchanges at Kloor’s blog some years ago. At the end, Schmidt seemed genuinely befuddled that Dr. Curry could look at the same data, papers and analyses, and come to a different conclusion.

        Years later, lots of new data, increasing divergence between models and observations, and nothing has changed.

      • Garym

        I guess they speak completely different languages on planet climate model and planet earth.
        Tonyb

      • me –

        ==> “Dr Curry – Seems to me that you aren’t able to respond to Gavin’s points.”

        I suggest leaving that kind of logic for the “skeptics.”

        It’s weak and inherently tribal. I would say all the evidence is that Judith is quite capable of responding, with full confidence that her rebuttal would prove her analysis completely superior to Gavin’s. Such is the way of the climate wars in the blogosphere.

        The reason I asked for a more detailed response is so that I can read the reactions of the relatively few, non-overtly aligned and technically competent participants, such as Pekka – or even try to parse the reactions of the overtly aligned so as to see some way clear beyond the tribalism.

        Of course, when you get lame, ducking responses like the one GaryM offered as yet another thinly veiled politicization of climate science, then I can see the logic behind your reasoning. But Judith hasn’t said that she can’t respond for some bogus reason like the one GaryM offered.

        She’s said that it is low on her priority list. That does seem a bit odd, since she has already taken quite a bit of time to engage in this specific debate, read Gavin’s response three times, and follow the comments on this thread (to the point of finding out how many hits it has, etc.). It would seem that up until the point of her vague response to Gavin’s point-by-point, she thought it was of a fairly high priority. That might leave it open to speculation why the sudden change in her priorities – but who knows? The human mind works in mysterious ways.

        But if you’ve observed the climate wars in the past, and in particular Judith’s participation in the past, it should be obvious that as I said, she is quite able to respond and to be absolutely convinced that she’s right. She’s even capable of just flat out disappearing the uncertainty monster when it serves her to do so.

      • Steven Mosher

        Tony

        The lines are pretty well drawn. There isn’t much more to say.
        See my comment on how a person like Judith would think about this
        problem.

        It’s entirely flipped from Gavin’s approach.

        And this

        “My point is the underlying logic of the IPCC/Gavin’s approach is wrong, that is my argument. I argue the climate models aren’t fit for this purpose; if they aren’t then game over.”

        As I note, you either start by assuming that GCMs can model natural variability (Gavin’s approach),
        OR
        you start by testing whether they can.

        If you do the latter, you never get to Gavin’s approach.
        If you use Gavin’s approach, you just argue about the untested assumption.

        In this case the GUY WITH THE GCM WINS.

        He’s not right, but he wins because he controls the tool used to ask the question. Give somebody else control of that tool and they will
        show you that the tool is unfit.

        This question won’t get anywhere until science starts asking the natural
        variability question FIRST.

        Hence, Judith’s response should be:

        “Hey Gavin, let’s do a test together using my approach.”

        He won’t agree.

      • It is interesting how “skeptics,” some of whom so often complain that part of the problem is that “realist” climate scientists won’t engage in debates with “skeptics,” suddenly support Judith’s failure (thus far) to engage in detailed debate with Gavin on this topic.

        Let’s save up this list of justifications, and play them back in the future, shall we?

      • He won’t agree

        He’s got to. Politically. If he doesn’t agree, he, and his whole paradigm, can be deprecated due to his refusal.

        Most likely, IMO, he’ll agree but arrange so many roadblocks that it never happens.

      • Come to think of it, why doesn’t BEST buy itself a GCM?

      • For most of them you can find the code online, plus there’s a student GCM that comes all ready to run for about $150.

      • ‘Using a new measure of coupling strength, this update shows that these climate modes have recently synchronized, with synchronization peaking in the year 2001/02. This synchronization has been followed by an increase in coupling. This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature.’ https://pantherfile.uwm.edu/kswanson/www/publications/2008GL037022_all.pdf

        I find it difficult to imagine that the talking past each other by alarmists and realists has much more than obfuscation at its roots.

        There are many ways to calculate a maximum residual rate of warming on the order of 0.1 degree C/decade. Even webbly gets that number. If you divide the 1950-2010 increase of some 0.65 degrees by 6 decades, you get a little over 0.1 degree C/decade.

        Even more rationally, you could divide the increase between 1944 and 1998 – assuming the cooler and warmer regimes net out – by the elapsed time, and you get 0.07 degrees C/decade.
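        A quick sketch of that arithmetic, using the 0.65 C figure above and the HadCRUT4 anomalies quoted further down this thread (0.150 C in 1944, 0.531 C in 1998):

        ```python
        # Back-of-the-envelope residual warming rates from the figures quoted above.
        rate_1950_2010 = 0.65 / 6.0                 # 0.65 C over six decades
        rate_1944_1998 = (0.531 - 0.150) / 5.4      # HadCRUT4 anomaly difference over 5.4 decades

        print(f"1950-2010: {rate_1950_2010:.2f} C/decade")   # ~0.11
        print(f"1944-1998: {rate_1944_1998:.2f} C/decade")   # ~0.07
        ```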

        The question has been and will be asked – especially if the ‘hiatus’ persists for another decade or two (as seems more likely than not) – just how serious this is. The question is moot – climate is a kaleidoscope. Shake it up and a new and unpredictable pattern – bearing an ineluctable risk of climate instability – spontaneously emerges. Climate is wild, as Wally has said.

        Which, if you think about it, means that the late 20th century cooler and warmer regimes are overwhelmingly unlikely to net out. Indeed, a long-term ENSO proxy based on Law Dome ice core salt content suggests a 1000-year peak in El Nino frequency and intensity in the 20th century.

        http://watertechbyrie.files.wordpress.com/2014/06/vance2012-antartica-law-dome-ice-core-salt-content.png

        http://connection.ebscohost.com/c/articles/85340584/millennial-proxy-record-enso-eastern-australian-rainfall-from-law-dome-ice-core-east-Antarctica

        However – let’s put that in the imponderables basket for the time being.

        The answer to the moot question is 12 phenomenal (benefit/cost ratio >15) ways to save the world.

        ‘In a world of limited resources, we can’t do everything, so which goals should we prioritize? The Copenhagen Consensus Center provides information on which targets will do the most social good (measured in dollars, but also incorporating e.g. welfare, health and environmental protection), relative to their costs.’ Copenhagen Consensus

        1. Achieve full and productive employment for all, reduce barriers to productive employment for all including women and young people.

        2. Reduce by 50% or more malnutrition in all its forms, notably stunting and wasting in children under five years of age.

        3. By 2030 end the epidemics of HIV/AIDS, tuberculosis, malaria and neglected tropical diseases; reverse the spread of, and significantly reduce deaths from, tuberculosis and malaria.

        4. Achieve universal health coverage (UHC), including financial risk protection, with particular attention to the most marginalized, assuming a gradual increase in coverage over time, focusing first on diseases where interventions have high benefits-to-costs.

        5. Ensure universal access to comprehensive sexual and reproductive health for all, including modern methods of family planning.

        6. By 2030 ensure universal access to, and completion of, quality pre-primary education.

        7. By 2030 ensure equal access to education at all levels.

        8. By 2030 ensure increased access to sustainable modern energy services.

        9. By 2030 phase out fossil fuel subsidies that encourage wasteful consumption

        10. Build resilience and adaptive capacity to climate induced hazards in all vulnerable countries.

        11. Promote open, rules-based, non-discriminatory and equitable multilateral trading and financial systems, including complying with the agricultural mandate of the WTO Doha Round.

        12. Improve market access for agricultural and industrial exports of developing countries, especially Least Developed Countries, and at least double the share of LDCs’ exports in global exports by 2020.

        This is a site I have just started, and am slowly adding to: http://watertechbyrie.com/

        Some of these are straightforward. Phasing out fossil fuel subsidies, for instance, is on the G20 agenda. Others have inherent opportunities to reduce population pressures, encourage energy innovation, manage soil carbon, restore ecosystems, and reduce black carbon, methane, tropospheric ozone, nitrous oxide and sulphides. What the world needs is a coherent and pragmatic platform to move forward on.

      • @Rob Ellison

        Wow, what a great list! Where is this from? Is that just your response to the Copenhagen consensus comment?

      • It comes from the Copenhagen Consensus analysis of the post 2015 Millennium Development Draft Goals. All I have done is pull out all of the ‘phenomenal’ goals from the various sections.

  40. The direction of science is determined primarily by human creative imagination and not by the universe of facts which surrounds us. Creative imagination is likely to find corroborating novel evidence even for the most ‘absurd’ programme, if the search has sufficient drive. This look-out for new confirming evidence is perfectly permissible. Scientists dream up phantasies and then pursue a highly selective hunt for new facts which fit these phantasies. This process may be described as ‘science creating its own universe’ (as long as one remembers that ‘creating’ here is used in a provocative, idiosyncratic sense). A brilliant school of scholars (backed by a rich society to finance a few well-planned tests) might succeed in pushing any fantastic programme ahead, or, alternatively, if so inclined, in overthrowing any arbitrarily chosen pillar of ‘established knowledge’.

    Lakatos The Methodology of Scientific Research Programmes (1978, pp. 99-100)

  41. I think it’s cute that Schmidt has adopted Steve McIntyre’s “watch the pea” idiom.

  42. I just read Revkins post at NYTimes and really liked it, both what he wrote himself and what the scientists said in the excerpts.

    I also liked the points that Judith protested in her post. The research on AGW is ultimately interested in the situations where the temperature has risen close to 2 degrees or more, or in the risk that it will rise significantly more than 2 degrees. Internal variability does not change that picture very much.

    The lack of understanding of internal variability is certainly a problem in understanding AGW, but it is less of a problem there than for understanding the internal variability itself. It is possible to present meaningful estimates of climate sensitivity even when there are major gaps in the understanding of internal variability. The estimates are less accurate, but they are still meaningful (and they do not credibly extend lower than half of the warming over the years 1950-2010). Furthermore, it is not necessary to have reliable estimates of the climate sensitivity; it is enough to have moderate evidence to support the expectation of significant warming.

    I do agree with Judith that improving the understanding further now requires that the natural processes of the Earth system be studied as the main emphasis. It is hardly possible any more to make the estimates of AGW more accurate in any other way. Research on AGW is not an independent field of science; it is just one application of the basic Earth sciences. This application has surely been studied so much that further progress comes from improving the understanding of the basic Earth sciences, not from trying to extract more from the old basis.

    • Very insightful comment, Pekka. Thanks.

    • I agree. Thanks for the thoughtful comments.

    • Three times in the last century and a half temperature has risen at the same rate, and only in the last of these was CO2 also rising. So where does Pekka get ‘do not credibly extend lower than half of warming over years 1950-2010’? Bah. Humbug.
      ==================

    • It gets worse: ‘it’s not necessary to have reliable estimates of climate sensitivity’.

      Pekka, the words are usually golden, the thoughts too often hollow and frightened.
      ============

    • What’s not necessary is policy action based on fear. Warmer is better and you can recognize it in the past. So recognize and consign your fears of a fretful future back into the box Pandora left under your bed.
      ============

    • So yes, more Earth Science. Please try to recruit curious world observers rather than activist world changers.
      ==========================

  43. John Smith (it's my real name)

    Andrew…
    I thought it obvious…I’m not a scientist.
    However…
    My guess is that the climatic models often fail to accurately predict because the math is not there.
    The math is not there, because the phenomenon is not fully understood.
    Therefore, applying a numerical value to “how much heat increase comes from human causes,” might be a poot on the wind.
    Sciencey enough for ‘ya?

    We should keep trying though.

    • nottawa rafter

      “..the phenomenon is not fully understood.”
      This cannot be said enough. Some get it. Others refuse to get it since it blows up their nice simplistic world view. It may dawn on them eventually and then they will have caught up with the skeptics. The next step then will be to begin true scientific inquiry, which should have begun 25 years ago. Look at how many comments in the last few posts (50/50) begin and end with the mindset that the science is settled.

    • John, whether or not the underlying climate model ‘math’ is there, the models are not fit for purpose for the following basic reason. The most important feedbacks are water vapor and clouds. Both relate directly to convection cells, especially in the tropics. Those move latent heat of evaporation up into the troposphere (e.g. in thunderstorm cells), and also result in cloud albedo and precipitation that washes out humidity. Those are precisely the three things CMIP5 models don’t do well: upper troposphere humidity, clouds, precipitation. These convection cells are at least one order of magnitude smaller than the smallest computable grid cell (presently 1.1 degrees, or about 120 km; typical is 2.5 degrees). This cannot be fixed, because as the cells get smaller, so must the time increments, so the computational cost roughly squares. No foreseeable supercomputing advances can solve the intrinsic computational power problem. This is recognized by IPCC AR5 in WG1 7.2.
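      To put rough numbers on the resolution argument, here is a minimal sketch assuming cost scales as (number of horizontal cells) x (number of time steps), with the time step shrinking in proportion to the grid spacing (a CFL-type limit); the exact exponent depends on vertical resolution and solver details, so treat these as order-of-magnitude figures only.

      ```python
      # Rough cost of refining a GCM grid, purely illustrative.
      # Assumption: cost ~ (horizontal cell count) x (time-step count),
      # with the time step shrinking in proportion to the grid spacing.
      def relative_cost(dx_old_km, dx_new_km):
          refinement = dx_old_km / dx_new_km
          return refinement ** 2 * refinement   # 2-D cell refinement times shorter time steps

      print(f"{relative_cost(120, 10):,.0f}x")   # ~120 km -> ~10 km grid: ~1,700x
      print(f"{relative_cost(120, 1):,.0f}x")    # ~120 km -> ~1 km (convection-resolving): ~1,700,000x
      ```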
      So the models must be parameterized with respect to these fundamental microprocesses. Those parameter sets are fine-tuned by hindcasting. For CMIP5 the period is roughly 1975 to exactly 2006 (about 30 years), as specified by the CMIP5 Experimental Design document (available online). So the tuning was mainly over the period when GAST rose along with CO2. That makes parsing attribution between natural and anthropogenic impossible using primarily models.
      Upthread Mosher has suggested a possible inverted way: tune the model to reproduce reasonably well the period up to 1945 or 1950 (when we know from consumption data that emissions really started to climb). Then run a bunch of post-, say, 1960 simulations to establish the natural variation envelope. Then subtract that; the residual is the sum of all ‘non-natural’ influences, which will be more than just CO2 or even all GHGs – for example land use.
      IMO a waste of time for two reasons. One, models are still fundamentally not computationally up to the task. Two, there is insufficient quality/coverage of temperature data even for 1910 to 1945, the last (IPCC agrees) mostly natural rise, to do competent tuning. And if there really is a sixty-something-year cycle, one would have to go back another 60 years at least to have any confidence in the result, a period for which there is almost no usable data.

      Which is in part why I found Gavin’s comments unintelligible as well as not even addressing Judith’s basic point. And that is being polite.

      • The computers, massive magnificent modern megamachines, produce illusions.
        ======================

    • The math is simple. You have a 25 year old man who has an ‘equilibrium weight’ of 170 pounds and a forcing of 1,800 calories per day. If you increase the calorific forcing by 18 calories a day, by putting two sugar lumps in his morning coffee, he will convert this excess forcing into mass, until the extra mass causes him to burn the excess forcing.
      The ‘equilibrium weight’ gain, from 8 grams of sugar per day, can be calculated from ((1,818/1,800)*170)-170 or 1.7 pounds.
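      For what it’s worth, the analogy’s arithmetic as stated does come out to 1.7 pounds (whether the analogy itself is apt is a separate question); a minimal check:

      ```python
      # Check of the weight analogy above: equilibrium scales with the forcing ratio.
      baseline_calories = 1800
      extra_calories = 18        # two sugar lumps, ~8 g of sugar
      weight = 170               # pounds at equilibrium

      gain = weight * (baseline_calories + extra_calories) / baseline_calories - weight
      print(f"{gain:.1f} lb")    # 1.7 lb
      ```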

  44. It is long past time for everyone to recognize that GCMs are inherently of no value for predicting future temperature with any calculable certainty, because of the difficulty of specifying the initial conditions of a sufficiently fine-grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4
    Models are often tuned by running them backwards against several decades of observation; this is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially in the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.
    The GCM outputs do not provide any basis for reasonable discussion of the climate problem. For alternatives see my earlier posts on this thread at 1:16 and 3:09

    • In reply to Dr Norman Page, who wrote:

      The GCM outputs do not provide any basis for reasonable discussion of the climate problem.

      It’s worse than this: the temperature series have all been infected by trying to convert the measurements into a “Model” of average surface temps.
      GCMs are tuned to GATs that don’t represent surface temps, and Gavin struggles with attribution!

  45. Pingback: What does Judith mean by natural? | …and Then There's Physics

  46. “I would appreciate some discussion that points out anything significant in Gavin’s post that refutes my arguments. ”

    Gavin’s arguments:
    1. Arguing against the 50/50 discussion.
    Gavin’s/the IPCC’s PDF is centered at 110%; 50/50 would result in a PDF with a peak at 50%.

    Gavin’s argument boils down to: the IPCC PDF is a gift from God; it precludes the influence of CO2 being less than 33% and gives only a 0.4% chance that it is less than 66%.
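    As a purely illustrative check of those numbers: a normal PDF for the anthropogenic fraction, centred at 110% with a standard deviation of roughly 17 percentage points (an assumed spread, chosen only so the quoted tail probabilities come out about right), reproduces the figures above.

    ```python
    # Toy version of the attribution PDF argument; the 17-point spread is assumed.
    from scipy.stats import norm

    pdf = norm(loc=110, scale=17)            # anthropogenic fraction, in percent
    print(f"P(< 66%): {pdf.cdf(66):.2%}")    # ~0.5%
    print(f"P(< 50%): {pdf.cdf(50):.3%}")    # ~0.02%
    print(f"P(< 33%): {pdf.cdf(33):.5%}")    # effectively zero
    ```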

    2. Arguing against natural influence.
    “It is worth pointing out that there can be no assumption that natural contributions must be positive – indeed for any random time period of any length, one would expect natural contributions to be cooling half the time.”

    His referenced post on the IPCC attribution chart indicates the IPCC considers the net natural influence on the period as 0.00°C.

    A dubious claim.
    http://agwobserver.wordpress.com/2013/08/29/papers-on-early-20th-century-warming/
    A number of papers claim the 1920-1940 “natural” warming was similar to the late 20th century warming.

    3. Arguing against non-CO2 attribution:
    “Is expert judgement about the structural uncertainties in a statistical procedure associated with various assumptions that need to be made different from ‘making things up’? Actually, yes – it is. ”

    The IPCC chart shows that the non-CO2 anthropogenic influences have a -0.2°C (cooling) influence.

    His claim boils down to that the IPCC’s SWAG (Scientific Wild-assed Guess) is better than yours.

    http://www.news.gatech.edu/2009/11/09/reducing-greenhouse-gases-may-not-be-enough-slow-climate-change
    Given that Georgia Tech believes land use changes caused 50% of US warming, this claim is dubious.

    4. Arguing against detection:
    “This is also confused. “Detection” is (like attribution) a model-based exercise”

    Gavin apparently believes models trump real world measurement and analysis.

    5. Arguing against the model vs real world comparison
    “Here Judith is (I think) referring to the mismatch between the ensemble mean (red) and the observations (black) in that period… However, the observations are well within the spread of the models and so could easily be within the range of the forced trend + simulated internal variability.”

    His position: although the ensemble does not match the observed behavior and is currently soaring off into the stratosphere – if even one of the models is within spitting distance of observed behavior the models aren’t wrong.

    This is the same as saying a shotgun blast that missed the target completely scores a bullseye if a “flyer” (an out-of-pattern shot) hits the center. This claim is difficult to defend.

    The problem in your debate with Gavin is that he appears to believe the real world is wrong and the models are right.

  47. “I do think that ocean variability may have played a role in the lack of warming in the middle of the 20th century, as well as the rapid warming of the 1980s and 1990s.”

    How would it EVER be possible for changes in climate to be unconnected with ocean variability? Because such a notion, however true and obvious, might cause theological error or religious lapse?

    Welcome to Byzantium.

  48. Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

    It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

    Four multi-decadal climate shifts were identified in the last century, coinciding with changes in the surface temperature trajectory: warming from 1909 to the mid-1940s, cooling to the late 1970s, warming to 1998 and declining since. The shifts are punctuated by extreme El Niño Southern Oscillation events. Fluctuations between La Niña and El Niño peak at these times and climate then settles into a damped oscillation, until the next critical climate threshold – due perhaps in a decade or two if the recent past is any indication.

    So the fundamental question is why 1950 and not the inflection point in global surface temperature at 1944. It makes a difference – and the difference is ENSO.

    http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/update_diagnostics/global_n+s.gif

    HadCRUT4
    Year Anomaly
    1944 0.150
    1950 -0.174

    The difference is almost exactly 50% of the ‘observation’. 1944 would seem justifiable on theoretical grounds if we are truly trying to net out warmer and cooler multidecadal regimes – and it makes a difference.

  49. A scientific ‘experiment’ has been run on the entire planet for more than 400 years. Average global temperature has been recorded for 163 years. A physics-based equation with just two drivers was calibrated to the last 118 years. The equation calculates average global temperatures with 95% correlation, demonstrating that the equation is valid. The equation hind-casts credibly to 1610 and predicts that the temperature trend is down.

    There is no significant difference whether CO2 change is considered or not.

    The two natural drivers, method, equation, data sources, history (hind cast to 1610) and predictions (to 2037) are provided at http://agwunveiled.blogspot.com and references.

    • Worth a thousand words, but takes 21-23 syllables. NB: error bars enlarge past 20 from lack of digits.

      Every day in every
      Way, things are looking
      Samer and samer.
      ============

  50. Alexander Biggs

    “a) Warming since 1950 is predominantly (more than 50%) caused by humans.

    b) Warming since 1950 is predominantly caused by natural processes.”

    We are asked by Gavin to choose one or the other, as if they were mutually exclusive. But they are not. It all depends on the definition of ‘natural processes’. If we solve the differential equations governing heat transfer between atmosphere and oceans and find that heat transfer does in fact occur, in both directions, then we can conclude that the above choices are not mutually exclusive. Indeed, the same heat can shuttle between atmosphere and oceans. Between 1910 and 1940 global atmospheric temperature rose by 0.5 C (probably due to increased CO2 concentration); we should not be surprised when this heat reappears 40 years later in the oceans. Now I don’t call that ‘natural heat’, do you?

    • Finding warmth in all the wrong places.

    • It was Judith who said “Pick one”, and Gavin who said this was not a good start, so I think he was agreeing with your reaction.

    • I really don’t see what is so complicated about Dr. Curry’s statement. I don’t see her as positing a novel concept, or even asking a question. I think she was describing the two primary views of the consensus and skeptic sides of the debate, and her reaction to being told to choose.

      The consensus firmly believes “warming since 1950 is predominantly (more than 50%) caused by humans,” and expressly says so. Skeptics disagree, which by necessary implication means they (we) believe that “warming since 1950 is predominantly caused by natural processes” is far more likely to be true.

      As a lukewarmer, Dr. Curry is asked (to put it nicely) to adopt a) by consensus advocates and b) by skeptics. Which is I suspect why the first sentence after she poses those two propositions begins “When faced with a choice between a) and b)….”

      She is not posing the question. It is the question she faces every day on this blog. Schmidt is not posing the question; he sees no question. To him there is only the revealed truth of a). Dr. Curry is merely explaining her reaction to the choice posed by others.

  51. catweazle666

    So, does this mean the science ISN’T settled?

    Heh, you learn something new every day!

  52. Judith says “My take is that external forcing explains general variations on very long time scales, and equilibrium differences in planetary climates of relevance to comparative planetology.”
    It is not the time scale that matters, but the change in forcing, however quickly that occurs. The deltaF now is ten times the size of a sunspot cycle’s deltaF and also several times larger than the Maunder Minimum negative change (both with measurable temperature effects). Expected deltaF’s of 5+ W/m2 compare with those that distinguish widely varying paleoclimate periods, such as those with or without polar ice caps and their consequences for sea level. 5 W/m2 is achievable at 700 ppm. It is only the forcing change that matters on long time scales. Natural internal variability can’t do much against that, and is a red herring in the big picture.
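    For what it’s worth, the 700 ppm figure is consistent with the standard simplified CO2 forcing expression, F = 5.35 ln(C/C0) W/m2 (Myhre et al.), taking an assumed 280 ppm pre-industrial baseline; a minimal check:

    ```python
    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        """Simplified CO2 radiative forcing (Myhre et al. 1998), W/m^2."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    print(f"{co2_forcing(700):.1f} W/m^2 at 700 ppm")   # ~4.9 W/m^2
    ```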

  53. Possibly the best cartoon yet. Let’s not let facts and real world observations get in the way of motivated reasoning.

  54. Please leave Australia out of this argument. Dr Jennifer Marohasy has recently exposed fraud by Australia’s Bureau of Meteorology in falsifying climate data over many years. Does that seem familiar?

  55. Here is the link to Jennifers site:
    http://jennifermarohasy.com/

  56. Not an existential threat? If there is a 60 year stair-step cycle of 30 up and 30 flat, and yet they tuned their models only to the 30 up, then they’ve overstated reality by a full 100%! In other words, cut their predictions in 1/2 for the long term.

    Yes, eventually it’s still a problem, but your timeframe for dealing with it is quite a bit longer.

  57. On the question of attribution and detection without models, one idea is to see what AGW would suggest based on a dominant CO2 forcing. It would predict, just from the forcing change, that the warming from 1950 to now should be at least twice all the warming before 1950, and this turns out to be correct. The observations of this long-term temperature curve are consistent with a CO2 attribution fingerprint.
    Judith hasn’t yet suggested any method to attribute climate change, or hypothesis to test when doing so, and perhaps thinks this is impossible (?), but if you look at hypotheses and their predictions, passing rigorous century-long observational tests is at least support for a theory to be used going forwards. CO2 forcing passes such a test. What else would predict a doubling of the warming after 1950?
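    A rough check of the ‘at least twice’ claim from CO2 forcing alone, using the same simplified expression F = 5.35 ln(C2/C1) and illustrative round-number concentrations (roughly 280 ppm pre-industrial, 310 ppm in 1950, 400 ppm now; assumptions for the sketch, not a formal attribution):

    ```python
    import math

    def forcing_change(c1_ppm, c2_ppm):
        """Change in simplified CO2 forcing (Myhre et al. 1998), W/m^2."""
        return 5.35 * math.log(c2_ppm / c1_ppm)

    before_1950 = forcing_change(280, 310)
    since_1950 = forcing_change(310, 400)
    print(f"before 1950: {before_1950:.2f} W/m^2")    # ~0.55
    print(f"since 1950:  {since_1950:.2f} W/m^2")     # ~1.36
    print(f"ratio: {since_1950 / before_1950:.1f}")   # ~2.5
    ```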

    • ‘It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.’

      1950 is 0.324 degrees cooler than 1944.

      http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/update_diagnostics/global_n+s.gif

      Inflection points
      1911 -0.554
      1944 0.150
      1998 0.531

      Ask yourself – do I want to know what the inflection points in the global temperature record are, and why, or do I want cherry pie? Jimbo wants cherry pie.

      • Rob Ellison | August 29, 2014 at 1:44 am |

        Rob, that HadCRUT4 temperature graph is a fraud. They show the eighties and nineties (before 1998) as a rising temperature range when it is demonstrably a no-warming region. HadCRUT3 showed the same fake temperature rise. I made a note of it in my book “What Warming?” but nobody seems to either care or understand that the existence of fake warming is used to influence politicians to pass more urgent anti-pollution measures. As it stands, the fake temperature rise in both HadCRUT versions is plus 0.2 degrees in the time interval from 1979 to early 1997. This is absolutely not a rising temperature region but an 18-year period of no warming, comparable to the current hiatus/pause that has now lasted 17 years.

        And to make it worse, the fake warming continues with the super El Nino of 1998. In the satellite record the two dips on either side of the super El Nino are at the same temperature and look even. In this HadCRUT4 graph the dip on the right side of the super El Nino is a tenth of a degree higher than the one on its left side. The width of the super El Nino peak is two years, which means a rise of 0.05 degrees Celsius per year across this two-year interval. This amounts to a temperature rise at the rate of 5 degrees Celsius per century, compared to the actual temperature rise of 0.7 degrees per century. And the fraud continues into the twenty-first century, where two peaks are shown as higher than the super El Nino, which is entirely impossible.

        From time to time, starting with my book, I have called attention to this fraud, and it is impossible that those responsible for the integrity of temperature records have not heard of it. I call on all those who care about the accuracy of the data we need to use in our work to bring it up at the management level. The only temperature records I find believable today are satellite temperature records. I recommend that only satellite records be used for all temperatures since 1979. I recommend that temperature records from HadCRUT, NCDC, and GISTEMP not be used in any published work, as they share falsified data by collusion. This collusion can be proved by the presence of shared, computer-generated noise peaks in their published temperature records that they did not know of until I pointed them out publicly.

    • Despite ongoing atmospheric CO2 emissions, the “global” temperature has been flatlining for as long a period as your post-1950 warming.

      Which single goalpost will you rotate now, Jimmy old bean?

  58. ‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf

    The data shows quite clearly that changes in cloud radiative forcing in the satellite era exceeded greenhouse gas forcing.
    e.g. – http://watertechbyrie.files.wordpress.com/2014/06/cloud_palleandlaken2013_zps3c92a9fc.png

    It would be an odd sort of feedback that exceeded the forcing. There may be no equivalence at all. Zhu et al (2007) found that cloud formation for ENSO and for global warming have different characteristics and are the result of different physical mechanisms. The change in low cloud cover in the 1997-1998 El Niño came mainly as a decrease in optically thick stratocumulus and stratus cloud. The decrease is negatively correlated to local SST anomalies, especially in the eastern tropical Pacific, and is associated with a change in convective activity. ‘During the 1997–1998 El Niño, observations indicate that the SST increase in the eastern tropical Pacific enhances the atmospheric convection, which shifts the upward motion to further south and breaks down low stratiform clouds, leading to a decrease in low cloud amount in this region. Taking into account the obscuring effects of high cloud, it was found that thick low clouds decreased by more than 20% in the eastern tropical Pacific… In contrast, most increase in low cloud amount due to doubled CO2 simulated by the NCAR and GFDL models occurs in the subtropical subsidence regimes associated with a strong atmospheric stability.’

    At any rate it seems fairly clear that these changes are associated with changes in ocean and atmosphere circulation.

    e.g. http://watertechbyrie.files.wordpress.com/2014/06/loeb2011-fig1.png

    Which you can find in the Loeb et al 2012 paper.

    And that OHC follows these changes in net TOA radiant flux quite closely.

    e.g. https://watertechbyrie.files.wordpress.com/2014/06/wong2006figure7.gif

    Which can be found in http://www.image.ucar.edu/idag/Papers/Wong_ERBEreanalysis.pdf

    But are the oceans warming currently? It depends on the Argo ‘climatology’.

    http://watertechbyrie.files.wordpress.com/2014/06/argograce_leuliette2012_zps9386d419.png

    A steric sea level rise of 0.2 mm +/- 0.8 mm/year? Which can be found at http://www.tos.org/oceanography/archive/24-2_leuliette.html

    Which has the merit of being consistent with CERES net.

    http://watertechbyrie.files.wordpress.com/2014/06/ceres_ebaf-toa_ed2-8_anom_toa_net_flux-all-sky_march-2000toapril-2014.png

    And are sea levels rising? Depends on whether you believe Argo or Jason.

    http://watertechbyrie.files.wordpress.com/2014/06/argosalinity_zpscb75296c-e1409295709715.jpg

    Is it all as simple as the narratives of the climate wars suggest?

  59. Has Gavin now officially given up on his handwave of the 1945-75 cooling period being caused by manmade aerosols? He has a cheek to accuse anyone of just making stuff up!

  60. Geoff Sherrington

    One often sees an assertion like “The earth has been recovering from the cold of the Little Ice Age.”
    For the moment, let us assume that the LIA was real and that there might be 1-2 deg C of warming since then.
    What mechanism is proposed to store the “cold” of the LIA and slowly get “heat” from somewhere to give the recovery? Does one have to invoke a “memory” that strives for an equilibrium? If so, what is the ideal balance?
    Is the energy recovered from the oceans? If so, what drives them to give up energy? Is it the sun? But the insolation has been constant, Leif says, or too tiny for this effect? Is it a change in albedo, perhaps related to ice extent?
    I’m short on imagination.

  61. stevefitzpatrick

    I find Wunsch’s comment almost remarkable in light of the claimed certainty of attribution in AR4 and AR5, and clearly in conflict with that certainty (and Gavin’s!). It is a very awkward (and sometimes humorous) process, but the reality of ‘the pause’ is gradually imposing itself on the mainstream consensus. One may succeed in bullying people with the ‘consensus’ view, but reality will just laugh at your efforts.

    Gavin would be wise to pull in his horns a bit, lest another decade of very little warming, which is a real possibility, make him look quite silly.

    I take issue with Josh Willis. Yes, rising GHGs cause warming. But the ‘how much and when’ questions are crucially important for public policy. It is the mainstream answers to those questions (‘lots and soon’) which are ‘existentially threatened’ by the pause, not the basic concept of GHG-driven warming. What is actually threatened is the credibility of model predictions of dire consequences of rising GHGs and, ultimately, the public funding that depends on that credibility. Josh seems to be whistling past the graveyard a bit.

  62. I was amused by Andrew Dessler’s statement that:

    “In a few years, as we get to understand this more, skeptics will move on (just like they dropped arguments about the hockey stick and about the surface station record) to their next reason not to believe climate science.”

    particularly now, when a few recent blog posts have been much exercised about it …

  63. VTG wrote
    the IPCC said

    “The best estimate of the human induced contribution to warming is similar to the observed warming over this period”

    They mean that there is no room for any natural variation in temperature over that time span of 34 years. None at all.
    They are asserting that, in their opinion, the world temperature would have stayed inviolate, i.e. changed not one iota, in that time frame.
    Now you know that this is not true.

    Over 34 years, without any human influence whatever, there would have to have been some variance, up or down.
    But the IPCC boldly asserts that all the warming could only be due to the human-induced contribution.
    The second statement is obviously wrong. Give it up.

    • angech,

      1) The time from 1950 to 2010 is 60 years, not 34
      2) The model estimate is zero +/-0.1 degree – this is not “changed not one iota”
      3) the attribution statement allows for up to 0.32 degrees change. Even further from “not one iota”

      You appear to have misunderstood what the IPCC position is.

      • VTG, the best estimate is that the human-induced warming in that time period is similar to the observed warming.
        Your argument, and Gavin’s, is that this means 100% of the warming in that time was caused by human activity according to the IPCC.
        If you now wiggle around and say, hold on, it’s not 100%, we do not know, then Judith is right to say that “most” of the warming means 51% or greater, and you should apologise for the rant saying 100% was meant.
        Or you could fix your myopia by thinking about what I said.
        Natural variability, and our ignorance of the complexity of this system, means we can never attribute temperature changes over such a short period to one cause, human CO2 in this case.
        People who do so are pursuing an agenda, as in your case, not science or reason.

  64. Geoff Sherrington :
    .. let us assume that the LIA was real and that there might be 1-2 deg C of warming since then.

    Only instrumental record that goes back to depth of LIA is the CET.
    http://www.vukcevic.talktalk.net/CET-s-w.gif
    It shows that summer temperatures have stayed almost constant during the last 350 years, or to be more accurate rose by roughly 0.1 C/century (from 15.1 C to 15.45 C).
    It shows that winter temperatures rose during the same period by nearly 0.4 C/century (from 3.05 C to 4.35 C).
    It is more than clear that these changes have nothing to do with CO2, and that Gavin Schmidt (and his sorry band of cataclysmates) are advocating utter nonsense.

  65. Steve Fitzpatrick

    Wunsch:
    “If I spend three years analyzing my data, and the only defensible inference is that “the data are inadequate to answer the question,” how do you publish? How do you get your grant renewed? A common answer is to distort the calculation of the uncertainty, or ignore it all together, and proclaim an exciting story that the New York Times will pick up.”

    Ouch! I don’t think Carl is trying to make lots of new friends in climate science.

  66. Global warming? Who cares about puny global warming?

    Skynet became self-aware at 2:14 a.m. eastern time today.

    http://vaviper.blogspot.com/2013/08/august-29th-skynet-becomes-self-aware.html

  67. Animation of warm and cold anomalies 1997 – 2002 per Reynolds v2 SST
    http://bobtisdale.files.wordpress.com/2012/02/north-atlantic.gif

    Note sequence of Nino 3.4 and North Atlantic warm and cold anomalies.

    More:
    Tisdale, Bob. “Animations Discussed in ‘Who Turned on the Heat?’” Scientific. Bob Tisdale – Climate Observations, September 3, 2012. http://bobtisdale.wordpress.com/2012/09/03/animations-discussed-in-who-turned-on-the-heat/

  68. Gavin asks if anyone wants to know why Judith allegedly isn’t convincing anyone.

    We already know, Gav. It’s because most are precommitted to alarm and CAGW, since this is where their grant farming is rooted – carefully aligned with both the ideology and the financial vested interests of their paymasters. Much like yourself and that deeply dishonest mann friend of yours.

  69. Eric Ollivet

    I would like to address the following question / concern to Judith Curry and the Climate Etc community, regarding calculation of SST data.

    Looking at SST data (the HADSST2 time series available at woodfortrees.org, for instance), one can observe that global SST is actually calculated as the straight average of Northern Hemisphere SST and Southern Hemisphere SST:
    SSTg = (SSTnh + SSTsh) / 2

    Yet one should keep in mind that while sea water covers 70% of the globe, the extent of SH ocean is about 33% greater than that of NH ocean.
    Hence the global SST should not be calculated as the straight average of NH & SH SSTs, but as a weighted average accounting for the respective extents of NH and SH ocean:
    SSTg = (3·SSTnh + 4·SSTsh) / 7
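    A minimal sketch of the difference the weighting makes, using made-up hemispheric anomalies purely to show the size of the effect (whether the published products actually use a simple hemispheric mean is the question raised here, not something verified below):

    ```python
    # Simple vs area-weighted hemispheric SST average; anomaly values are hypothetical.
    sst_nh = 0.45   # assumed NH SST anomaly, deg C
    sst_sh = 0.15   # assumed SH SST anomaly, deg C

    simple   = (sst_nh + sst_sh) / 2           # straight average
    weighted = (3 * sst_nh + 4 * sst_sh) / 7   # weights ~ relative ocean areas

    print(f"simple:   {simple:.3f} C")     # 0.300
    print(f"weighted: {weighted:.3f} C")   # 0.279
    ```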

    Implication: as SH SST has shown a cooling trend for 17 years, whereas NH SST shows a warming trend, the global SST trend should be a (slight) cooling, not a flat / slightly warming one.

    Conclusion: global SST data are biased / corrupted.

    Open question: why are SH and NH SST so significantly “diverging” from each other, whereas they were quite comparable before 2003? Was there a modification of the “corrections” applied to the raw data?

  70. This discussion is highly academic. My children plan to buy or build houses. The question is whether the walls should be 36 cm or 3 m thick. Therefore I have tried to estimate the temperature change in the next 60 years and found about 1 K. I think this is acceptable in view of the temperature changes in the last 3000 years. A 30 cm thickness is sufficient.

    • “Don’t buy anything less than a few meters above sea level” would be wise words for them (most of Florida is out).

  71. There is a lot here and I cannot cover it all, so let’s concentrate on Chen and Tung. I was not much impressed when they brought in the heat lost to the ocean bottom as potentially applying to the hiatus. As to oscillations, they mention a La Nina-like pattern in the Pacific, which immediately tells me that they are on the wrong track. You cannot have a La Nina-like pattern without an accompanying El Nino-like pattern, for the simple reason that they are always created in pairs. As a consequence, as much warming as an El Nino brings, a La Nina takes back again, and global temperature does not change. Convince yourself of this and look at the El Nino peaks in the eighties and nineties (before 1998). Use satellite data and remember that local mean temperature is the mid-point between an El Nino peak and its neighboring La Nina valley. There is much confusion in the literature about it even now, despite the fact that I made this situation clear in my book “What Warming?” two years ago. We still get mention of fairy tales like an “El Nino-like climate” that Hansen assigned to the Pliocene.

    My copy of Science was a week late arriving, so I did not get to see the Chen and Tung graphics until Friday. Something very obvious jumps out from these graphics at you. Namely, the deep ocean temperature graphs for the Atlantic and Southern oceans show increased warming at a depth of around 1500 meters beginning roughly around the year 2000. There is also a lesser amount of it in the seventies and eighties. At the same time, both the Indian and the Pacific oceans are affected more uniformly and much less. It is pretty obvious that heating at the 1500 meter depth can have nothing to do with global atmospheric temperature. This being the case, it is clear that we are dealing with changes in the thermohaline circulation. We know where it starts: in the Arctic Ocean, where warm water brought there by currents cools, sinks, and flows south along the bottom until it reaches West Antarctica. That is a re-staging area from where it turns east (still along the bottom) and continues until it reaches the northern part of the Pacific, where it eventually surfaces. This explains why the Southern Ocean is involved.

    But where does that heat come from if it is not Trenberth’s lost heat? It is actually simple. I proved in my 2011 paper (E&E 22(8):1069-1083) that Arctic warming, currently still under way, is caused by warm Gulf Stream water carried into the Arctic by ocean currents. Direct temperature measurements near Svalbard have shown that the water reaching there by 2010 was likely warmer than at any time within the last 2000 years. This warming started suddenly at the turn of the twentieth century, prior to which there was nothing there except two thousand years of slow, linear cooling. It was caused by a rearrangement of the North Atlantic current system at the turn of the twentieth century, which started carrying warm Gulf Stream water into the Arctic Ocean. That is the source of this heat. Greenhouse warming is ruled out as a cause because there was no increase of atmospheric carbon dioxide when the warming started. Such an increase is required by the laws of physics if you want to start a greenhouse warming from scratch. I might add that it is a necessary but not a sufficient condition for that.

    Unfortunately, those big shots who write reports like AR5 either do not understand this or simply did not bother to read the scribblings of a denier, and still keep babbling about Arctic warming caused by human actions. Now that the thermohaline circulation has shown definite signs of life, there is one more hypothesis I want to tie to it. I would not do this if I had not doped out the explanation and cause of the ENSO oscillation in the Pacific. It is basically a back-and-forth sloshing of equatorial Pacific water from side to side along the equator. Judging by the data from the eighties and nineties, its resonant frequency is about five years. A couple of years back BEST from Berkeley came out with a newly improved global temperature data set. They proudly showed their temperature curve going back to 1750, a record at the time. They also claimed to have discovered a couple of volcanoes that nobody had heard of by identifying appropriate volcanic coolings. I looked at it and realized that these were not volcanic coolings but part of a global temperature oscillation starting in the 1700s and continuing to 1900, after which it petered out. What was unusual about it was that the oscillation was a perfect example of a damped harmonic oscillator with a period of 25 years. That is five times the length of an ENSO cycle. If this is water oscillating, its path length would need to be five times that of ENSO, meaning five times the width of the Pacific. That sounded too ridiculous to me, so I forgot about it. Except that I remembered. When the observations by Chen and Tung showed that the thermohaline circulation, at least the stretch from the Arctic to the Antarctic, was doing something, I started speculating again about the possibility of that damped oscillation doing something as well. It is something created by a giant disturbance sometime in the 1700s; it had a long path through the oceans, and it was losing energy every time it traversed that path. It should really be Müller’s job to follow up, but if he is not interested some other interested party ought to do it.

  72. Pingback: Weekly Climate and Energy News Roundup #143 | Watts Up With That?

  73. Over at RealClimate, on this topic they claim ”
    It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human induced contribution to warming is similar to the observed warming over this period.”

    This logically CANNOT be true, can it? If the ocean has a 60-year warming/cooling cycle, you MUST have net heat LEAVING the ocean during the warming cycle (likely in the form of increased evaporation and then rain). The ocean cannot ALWAYS be taking heat away from the atmosphere (just less during the warm cycle) but never giving any back, or else it would have boiled away millions of years ago. Therefore, the contention that the ocean does not (did not) release heat into the atmosphere during 1970-2000 cannot be true, and therefore the statement that all of the observed warming during that time period was due to human causes is incorrect.

    • This phony attribution is being defended like the phony hockey stick. Telling.
      ========

      • Interestingly, they really need this phony attribution to make the science justify the policy. The phony hockey stick wasn’t necessary to make the science, only the false narrative of stable climate before man.

        So this will be defended even more fiercely. Gad, it could be decades.
        ================

    • You don’t understand. There are always huge amounts of net heat leaving the ocean, in the form of very cold water at the poles (especially cold at the South Pole) diving to great depths. Heat is transferred downwards from the surface (and the warmer layer nearer to it) via turbulent vertical mixing. As cold water dives at the poles, the water everywhere else rises (a little). Thus the water warmed via turbulence is rising, carrying the heat transferred back towards the surface.

      All of the “net” transport everybody’s talking about is extremely tiny changes to one or more of these very large transports. Plenty of scope for natural variation.

    • The Hiatus
      alcheson | August 31, 2014 at 11:46 pm takes RealClimate seriously, which is not smart for a scientist. Those people are still hung up on the hiatus, as this sentence shows: “The Hiatus. Has the climate stopped warming?
      Carbon dioxide in atmosphere shows steady rise, so why are temps not higher?”
      The answer is very simple, and I have said it before. The Arrhenius greenhouse theory has been predicting warming ever since the hiatus started, 17 years ago, but as they themselves say, there has been none at all. If your theory predicts warming and nothing happens for 17 years, that theory belongs in the waste basket of history, right next to phlogiston, another failed theory.

      The only greenhouse theory that correctly explains the hiatus is the Miskolczi greenhouse theory, the one that has been blacklisted by the IPCC since 2007. It predicts exactly what we see: addition of carbon dioxide to the atmosphere does not warm the air. The difference between them is that Miskolczi theory (MGT) can handle several greenhouse gases simultaneously absorbing in the IR, while Arrhenius cannot. It only applies to carbon dioxide, which is not even the most important greenhouse gas in the air. Water vapor is: there is approximately two to three percent of it in the air, several hundred times more than carbon dioxide. According to MGT, carbon dioxide and water vapor, the most important GHGs in the atmosphere, will jointly establish an optimal IR absorption window in the atmosphere which they control. The optical thickness of this window in the IR is fixed at 1.87 according to Miskolczi. This corresponds to a transmittance of 15 percent, or an absorbance of 85 percent. If you now add carbon dioxide to the atmosphere it will start to absorb, just as Arrhenius says. But this will increase the optical thickness, and as soon as this happens water vapor will start to diminish, rain out, and the original optical thickness is restored. The added carbon dioxide will of course keep absorbing in the IR, but the simultaneous reduction of water vapor will keep the total absorption constant, and no greenhouse warming is possible. That is the exact reason why the constant addition of carbon dioxide to the atmosphere for the last 17 years has been unable to cause the warming predicted by the Arrhenius theory. From MGT it also follows that the runaway greenhouse effect that Hansen keeps talking about is impossible. Also impossible is the ordinary greenhouse effect that the IPCC claims is causing anthropogenic greenhouse warming, or AGW. This makes AGW a pseudo-scientific fantasy, invented by over-eager climate scientists to prove that greenhouse warming exists.

      The current hiatus is not the only time in history when warming stood still. The previous time it happened was in the eighties and nineties, before the arrival of the super El Nino of 1998. You do not know about it because in ground-based temperature data, meaning HadCRUT, GISTEMP, and NCDC, it is transmogrified into a phony rising temperature region. I previously thought this fakery raised global temperature by a tenth of a degree in 18 years, but better measurements show that they add 0.2 degrees Celsius over that interval. The actual temperature rise for the entire 20th century was 0.8 degrees Celsius, but the rate at which temperature rises in the eighties and nineties is 5 to 6 degrees Celsius per century. They even had a catchy name for it: the “late twentieth century warming.” This fake warming simply does not exist, as you can verify yourself by comparing it to satellite temperature measurements. Or better yet, look at Figure 15 in my book “What Warming?”, which shows this lack of warming clearly.

  74. VeryTallGuy asks: “3) So you do indeed appear to find 0% and 100% anthro equally likely. I still find that incredible! I’ll try and explain why. Let’s look at GHGs only for simplicity. IPCC (AR5 box 12.2) gives a range of 1-2.5 for TCR with a midpoint of 1.8, and even Nic Lewis in his GWPF paper gives a range of 1-2 with a most likely value of 1.35. With CO2 at 307 ppm in 1950 and 390 ppm in 2010 that gives an expected response of
    Lower bound 0.35 degC anthro (54%)
    Lewis midpoint 0.47 degC anthro
    IPCC midpoint 0.62 degC anthro
    Upper bound 0.86 degC anthro (133%)
    Even taking Lewis’ numbers, this seems completely irreconcilable with your position. Are you now arguing TCR is constrained below 1?”

    I am reminded of the debates in physics over the last century. Einstein argued that the universe was in a steady state and initially refused to accept the data that showed the universe was changing, expanding. Many physicists were unwilling to believe in black holes. You look at a model, and when the data doesn’t conform to the model you declare, like those before, “the data is wrong.” Inevitably the data is never wrong and the science is what’s wrong. The IPCC comes up with a TCR above what the actual recorded gain supports; we now know half the gain was almost certainly the effect of cyclic ENSO behavior, which means the models are indeed wrong and TCR is somewhere around 1.

    There is a clearly cyclic effect that is a major part, if not the majority, of the gain from 1975-1998. No matter what your models say, no matter what your assumptions are, the fact is that between 1950 and 2014 or so the temperature change is significantly less than a TCR of 1.8 or 2 or 3 would imply. This means the models must be adjusted, not the data.
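    For reference, the bounds quoted from VeryTallGuy above follow from the usual TCR scaling, delta-T = TCR x ln(C2/C1)/ln(2), with the 307 ppm and 390 ppm concentrations given there; a minimal check:

    ```python
    import math

    def expected_warming(tcr, c1_ppm=307.0, c2_ppm=390.0):
        """GHG-only warming implied by a given TCR (deg C per CO2 doubling)."""
        return tcr * math.log(c2_ppm / c1_ppm) / math.log(2)

    for label, tcr in [("lower bound", 1.0), ("Lewis midpoint", 1.35),
                       ("IPCC midpoint", 1.8), ("upper bound", 2.5)]:
        print(f"{label:>14}: TCR {tcr:4.2f} -> {expected_warming(tcr):.2f} C")
    # 0.35, 0.47, 0.62, 0.86 C, matching the figures quoted above
    ```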