God and the arrogant species

by Judith Curry

There is a well-known story in the Bible about arrogance. People were speaking one language, engineering progressed, and they started to build a tower which they intended to make so high it would reach the heavens. In the story of the Bible, God descended and confused the language of the people, so that they could not understand each other; and then they were scattered all over the earth.

I think that the meaning of this can be felt in large conferences, where we are thousands of scientists in hundreds of sessions, each one of us working in his own isolated domain, with hardly any knowledge of nearby domains, let alone of the big picture. So we think that what the story is trying to tell us is that good communication leads to progress, progress is followed by arrogance, and arrogance is followed by loss of communication, which leads to stagnation, which is, we think, where science is now.

This gem comes from a presentation by Christofides and Koutsoyiannis [link] (h/t David Hagen):

God and the arrogant species: contrasting nature’s intrinsic uncertainty with our climate-simulating supercomputers

Antonis Christofides* and Demetris Koutsoyiannis

A presentation given at the Air & Waste Management Association’s 104th Annual Conference & Exhibition, Session T09-02, Orlando, Florida, 21 June 2011

Read the whole thing, it is short, clever, and profound.  Some excerpts:

In the middle ages, people were told they were all sinners, and thus they would burn in hell. However, they could buy indulgence: they could pay an amount of money and get a piece of paper which certified that their sins were forgiven. Today, we are told we are all sinners (because we exhale carbon dioxide, give birth to children who do the same, and we also drive cars), and thus we will burn in hell as the Earth warms up. However, we can buy indulgence: for example, if you fly to London for the weekend, you can buy indulgence for that particular sin. “Offset the carbon emissions” is another way of saying “pay this amount and we’ll pretend that you did not make this trip”. Other forms of indulgence are compact fluorescent lamps and hybrid cars, which also do not make any difference. To us, climate change is largely a religious issue, which is why we chose to involve God in this presentation.

Although the climate has always been in perpetual change, many scientists who support the anthropogenic global warming hypothesis claim that this time it’s different, because their climate models show that the increase in carbon dioxide fits the current climate change better than any alternative explanation. This argument is circular, since the models reproduce the hypotheses of their programmers. What is most important, however, is that this way of reasoning is rooted in the fallacy that climate can, in principle, be described in deterministic terms; that if we could analyze the system with sufficient granularity and make sufficient measurements then we would be able to produce sufficiently good predictions; and that there must necessarily exist an identifiable causal agent behind every trend or shift. We explain that climate, like many natural systems, exhibits “Hurst-Kolmogorov behaviour”, which means it is intrinsically uncertain, with real limits to the potential for attribution and prediction.

If you calibrate with one data set and test on another, it’s OK; but climate models are not tested in this manner. If you only calibrate and do not test, then “calibration” is a misnomer: it’s actually data fitting. So modelers adjust the parameters so that models behave as they have hypothesized they should behave; then they use this behaviour as evidence that their hypothesis is correct.
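[JC aside, not part of the excerpt: the calibrate-then-test distinction the authors draw can be sketched with a toy example. The data and "model" below are hypothetical; the point is only that a model tuned on one period can fit it closely while failing badly on a held-out period, which is what an honest test reveals.]

```python
import math

# Hypothetical "observations": a slow oscillation standing in for natural
# variability (illustration only; no data from the presentation).
obs = [math.sin(2 * math.pi * t / 120) for t in range(120)]
calib, valid = obs[:60], obs[60:]   # calibrate on the first half, test on the second

def ols_trend(y):
    """Closed-form least-squares fit of a line a + b*t to y."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    return y_mean - b * t_mean, b

def rmse(y, a, b, t0=0):
    """Root-mean-square error of the fitted line against y."""
    return math.sqrt(sum((v - (a + b * (t0 + t))) ** 2
                         for t, v in enumerate(y)) / len(y))

a, b = ols_trend(calib)               # "calibration" tunes the parameters
err_calib = rmse(calib, a, b)         # in-sample error: looks acceptable
err_valid = rmse(valid, a, b, t0=60)  # out-of-sample error: several times worse
```

Without the held-out half, the small in-sample error would be mistaken for validation; only the out-of-sample error shows the fitted "trend" for what it is.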

In the real world, God makes no warranties on what will happen in the long term. Yes, the sea will most likely continue to rise in the next century, but beyond that, it is really uncertain what [climate] will do. 

So when we design a structure, such as a dam, and we try to predict the design flood, then it’s not a good idea to use the notion of the “maximum probable precipitation”, because there is no such thing, and because it can be (and has been) exceeded; it’s also not a good idea to consider a “signal” (e.g. a constant average value) plus “noise” (e.g. variability that follows a distribution), because God does not err, and therefore he does not distinguish between signal and noise. It is better to use Hurst-Kolmogorov dynamics, with which we do not predict the future in a deterministic sense; instead, we predict the possible range of outcomes given an uncertainty level (or vice versa), without distinguishing between “signal” and “noise”. The application of this method results in higher uncertainty estimates than with other methods, which underestimate uncertainty.
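[JC aside, not part of the excerpt: one quantifiable consequence of Hurst-Kolmogorov behaviour is a standard scaling result, which I sketch here as my own back-of-the-envelope illustration. The standard error of an n-value mean falls like n^(H−1) instead of the classical n^(−1/2), so with a Hurst coefficient near 0.9, typical of long climatic records, a 30-year average is roughly four times more uncertain than classical statistics suggests.]

```python
def se_of_mean(sigma, n, H=0.5):
    """Standard error of an n-value mean under Hurst-Kolmogorov scaling:
    sigma * n**(H - 1).  H = 0.5 recovers the classical independent
    case sigma / sqrt(n); H near 1 means strong long-term persistence."""
    return sigma * n ** (H - 1)

sigma, n = 1.0, 30                        # unit annual variability, 30-year mean
classical = se_of_mean(sigma, n, H=0.5)   # ~0.18
hk        = se_of_mean(sigma, n, H=0.9)   # ~0.71
inflation = hk / classical                # HK uncertainty ~3.9x larger
```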

JC comments:  Well which analogy do you like better, the tower of Babel or the indulgences? :)

The point I would like to pick up on is this one:

What is most important, however, is that this way of reasoning is rooted in the fallacy that climate can, in principle, be described in deterministic terms; that if we could analyze the system with sufficient granularity and make sufficient measurements then we would be able to produce sufficiently good predictions; and that there must necessarily exist an identifiable causal agent behind every trend or shift. 

In the real world, God makes no warranties on what will happen in the long term. 

Understanding the limits of predictability is the key challenge. The arrogant species is fooling itself if we think we can ‘project’ the state of the climate in 50 or 100 years, even if we somehow knew what the anthropogenic forcing would be. Yes, it seems that, all other things being equal, the climate would be warmer with more CO2, but there is no reason at all to expect all other factors to remain the same. Some frame the problem as “not if, but when” we will realize dangerous climate change; we know where this is headed, but are not sure exactly when this warming will be realized in the surface climate.

Personally, I am in awe of the complexity of the climate system and don’t want to anger the gods with the arrogance of claiming to understand climate change.  I understand a few things, and collectively we understand more things, but I suspect the area of ignorance (white area of the Italian flag) remains uncomfortably large.

346 responses to “God and the arrogant species”

  1. My stochastic projection is that rationalists see AGW climate models for what they are: dead animals — climate models have zero predictive value:

    (Demetris Koutsoyiannis et al., On the credibility of climate predictions)

    • if people can separate the phony GLOBAL warmings / coolings from climatic changes – people can understand a lot about the climate; not before!!! Existing models are created to confuse / to create fanatics

      Wetter climate is milder climate; dryer = more extreme. Humans can do a lot about it. But not before the fanaticism in both camps is exposed. Talking about climatic changes and inserting the phony GLOBAL warming in the ”model” is designed for brain degradation. The result is easy to see in the ”Climate Changer’s” mentality / attitude …

      GLOBAL warming is impossible / GLOBAL Ice Age is impossible; the laws of physics prove that; but not to the fanatics / the ”closed parachute brains” from both camps. Pretending to know the GLOBAL temperature, and on top of that to compare one year with another, is the mother of all lies… look at their ”GLOBAL temperature charts”… western universities are producing new deviate cadets every season.

  2. ThePowerofX

    Personally, I understand that people who don’t summarise the evidence correctly (ab)use the ‘arrogant species’ and ‘Tower of Babel’ thing to support all kinds of rubbish.

    Oliver will like this one.

    • They are not presenting a summary of evidence; they are presenting a theory, and a credible one at that. Your arrogance is helpful in this context.

      • I was traveling today and just returned home. I liked the first part of the video . . . until I realized the video falsely concludes that Apollo 11 did not reach the Moon.

        Apollo 11 samples came from the Moon.

        The first analysis of lunar soils and breccias (compacted soils) returned by Apollo 11 showed the enrichment of lightweight xenon isotopes from mass fractionation ~3-4% per mass in the solar-wind [Science 167, 1-336 (entire issue, 30 January 1970)].

        http://www.omatumr.com/Data/1983Data.htm

        Solar mass fractionation was confirmed by later analysis of SW-implanted elements in meteorites, in other lunar samples, and in the probe that entered Jupiter in 1995.

        http://www.lpi.usra.edu/meetings/metsoc98/pdf/5011.pdf

        Solar mass fractionation was also confirmed by the enrichment of lightweight s-products in the solar photosphere:

        http://www.omatumr.com/abstracts2005/LunarAbstract.pdf

        Since NASA later tried to avoid evidence of solar mass fractionation, it is not credible to conclude that NASA induced mass fractionation in the elements correlated with the surface area of grains (hence solar-wind implanted) in the Apollo 11 samples.

        The above video may be an attempt to discredit a CSPAN recording of NASA belatedly releasing data from the Galileo Mission to Jupiter:

      • ”Climatology” is the newest profession (20y old) – they incorporated the knowledge / tactics of the ”oldest profession”; end result is: lots and lots of GLOBAL warmings / coolings FOR BIG CASH IN ADVANCE. On their GLOBAL temperature charts, temperature goes up and down as a yo-yo…? Truth: global temperature is overall always the same!

        Precursor of all evil started before Climatology was a profession: in the past, when a geologist couldn’t get a job with a mining company, he made up a story to get on the front page; meteorologists were embarrassed by them. Example: he finds a rhino skeleton in Spain – instant ”carbon-dating” = declaring that 70 000y ago the ”planet’s temperature” was warmer by 10C, to sustain a rhino on the Iberian peninsula…?! That goes in the education books and becomes ”official”: the planet was warmer by 10C; you don’t believe – you will not graduate!…

        If he told the truth, he wouldn’t have got even onto page 3 of the newspaper. Here is the truth: 1] at that time there was a land-bridge between north Africa and Spain (there were no Gibraltar straits); 2] before the mongrels invented artificial fire, the Sahara was the best savannah on the planet – plenty of grass and water for grazing animals; 3] there are rhinos in South Africa now, which has a similar climate to Spain. If the finder had said: ”here in Spain I found some old bones from a rhino – the poor rhino forgot to take his compass and go back to Africa for the winter – left his bones here”… But THE PLANET WAS 10C WARMER because of those few bones – that is front page news / lectures in the universities +++++ (that was a small example)

        All their proofs of phony GLOBAL warmings are similar; the real explanations are boring… Now we have Paleocene / Holocene crap sold as ”GLOBAL”; Medieval milder temperature in EUROPE sold as GLOBAL warming by the ”common-sense deficient Skeptic D/H”, just to give oxygen to the Warmist protagonist. Skeptics believe 101% in GLOBAL warmings; they are bigger Warmists than the people they call ”Warmist”, because those say there is a 90% possibility of GLOBAL warming (they leave 10% as a ”back-door exit”) because they have brains.

        Skeptics know that it is impossible to predict 2 weeks in advance – BUT THEY KNOW THAT IN 100y THERE will be GLOBAL warming by 0,5C….?! Ian Plimer’s by-product of D/H… Prof. Plimer has put the section of their brains for common sense into an induced coma. That is the reason my real proofs are avoided / rejected / silenced by their ”truth phobia”. My formulas will win, boys

  3. Willis Eschenbach

    Koutsoyiannis, in my experience, is always worth listening to.

    Thanks for this, Judith, good find.

    w.

  4. “… at the 30-year climatic time scale, the average correlation coefficient rises slightly to 0.237 for temperature and remains slightly negative (–0.046) for precipitation; however, the average efficiency values become tremendously negative, –81.6 for temperature and –49.5 for precipitation. This clearly shows that GCMs totally fail to represent the HK-type [Hurst-Kolmogorov behavior of] climate of the past 100–140 years, which is characterized by large-scale over-year fluctuations (i.e. successions of negative and positive “trends”) that are very different from the monotonic trend of climatic models. In addition, they fail to reproduce the long-term changes in temperature and precipitation (Fig. 8). Remarkably, during the observation period, the 30-year temperature at Vancouver and Albany decreased by about 1.5°C, while all models produce an increase of about 0.5°C … With regard to precipitation, the natural fluctuations are far beyond ranges of the modeled time series in the majority of cases …” (Ibid.)
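    [The “efficiency” in this quote is, presumably, the Nash-Sutcliffe coefficient common in hydrology. A toy computation with illustrative numbers of my own (not the paper’s data) shows why values like –49.5 or –81.6 mean the model is far worse than simply predicting the observed mean.]

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE(model) / SSE(mean benchmark).
    1 = perfect, 0 = no better than the observed mean, < 0 = worse."""
    mean = sum(obs) / len(obs)
    sse_model = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sse_bench = sum((o - mean) ** 2 for o in obs)
    return 1 - sse_model / sse_bench

# Hypothetical 30-year means: observations fluctuate around zero while
# the "model" produces a monotonic trend (illustration only).
obs = [0.3, -0.2, 0.1, -0.4, 0.2, -0.1]
sim = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
eff = nash_sutcliffe(obs, sim)   # strongly negative: worse than climatology
```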

  5. ‘Put the blame on Meme, boys, put the blame on Meme.’
    (and on models that reproduce the hypotheses of their programmers)
    (and on indulgences.)

    H/T Rita Hayworth

  6. “There is no scientific justification for some of the extremist economic and social penalties that a minority of zealots are trying to impose on the people of the world.” ~Koutsoyiannis

  7. Does everyone know who ‘they’ are? They are the same people who made Al Gore rich and hated G. Bush for standing in the way of Kyoto.

    “… they are unable to predict weather beyond a week or two, yet in conjunction with the IPCC they presume to tell us what to expect over the next few decades.” ~Koutsoyiannis

  8. Yes – great post.

    This seems to me to just about fall within the realms of ‘science’ but at the same time makes the point that it is less about ‘the science’ than many climate scientists would like to believe. Especially climate scientists that don’t like unpredictability and nasty, woolly vagueness.

    But then, like Pekka, I think it’s less about the science than do those people who don’t think it’s very much about the science.

  9. IPCC scientists have already admitted the simple truth about computer climate modeling:

    “In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers ‘what if’ projections of future climate that correspond to certain emissions scenarios. There are a number of assumptions that go into these emissions scenarios. … [T]he projections are based on model results that provide differences of the future climate relative to that today. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models.” (Kevin Trenberth)

    • And yet the modeling results are claimed to be strong evidence of a future threat, including by Trenberth. So there is less to this quote than meets the eye.

      • In a way, I think there’s actually more to this quote than meets the eye.

      • The Medium is the Message. The AGW True Believers’ use of climate models is powerful evidence that their fear of AGW is a mass mania.

      • David, if somebody in the model has ”was warmer by 0.632C”, you take it as a fact; it’s not their fault. Especially that ”two (2)” on the end of 0.632C. That ”precision” is not precision, but a way to show the protagonist that the Skeptics’ logic and common sense are lower than a maggot’s – not able to see the scam. It’s the same as when a bank robber rubs the cop’s nose in the stolen gold bullion, and the cop is too stupid to notice

        1] 40km thick layer of troposphere – they are monitoring in a few places at 2m altitude… what about the heat in the other 39km and 998m?! Doesn’t that heat count?! Does the other 99,999999999999999% of the earth’s troposphere belong to Jupiter or Neptune?! Between 3m and 3km altitude there is much more warmth (and it keeps changing every minute) than in the few cubic meters at 2m altitude!!!

        2] they are monitoring only the HOTTEST minute in the 24h – one minute v. 1439 minutes…? Does the temp during those 1439 minutes go up every day the same as in that single minute?! That the hottest minute is not at the same time every day is proof enough that one minute by itself is completely meaningless.

        3] having more monitoring places on one hemisphere than on the other is NOT SCIENCE. Just a few examples of why my proofs will win, and why the FAKE ”Skeptics” are guillotining the truth more than the Warmist = BY TAKING WARMIST DATA AS FACTUAL > the active Skeptics are twice as guilty! Legitimising the Warmist lies is ”the mother of all lunacy and crimes”

    • Wag, I’d like the source for that, please. A very useful quote.

      • here you go–>

        http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html

        “We will adapt to climate change. The question is whether it will be planned or not?” (Trenberth)

        We are seeing here what Western academia has become: Trenberth is reading out of Mao’s Little Red Book. It makes you wonder if there is an oversized picture of Mao over Trenberth’s desk.

        Mao’s cookie: He who is unwilling to voluntarily sacrifice free will today ‘for the good of society’ will face tanks in Tiananmen tomorrow.

        Bush stood up to the Leftists – just like the lone Tiananmen protester – by refusing to sign the Kyoto treaty, and that is why they and Al Gore hate him.

  10. If a system is too complex at a particular point in history to be fully understood and correctly modeled, that is NOT evidence of the existence or involvement of a supernatural being in the process.

    It is frustrating to not fully understand an issue, but the climate is certainly not unique in this regard. Science does not yet understand what composes the majority of the matter in our universe. We don’t understand gravity, but we make use of it. Historically, humans have tended to form incorrect premature conclusions where the complexity exceeded the current knowledge. The climate seems yet another example of people jumping to conclusions without ample evidence.

    • So, you’re saying man does not cause gravity, right? Maybe we are getting somewhere after all.

    • I saw a well known scientist say on TV that she believed they only understood about 4% of the matter that composes the universe. And in the coming decades as they learn more that number may even go lower.

  11. I, too, like what Koutsoyiannis has to say. The damned hubris of humans will ultimately bring on our downfall as a species, as it has already done with previous civilisations in our recorded history. One exception may be that civilisations based on sharing and caring, with a complete absence of pride, could well survive.

    • Remember, AGW alarmism is only a Western phenomenon and really more a symptom of the fall of Western civilization. Scientists outside the West and outside the mass delusion have likened the science of the West’s AGW schoolteachers to the science of ancient astrology.

  12. Judith: This quote: “this way of reasoning is rooted in the FALLACY that climate can, in principle, be described in deterministic terms; that if we could analyze the system with sufficient granularity and make sufficient measurements then we would be able to produce sufficiently good predictions; and that there must necessarily exist an identifiable causal agent behind every trend or shift…”

    To call this paragraph a FALLACY is nothing more than “The Fox and the Sour Grapes”: the Fox doesn’t know how to get the grapes and therefore they are sour. The proposed deterministic road of science is instead the King of all Roads, and if no OBVIOUS CAUSE has been found, then something important has been OVERLOOKED and is still MISSING. Science therefore has to go back to its solid roots, which are globally accepted, and check everything again from the beginning, avoiding all oversights.

    The ERROR consists of (1) NOT going back to the roots of science; (2) reckoning that computer simulation models are able to simulate heuristic results (which do not come from analytic thinking with a sharpened pencil); (3) the unwillingness of CAGW to question or go back to the position held before the CO2 hype was pulled off; (4) the arrogance of CAGW, which worsens the scientific and political misery, as noted in your “God and Babel tower” quote. Each one of us should be prepared to honestly take back his views if forecasts do not materialize.

    JS

  13. Demetris Koutsoyiannis has introduced some provocative concepts into the climate science arena that deserve careful consideration. Beyond the reluctance of some to consider unorthodox ideas, however, I believe his theorizing has enjoyed a poorer reception than he might have wished because he has overstated its implications. The presentation by Christofides and Koutsoyiannis is probably no exception, with its puzzling mixture of misconceptions (that the CO2 we exhale contributes to global warming; that GCM parameters are adjusted to yield the climate trends they simulate – a misconception not limited to these authors), as well as straw man arguments and/or non sequiturs.

    An example of the latter is encompassed in claiming “the fallacy that climate can, in principle, be described in deterministic terms; that if we could analyze the system with sufficient granularity and make sufficient measurements then we would be able to produce sufficiently good predictions; and that there must necessarily exist an identifiable causal agent behind every trend or shift. We explain that climate, like many natural systems, exhibits “Hurst-Kolmogorov behaviour”, which means it is intrinsically uncertain, with real limits to the potential for attribution and prediction.”

    The straw man is, I suppose, the implication that anyone has proposed that climate behavior can be described in exclusively deterministic terms. However, if we relax that constraint and instead take literally the words “sufficiently good predictions”, then the claim isn’t so much a straw man as a conclusion for which inadequate evidence is presented. “Sufficient” is not the same as “perfect”, and so the issue is not whether uncertainty exists but whether it is so great as to preclude, even theoretically, predictions of value for our purposes. The argument is already refuted for model-based weather predictions, and its validity or invalidity for longer term climate predictions is not something deducible from the Hurst-Kolmogorov concepts the authors refer to and describe in detail elsewhere.

    The authors take pains to emphasize the stochastic behavior of many climate phenomena. When this is incorporated into the principle of long term persistence with large Hurst exponents, the conclusion is drawn (as with other forms of autocorrelation) that apparent trends resulting from a “causal agent” are less easy to distinguish from stochastic variation than if each data point were independent of its predecessors, or more accurately, far more data are required for the distinction.
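    [The “far more data are required” point can be made concrete with the standard AR(1) effective-sample-size approximation – offered here as my own illustration, not the authors’ calculation; Hurst-Kolmogorov persistence degrades matters even further than AR(1) at long scales.]

```python
def effective_sample_size(n, phi):
    """Approximate number of independent observations carried by an
    AR(1) series of length n with lag-1 autocorrelation phi:
    n * (1 - phi) / (1 + phi)."""
    return n * (1 - phi) / (1 + phi)

# A century of annual data with strong persistence carries the
# information of only a handful of independent points, so apparent
# "trends" are much easier to mistake for stochastic variation.
n_eff_weak   = effective_sample_size(100, 0.2)   # ~66.7 independent points
n_eff_strong = effective_sample_size(100, 0.9)   # ~5.3 independent points
```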

    To my mind, the stochastic aspect of this concept is not a problem. In the recent ergodicity thread, several of us (Tomas Milanovic, Pekka Pirila, WebHubTelescope, MattStat, Vaughan Pratt, and I) discussed the role of stochasticity vs determinism, with Tomas emphasizing the deterministic element in chaotic behavior, and the rest of us putting emphasis on the stochastic elements that introduce uncertainty into climate behavior in general. I won’t put words in the mouths of the others, but my own perspective was that stochasticity is an inevitable attribute of many physical systems because perfect knowledge of all elements of the system is impossible.

    What is more problematic in my opinion is the implication that “persistence” and “trends” are necessarily antithetical interpretations of the same data. In the autocorrelation thread, I suggested that persistence is a description, not a mechanism, and so when it is observed, we can legitimately ask what mechanism may be operating. The answer may be elusive in some cases, but in others may well be attributable to an identifiable climate forcing, or even an identifiable internal oscillation, with the length of persistence often a clue as to which mechanism should be pursued. In this sense, persistence shouldn’t be seen as a mysterious property independent of causality, but as a reflection of some cause that in some but not all cases may be identifiable. If this is what Koutsoyiannis is saying in his own words, then I have no difficulty with it. If he is suggesting that persistence makes a high degree of uncertainty inevitable, I don’t think he has justified that conclusion. I would agree, however, that persistent behavior that reflects some short term mechanism can confound efforts to identify a separate mechanism underlying a longer term trend. As argued by Koutsoyiannis, the problem may require more data points than ordinarily assumed in order to be resolved.

    If Koutsoyiannis is monitoring this thread, he may conclude that I have completely misinterpreted what he is saying. That’s certainly possible, but I would be interested in interpretations by others, with particular reference to the notion that persistence requires a physical mechanism, even if it isn’t always one we can easily elucidate, and that stochasticity doesn’t necessarily preclude good predictions even if it precludes perfect ones.

    • I suggested that persistence is a description, not a mechanism, and so when it is observed, we can legitimately ask what mechanism may be operating

      Indeed, persistence is merely a description, and it is an inappropriate description – one of the reasons Dr Koutsoyiannis has coined the term “Hurst-Kolmogorov dynamics”. Memory is also not really an appropriate choice – indeed, the mechanism itself is more akin to amnesia than memory.

      The mechanism has been described in the Physica A reference below. It is a consequence of statistical thermodynamics: the maximisation of entropy production, coupled with a number of simple constraints, will result in a complex system exhibiting certain behaviours – either IID Gaussian noise, autoregressive noise, or Hurst-Kolmogorov dynamics, depending on the constraints.

      The constraints of many climatic series (e.g. global temperature) are such that Hurst-Kolmogorov dynamics are expected to govern the internal variability. This is strongly supported by evidence ranging from instrumental records and Greenland and Antarctic ice core records to deep geological-time proxies.

      Ref. link, Koutsoyiannis, D., Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011

      • I think the linked site reinforces my perception that Koutsoyiannis has overextended the implications of his concepts and data beyond realistic applications. The dependence on entropy production maximization illustrates this conclusion. More on that concept and its significant limitations for real world application can be found in this blog at Maximum Entropy Production.

        I continue to believe that Koutsoyiannis has something valuable to offer, but it’s hard to take him seriously when he uses his Hurst-Kolmogorov dynamics as the foundation for all or even most recent climate change, when it seems clear that much more is going on than can be explained from his theorizing, including a dominant role for climate forcing in perpetuating long term trends.

      • it’s hard to take him seriously when he uses his Hurst-Kolmogorov dynamics as the foundation for all or even most recent climate change

        I’m afraid this quote clearly shows you don’t understand what Dr Koutsoyiannis is trying to convey, and more worryingly, suggests that you think natural variability can be ignored. Before criticising an approach, it is important to understand that approach first.

        Now you want to argue that we already have a non-stationary component in the global temperatures. But we also have natural variability, and before you can declare a non-stationary component (e.g. a deterministic trend, or change in the population mean), you need to rule out the possibility that the change in mean is caused by natural variability.

        What your comment amounts to is this: you are arguing that we don’t need to understand the structure of natural variability in the climate system because we already know there is a trend. But to detect such a trend, you need to reject the null hypothesis of natural variability, and in order to do this, you need to know the structure of natural variability.

        In this way, your logic and reasoning is false. You are putting the scientific cart before the scientific horse. You cannot meaningfully reject natural variability without understanding its structure; unless, of course, you are denying the existence of natural variability!

        On the topic of maximising entropy production, it is merely a tool from which other theorems can be derived. When applied correctly, it is a very powerful tool. I have not yet found an error in the application; if you are aware of one, please let us know (and be specific). If there is no error in the application, your commentary about it is irrelevant. The predictive capability stems from the complete analysis, not one aspect in isolation.

      • Spence – Contrary to your impression, I think we have an excellent handle on natural variability, which includes both natural forcings (solar and volcanic), and internal climate modes. Our ability to estimate trends from anthropogenic forcing (GHG warming and aerosol cooling) comes from direct evidence on these phenomena and also good evidence for the magnitude of internal unforced variability. In the latter case, we know from ocean heat data that the unforced variability could have contributed only a minor fraction at most of post-1950 warming. Prior to 1950, our evidence is less clear, but suggests that it was not a dominant factor in that interval either. Our estimates have error bars, to be sure, but they are not wildly inaccurate.

        This has been discussed extensively in all its ramifications in too many threads on this blog for me to repeat the details here, but for an example of the principles involved, you might want to visit Heat Uptake and Internal Variability.

        This is an example of why I conclude that Koutsoyiannis has overextended the conclusions he draws from his theorizing, based on his unfamiliarity with a wealth of relevant climate data that shows the limits to those conclusions and the dominance of climate forcing in long term trends.

      • Fred

        You wrote: “I think we have an excellent handle on natural variability”

        What was the cause of the recent trend in temperature? What was the specific cause of the recent lack of a warming trend? If you are not overstating your knowledge, you should know what specifically happened to make the recent trend less than was anticipated. How much will it warm over the next five years?

        Imo, climate science is just learning what impacts natural variability, and you again greatly overstate your knowledge

      • I wasn’t aware that Koutsoyiannis even got attention anymore; his original appearance on the stage came from a highly flawed application of climate modeling. As far as I remember, they only “proved” the very obvious fact that very local weather (single realisations) isn’t coherent across observations and models. It is unclear how the article cited by Spence even links to climate at all. It sounds very fancy, but doesn’t add anything to our understanding of climate.

      • Rob – In addressing the questions you raise, timescale is critical, because natural variability can dominate on some short timescales while forced trends dominate over longer intervals such as 1950 to the present. However, I don’t want to get into an extended discussion of this (which has been discussed extensively in the past) if your questions are basically an invitation to argue rather than an indication of curiosity on your part. If there is some specific detail you are truly curious about, I’ll offer my perspective, but I’m not interested in arguing whether mainstream climate science has got its conclusions right or wrong.

      • Chief Hydrologist

        The improbable narrative in the superficially dispassionate language of science continues unabated. Drop in some terms like maximum entropy or internal variability – and it all works.

      • I think we have an excellent handle on natural variability, which includes both natural forcings (solar and volcanic), and internal climate modes

        You *think* we have an excellent handle on natural variability?

        Do you even understand the natural variability Dr Koutsoyiannis is trying to understand? The difference between an autoregressive model (as used by climate scientists) and HK dynamics (as proposed) only becomes apparent at very long time scales.
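        To illustrate the distinction (a rough numerical sketch; the parameter values are illustrative assumptions, not taken from any of the papers cited): an AR(1) autocorrelation decays exponentially in lag, while the HK / fractional-Gaussian-noise autocorrelation decays as a power law, so the two look similar at short lags and diverge only at long ones.

```python
# Sketch: AR(1) autocorrelation (exponential decay) vs. HK/fGn
# autocorrelation (power-law decay). Parameters are illustrative only.

def ar1_acf(lag, phi=0.7):
    """AR(1) autocorrelation: rho(k) = phi**k."""
    return phi ** lag

def hk_acf(lag, hurst=0.8):
    """Exact fractional Gaussian noise autocorrelation for Hurst exponent H:
    rho(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H), ~ k^(2H-2) for large k."""
    h2 = 2 * hurst
    return 0.5 * (abs(lag + 1) ** h2 - 2 * abs(lag) ** h2 + abs(lag - 1) ** h2)

# At lag 1 the two are comparable; by lag 1000 the AR(1) correlation is
# essentially zero while the HK correlation is still non-negligible.
for lag in (1, 10, 100, 1000):
    print(f"lag {lag:>4}: AR(1) {ar1_acf(lag):.2e}   HK {hk_acf(lag):.2e}")
```

        Fitted to a short record, both models can look adequate; the disagreement is precisely about what happens at lags far beyond the record length.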

        Furthermore, our current models fail miserably at reproducing natural variability. This was recently given by a climate scientist as the explanation for the discrepancy in model output in hindcast; not only for deterministic behaviour, but for statistical measures of climate as well. And now you claim that natural variability is captured well by models.

        Well, the models have been tested, and they fail miserably at timescales longer than one year. So no, models do not accurately capture natural variability, and yes, climate scientists acknowledge this. They have no choice but to accept it in the face of the evidence presented by Dr Koutsoyiannis.

        As far as I remember, they only “proved” the very obvious fact that very local weather (single realisations) isn’t coherent across observations and models.

        No, he demonstrated that models fail to predict either the mean or several statistical measures on continental scales; climate scientists “explained” this by admitting that models do not capture natural variability (neither the mean nor several other statistical measures estimated by the team at Itia), in stark contrast to the exact opposite claim made by Fred Moolten above. Furthermore, the papers on modelling credibility are just two of a large number of papers; without understanding these you will probably not understand the points he is making.

        Drop in some terms like maximum entropy or internal variability – and it all works.

        Not sure what your point is here CH. Don’t rely on my usage – refer to the usage in the papers. If there is something wrong with it, let me know.

      • Chief
        I agree with your comment.

        Fred – You claimed to have an excellent handle on natural variability. I doubted your claim, as it does not seem to me to be scientifically valid.

        If I understand your modified claim, it is that you do not understand how much natural variability will impact short-term conditions, but that you believe you understand how it will impact long-term conditions. Is that correct? Could you define what time periods you mean by short-term vs. long-term trends?

      • Spence – I’m not trying to be mean-spirited, but it does seem clear to me that you are unaware of the physical evidence we have on the magnitude and timescales of natural variability. To appreciate that, you need to comprehend some basic geophysics principles and then apply observational data to those, as illustrated in the link I cited. You refer to possible long term variability, but the importance of empirical data is that they exclude a major role for that over the decades since 1950 and probably earlier as well. Hurst-Kolmogorov dynamics can’t overcome that evidence.

        I don’t doubt that Koutsoyiannis has some informative insights to offer. Unfortunately, he has been co-opted by individuals unwilling to acknowledge the now well-established dominant role of anthropogenic greenhouse gas-based warming over those recent decades, and Koutsoyiannis himself seems to have exhibited that same unwillingness. That is one reason why he isn’t going to be taken seriously by people who understand climate dynamics; it’s not due to their failure to understand him, but to his failure to understand the science.

      • Fred, once more and just to be clear:

        Natural variability is the null hypothesis.

        In order to show evidence for an effect, you must first reject the null hypothesis.

        You cannot do this without knowing the structure of natural variability.

        Your claim goes against one of the fundamental tenets of scientific research.

        I am sorry if you do not understand this. I cannot explain it any more clearly. Please refer to Dr Koutsoyiannis’ post below.

      • Spence – I have a sense I’ll be repeating what I wrote elsewhere, but there isn’t any single “null hypothesis”. We do, however, as I’ve written, have positive evidence rejecting a dominant role for natural variability averaged over the past six decades, and we also have separate evidence for the dominant role of forced trends. Together, they leave some margin for error, but not for a radical rethinking of how we understand recent climate behavior.

      • “I wasn’t aware that Koutsoyiannis even got attention anymore; his original appearance on the stage came from a highly flawed application of climate modeling. “

        Much of my understanding of his work comes from his presentations, and I have only recently started to read some of his other papers.
        I commented on another thread concerning one of his 2011 papers on rainfall modeling:
        “Can a simple stochastic model generate rich patterns of rainfall events?”
        It is only critical insofar as I can do a much better job than he can on the modeling. He tried to apply H-K dynamics because he thought he detected a power law, but it was only a phantom, and I used plain old statistical physics to get a perfect single-parameter fit to the empirical rainfall data.

        That said, I am all for Koutsoyiannis using maximum entropy to figure out a simple stochastic model, because that has been my objective as well. Yet, I think he is making some incorrect assumptions on how to proceed.

        Other than that, the Clausius-Clapeyron correction is neat and I wonder if that has some applicability to calculating the atmospheric lapse rate, which may need a variational approach to handle the mix of adiabatic and isothermal processes involved.

      • Spence – I have a sense I’ll be repeating what I wrote elsewhere, but there isn’t any single “null hypothesis”.

        This is really essential and fundamental. There are as many null hypotheses as there are questions. No one can say which is the right null hypothesis for all the others. As the climate change issue is a policy issue, there is a different null hypothesis for each policy proposal. To justify any active policy proposal, or to justify postponing any action, a different null hypothesis should be applied.

        A proper discussion of a null hypothesis is possible only in connection with a precisely formulated question, not as a single generic null hypothesis. Climate science by itself is not built on such precise questions, because it is an exploratory science that relies on well-known theories which are difficult to apply. Such a science contains few important questions that can be answered yes or no. On the other hand, specific policy proposals may well be such that a null hypothesis can be formulated to help make a decision on them.

        It’s a standard practice of skeptics to claim that there is just one null hypothesis. They also refer regularly to the requirements of the scientific method. They do that invariably erroneously, showing that they don’t understand the basics of the meaning and value of scientific knowledge. The literature contains so many quotes related to these issues that there’s no difficulty in finding “supporting quotes” for any claim. Many of the quotes are wise and important when given in the right context and understood as they were supposed to be understood, but totally worthless when used as they are used here almost daily.

      • WebHubTelescope, I am keen to hear your advice on some difficulties I have, since you wrote in your other comment that you “understand why Koutsoyiannis has a hard time getting published with his research work”.

        Thanks for reading this paper and providing critical comments. I guess you mean this: http://theoilconundrum.blogspot.com/2012/02/rainfall-variability-solved.html

        Then I guess you are on the right track, but you have taken only one step on this track. If you take more steps you may agree with me more, and you may appreciate Hurst-Kolmogorov dynamics more. Or perhaps you took only half a step, because your analysis is about the marginal distribution, while the Hurst-Kolmogorov behaviour is about the joint distribution. You may say that these two are often confused in the literature, but I always insist we should distinguish the two. The power-law behaviour in the dependence structure is totally irrelevant to the existence or non-existence of power laws in the marginal distribution.

        Please take a look (in particular at slide 8) at a predecessor presentation on this paper given in 2007 (http://itia.ntua.gr/789/ ); this material was not included in the final paper, but I hope it may help you reconsider statements such as “he thought he detected a power-law, but it was only a phantom” etc.

      • In reply to both Fred and Pekka

        I did not argue for a single null hypothesis.

        But when sampling data from a distribution, it is a fundamental error to ignore the uncertainty introduced by estimating from a sample rather than from the population.

        Likewise, when sampling autocorrelated data, it is a fundamental error to ignore the uncertainty introduced by the autocorrelation in the data.

        These elements MUST be a part of ANY null hypothesis test conducted. The suggestion that I am implying a single generic null hypothesis is an incorrect interpretation of my comments.
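        A minimal numerical illustration of that point (my own sketch, with arbitrary parameter choices): for AR(1) data, the variance of the sample mean is many times larger than the naive i.i.d. formula suggests, so a significance test that ignores the autocorrelation badly understates its own uncertainty.

```python
import random
import statistics

# Sketch: how autocorrelation inflates the uncertainty of a sample mean.
# All parameter values (phi, n, trials) are arbitrary illustrations.
random.seed(42)

def ar1_series(n, phi=0.9):
    """AR(1) process x_t = phi*x_{t-1} + e_t with standard normal innovations."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

n, trials, phi = 100, 2000, 0.9
means = [statistics.fmean(ar1_series(n, phi)) for _ in range(trials)]
empirical = statistics.pvariance(means)

# Naive i.i.d. formula Var(mean) = Var(x)/n, with stationary Var(x) = 1/(1-phi^2)
naive = (1.0 / (1.0 - phi ** 2)) / n

print(f"empirical Var(mean) = {empirical:.3f}")
print(f"naive i.i.d. value  = {naive:.3f}")
```

        The empirical variance of the mean comes out roughly an order of magnitude above the naive value, which is the uncertainty a test assuming independent samples would silently discard.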

      • Spence,

        My comment may be misdirected by the inclusion of your name in the quote. Apologies for that.

        In most general terms I come back to the requirement for statistical testing: The question must be formulated precisely. Testing for statistical significance requires a model of stochasticity. Various models of autocorrelation are common choices and may be the best available, when no theory tells more, but even including autocorrelations is too little. A real statistical test of significance requires a more comprehensive hypothesis that specifies better the nature of the stochasticity in the data.

        There’s also a lot of experience, in very many practical fields of application, that autocorrelations have long tails, and that those long tails can commonly be successfully parameterized by power laws over a considerable range, in accordance with Hurst exponents. It’s also known that the range of applicability of the power law is seldom, if ever, without limits in either direction. This leaves open the question of how much of the success of power laws is fundamental, and how much of it is due largely to the flexibility of the power law, when absolute accuracy is not required and when the range is not more than a couple of decades, and often less.

        The MaxEnt approach comes up in the work of Koutsoyiannis and has been discussed here extensively by WHT. My views of that are similar to what I said above about power laws. There are certainly good reasons for the usefulness of that approach, but again it lacks a rigorous theoretical foundation. It works for a wide set of problems. That cannot be pure coincidence. Rather it’s certainly due to some kind of statistical law, in the same spirit as the central limit theorem guarantees a normal distribution when certain conditions are met. Vaguely speaking, it appears that breaking the rules required for a normal distribution very often leads to the applicability of the MaxEnt principle, and very often results in power-law tails over some range.

        These kinds of observations are useful, but they should not be overextended, as it’s in general not possible to predict how well the approach will work, and as success in the approach tells rather little about the basic dynamics. The wide-ranging practical success of the methodology and the impossibility of learning much about the basic dynamics from that success are two sides of the same coin. One could not be true without the other.

        Coming back to the beginning of this message. The assumptions related to applying maximum entropy or Hurst exponents are also too imprecise to satisfy the requirements of valid statistical testing. They allow for too many choices and lack a priori justification that’s required for a real test. That leaves open the real significance of the results obtained.
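        As a concrete illustration of how the choice of constraint determines the maximum-entropy result (standard textbook material, added here for reference, not a claim about any particular paper):

```latex
\max_{p}\; H[p] = -\int_{0}^{\infty} p(x)\,\ln p(x)\,dx
\qquad \text{s.t. } \int_{0}^{\infty} p(x)\,dx = 1
\text{ and one moment constraint:}
\begin{aligned}
\mathbb{E}[x] = \mu \;&\Rightarrow\; p(x) = \tfrac{1}{\mu}\,e^{-x/\mu}
  && \text{(exponential tail)},\\
\mathbb{E}[\ln x] = c,\ x \ge x_0 \;&\Rightarrow\; p(x) \propto x^{-\lambda}
  && \text{(power-law tail)}.
\end{aligned}
```

        Different constraints give different tails; which constraint is physically justified is exactly the kind of additional assumption at issue here.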

      • Pekka, I’m afraid your discussion makes no sense to me. Perhaps I am not clear on what you are trying to say.

        I saw WHT’s contributions on the last thread. His comments were that HK dynamics were “hokey”, which is not a scientific term I am familiar with, and that HK dynamics were “not a real model”, which is absurd (no model is real, and HK dynamics have a clear physical foundation). I outlined the similarity of his criticisms of HK dynamics with equally ill informed criticisms of quantum mechanics, and gave the Clausius Clapeyron example as per Demetris below.

        The rest of your post makes little sense to me. For example:

        There are certainly good reasons for the usefulness of that approach, but again it lacks rigorous theoretical foundation.

        The rigorous foundation has been provided, and I am unaware of a rigorous foundation for any alternative. This doesn’t mean we should stop looking but it seems to me this criticism is far more applicable to mainstream climate science than it is to Koutsoyiannis’ work.

        the success in the approach tells rather little about the basic dynamics.

        This makes even less sense. Stochastic dynamics present a rich description of the structure of variability and many falsifiable predictions. For example, the mainstream approach (deterministic forcing) tells us we should expect the power spectral density of orbital forcing to yield the greatest variance at 40 kyr, next at 20 kyr, with 100 kyr the smallest. HK dynamics predicts 100 kyr greatest, 40 kyr next, 20 kyr next. The results are consistent with HK dynamics, not deterministic forcing. And when we go to the next timescale up, the orbital patterns disappear – requiring a new kludge in the explanation at each timescale.

        This is important. Not only does stochastic modelling make testable predictions, HK dynamics is the ONLY model which consistently matches observations across timescales without kludges to the theory. So we have
        1. A plausible derivation from first principles
        2. Falsifiable predictions
        3. A simple model which matches all observations without kludges

        I accept that even accounting for this, the model may still be wrong. But it is, without a doubt, the most compelling and convincing model I have yet seen for the climate.

        The assumptions related to applying maximum entropy or Hurst exponents are also too imprecise to satisfy the requirements of valid statistical testing. They allow for too many choices and lack a priori justification that’s required for a real test. That leaves open the real significance of the results obtained.

        This, I think, is the worst part of your commentary and I sincerely hope I have misunderstood.

        There is nothing imprecise about the mathematical formulation of the Hurst phenomenon. It is quite exact.

        Estimating the statistics of time series is difficult. But this is never a reason to dismiss an idea. “Nature has dealt us X, but X is difficult to work with, so we’ll ignore X and assume Y”. Nonsense.

        As for too many choices for analysis: firstly, this is not a correct metric upon which to decide what form of autocorrelation to assume; and secondly, autoregressive functions have been studied to a far greater depth, with far more choices, so even if it were a basis for a decision, the choice would not be the one you propose…

      • The Clausius-Clapeyron paper doesn’t appear particularly interesting to me. Much of the paper discusses the well known fact stated in every textbook I have that the explicit solution is valid only approximately and within a very limited range. (The paper tells also that this is well known, but claims that this is not taken into account in practice. There may be some examples of that, but that’s not true in general.)

        There’s more in the paper about the alternative approximation and about maximum entropy in the derivation of an alternative approximation, but all that is rather vague.

        Evidently the referees have found the paper worth publishing so perhaps I’m a bit harsh, but to me the paper doesn’t seem particularly significant.

      • Pekka,

        The point of the paper is not the practical aspects (the Magnus equation is widely known and applied) but the importance of the principle: how ignoring uncertainty, when there is no physical basis for doing so, leads to the wrong answer.

      • Spence, I am grateful for your neat comments–including the last one.

      • If the important issue is the principle, my question is, what does the paper tell on that?

        My impression is that nothing really. There’s no theory on that, only an example, which doesn’t prove anything.

      • Spence raises a number of important issues that deserve more attention from the technically more proficient denizens. I rather think that Fred and Pekka have conservative but well-thought-out viewpoints which tend to be resistant to the new approaches being put forward on this blog.

        With respect, I suggest that more focus be placed on the arguments at hand rather than on repeating views that have already been placed on the table, so to speak. Climate science has not progressed very well to date; maybe it’s time to look at new approaches and to check that the underlying science adds up.

      • Peter,

        One reason for my reaction is that many of the new arguments have not been used to develop knowledge but rather as evidence that better knowledge cannot be reached. They emphasize uncertainties and are being used as evidence of fundamental uncertainty. I fully agree that there’s a lot of uncertainty on many issues related to climate science, but I do not agree that any of these new arguments that I have seen gives such additional evidence on the nature or extent of the uncertainties that would make them really significant.

        To learn about the extent of the uncertainties one must go to the practical cases and study what the evidence can tell. Attempts to prove something by these general arguments have not been successful. People who don’t want to believe what climate scientists tell them like such arguments, without understanding how little they really add to knowledge about the uncertainties in climate science.

        Presenting new constructive ideas is always welcome, but using vague and weak arguments to discredit more specific science is not.

      • Pekka, I am hugely disappointed with this comment:

        One reason for my reaction is that many of the new arguments have not been used to develop knowledge but rather as evidence that better knowledge cannot be reached.

        Neither Demetris nor I have made any such claim, and I do not understand why such a claim would be made (although I accept others may have said such a thing).

        On the other thread I gave an independent example of quantum mechanics vs. classical mechanics. This is a debate that has raged for nearly 100 years and still continues today. Classical mechanics is built on determinism. Quantum mechanics is built on probabilities and uncertainty. Does the uncertainty at the core of quantum mechanics prevent better knowledge being reached?

        The answer is surely no. Quantum mechanics, for example, correctly predicts the spectrum of a hydrogen atom from first principles. Classical mechanics, despite being practical, intuitive and deterministic, cannot make this prediction.

        Why not? Because nature has probability and uncertainty at its very core, not determinism. If you build your theorems on uncertainty, then you create a closer representation of nature than if you build your theorems on determinism. Indeed, determinism is in fact an emergent property of uncertain systems.

        The move to uncertainty is uncomfortable for many. It is not as intuitive as determinism. Even today, people are publishing articles questioning quantum mechanics. Theorems built on uncertainty will inevitably show some things we believe today are simply false – once we get over this hurdle, and take our heads out of the sand of determinism, a whole world of new ideas, and better predictions will result.

        Nobody should be suggesting that Hurst-Kolmogorov dynamics will prevent learning or better knowledge, unless they are unaware of the last 90 years of scientific research.

      • The thing to remember is that Koutsoyiannis likes to write in the form of presentation slides. His is a declarative approach to exposition. I took a look at his presentation (http://itia.ntua.gr/789/ ) and I think I am detecting some sort of convergence in thinking. He mentioned slide 8, and that is exactly the same distribution that I applied to the rainfall rate data.

        Yet, he does not use that distribution to fit to his dataset. Why the heck does he not do this? It is painfully obvious that the fit to the curves with the BesselK has a correlation coefficient of like 0.99999 or higher, and he could get a lot of people on his side if he were to do this.

        The reason he doesn’t is that Hurst-Kolmogorov evidently requires several applications of the stochastic process. What I did was apply only a doubly stochastic process, and found no power law, and it fit the data very, very well. But Koutsoyiannis doesn’t want to stop there and he thinks it should be triply stochastic, and beyond. You see, that is apparently the only way to generate the long power-law tails he wants to be able to see in the data. But I can’t see how a triply stochastic process would work. The double stochastic is adequate as it models the randomness in a localized process at one level and then applies a second level at the geographically or temporally macroscopic level. The only way I can see a third level operational is one could somehow apply planetary or orbital influences. But the Iowa rainfall data did not extend that far in time and space so I am not sure why he didn’t limit it to the two levels alone.

        So when he said that I didn’t do the joint distribution, I think it means that I didn’t extend the stochastic process to more levels. I only did the marginal (?) doubly stochastic model. The triply stochastic model would not have fit the Iowa rainfall data. And I also admit that I didn’t do an autocorrelation model yet.

        So what we have here is a case of someone holding back on making some progress in understanding in the hope that he can prove the larger thesis on the applicability of long-term power laws.

        Bottom line, this is nothing new or fancy. The application of the doubly stochastic model is also known as superstatistics, an idea of Christian Beck. In the end, what I did was nothing more than the equivalent of compounding the distributions of random variables to come up with a simple case of a doubly stochastic model. It is fine that some scientists want to transform that into something they want to call Hurst-Kolmogorov dynamics, but the statistics is still basic statistics, and there is no way around that.

        With that said, why don’t we step back and actually figure out what the autocorrelation function is for this doubly stochastic exponential distribution. That would put a stake in the ground and we can use it as a yardstick for making some progress in understanding stochastic climate change. Koutsoyiannis is on to something but he is having trouble articulating it and getting it accepted, like we all do when dealing with these highly technical topics.

        As usual, I am actually trying to reach out and think in terms of alternate views – because as long as they are not totally crackpot, I believe we can gain some benefit from these views. We just have to be honest and point out where we are holding back some information. That is what open-access blog science is all about. Things are uncertain as it is.

      • Spence,

        My comment on the implication refers to the general use of similar arguments on this site. Not everyone is guilty of that, and those who are guilty sometimes are not guilty every time.

        Concerning the papers of Koutsoyiannis, they have been brought up several times in the past, quite definitely with that kind of reason for bringing them up.

        The issue of quantum mechanics is of a very different nature. There the question is about the best fundamental theory for understanding, not about vaguely formulated approaches that can be made to give reasonable results. Quantum mechanics has been formulated very precisely and there’s nothing vague about it. There remain issues of a philosophical nature concerning its interpretation, but the mathematics is precisely defined, and its success in explaining accurately much that cannot be explained by classical mechanics is beyond doubt.

        It’s true that the early steps in developing QM were more vague and it took time to reach the consistent and precise formulation. I have said where I see potential for improving the theoretical understanding of the practical success of maximum entropy or Hurst exponents. There are certainly also other possibilities for further quality science, but that requires a more rigorous approach to reduce the vagueness of the assumptions and to systematize the field. All that was done for QM and resulted in a very strong theory.

        Here the question is not about finding a new fundamental theory like QM but about understanding why and when statistics leads to certain phenomena under certain conditions. The connection between the phenomena and the conditions is the issue to understand systematically.

      • Web,

        Just skimmed this thread. Inasmuch as I understand it:

        “But I can’t see how a triply stochastic process would work. The double stochastic is adequate as it models the randomness in a localized process at one level and then applies a second level at the geographically or temporally macroscopic level. The only way I can see a third level operational is one could somehow apply planetary or orbital influences. But the Iowa rainfall data did not extend that far in time and space so I am not sure why he didn’t limit it to the two levels alone.”

        What about intermediate-scale processes? How does one know how many scales are necessary? It seems to me the binary choice of “local” vs “macroscopic” is not necessarily true and may apply only to some systems, not others.

      • Pekka, it is curious, when I posted up the example of quantum mechanics yesterday, I added a footnote stressing that I was *not* comparing the theories, but rather comparing the *reaction* to the theories by people who are uncomfortable putting uncertainty at the centre of their analysis. I thought, since we were discussing reactions to the theory above, this would be obvious from our train of thought, but it seems I should have added the footnote on this comment as well.

        But you are in good company. Einstein was not comfortable building theories on uncertainties, either :-)

        I do not understand what you mean by “vague”. The assumptions built into Demetris’ work are simple, clear and unambiguous mathematical declarations. The mathematics are fully formalised. There is nothing “vague” about it. You may disagree with the application of an assumption: but this is not about vagueness. So I think we have a communication problem here with the use of the word “vague”. I think it is the wrong word to use, and it is not clear which word you mean to use.

      • BillC, to form an initial understanding, the math has to be tractable. What I did was make it somewhat tractable by including only two levels. The Monte Carlo sampling algorithm for the BesselK is amazingly simple: one takes the log of a uniform random number between 0 and 1 and then multiplies it by a scale set by the log of another random number between 0 and 1. That is all there is to it, and it fits the data. This becomes a composite stochastic process that describes the overall variable strength of a storm modified by the variability within that storm.
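        That verbal recipe, as I understand it, amounts to the following few lines (my rendering of the description, so treat it as an assumption about what is meant rather than the author's actual code):

```python
import math
import random

random.seed(1)

def doubly_stochastic_draw():
    """One draw of the composite process described above: the product of two
    unit-exponential draws, each obtained as -ln(U) with U ~ Uniform(0,1).
    One factor stands in for within-storm variability, the other for the
    storm-to-storm scale; the product has a BesselK-type distribution."""
    within = -math.log(1.0 - random.random())  # 1-U keeps the argument in (0, 1]
    scale = -math.log(1.0 - random.random())
    return within * scale

samples = [doubly_stochastic_draw() for _ in range(100_000)]
print(f"sample mean = {sum(samples) / len(samples):.3f}")  # theoretical mean is 1.0
```

        Each unit exponential has mean 1, so the product's mean is 1; the heavy right tail relative to a single exponential is the compounding effect being discussed.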

        What Dr.K wants to do is keep on multiplying that out, thereby stretching the exponential until it looks like a power law. Look up the concept of “stretched exponential” to see an alternative explanation.

        To answer your question, certainly I would be interested if other scales exist. Incidentally, the compositing, when applied to a ratio of two processes, immediately leads to a power law. That is a very common occurrence.

      • I have no particular problem with the uncertainty that is inherent in QM – but I do think that its nature is not fully understood even by people who have spent quite a lot of time understanding quantum mechanics.

        I consider an approach that contains essential assumptions vague as an approach when those assumptions are not based on a coherent theory.

      • Pekka, I think this highlights where you are going wrong in your understanding.

        I would not use the word vague (as mentioned, it is the wrong word to use in English – I understand English may not be your first language) for this. What you refer to are perhaps unsupported assumptions, or unnecessary assumptions.

        This also shows you have not fully understood what Demetris is doing. Let’s go back to the Clausius Clapeyron example. The (faulty) equation – used in many climate text books – is a consequence of an unsupported assumption, that of a constant latent heat of vaporisation.

        What we find is that Demetris removes the physically unsupported assumption (or “vague” assumption, in your terminology) of a constant latent heat of vaporisation, and replaces it with an assumption which is better supported (entropy maximisation).

        So by your own definition, Demetris is in fact eliminating unnecessary assumptions – being less “vague” – rather than adding them.
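        For reference, the textbook step being discussed (standard thermodynamics, not a claim about the paper itself): the differential Clausius-Clapeyron equation, and the integrated form that inherits the constant-latent-heat assumption:

```latex
\frac{dp}{dT} = \frac{L\,p}{R_v T^{2}}
\;\xrightarrow{\;L \,\approx\, \text{const}\;}\;
p(T) = p_{0}\,\exp\!\left[\frac{L}{R_v}\left(\frac{1}{T_{0}} - \frac{1}{T}\right)\right]
```

        Since $L$ in fact varies with temperature, the integrated form on the right degrades away from the reference state $(T_0, p_0)$, which is the unsupported assumption at issue above.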

      • Web- I’ll try to take a look this week, maybe starting with the page on you blog that Dr K linked to.

      • Spence – that is very clearly stated, if correct.

        To further clarify: variable latent heat of vaporization with respect to what? A combination of P and T?

        What magnitude of variation with respect to absolute value do we see?

        Please – if you tell me “read the paper”, I will read it hoping the answers are very obviously summarized, or it is explained succinctly why not.

      • Yes, he presents the claim that the integrated equation is commonly misused. I don’t have many climate science books, but more physics books. I found that equation in several books I looked at, but all were very careful in pointing out that its applicability is limited. That includes the one relevant climate science book that I have, the book by Pierrehumbert. Two other climate-related books included the correct equation, but it was used in a way where the inaccurate integral form was neither needed nor presented.

        My preliminary conclusion is that the claim of the paper was unjustified, and was presented to boost the apparent value of the ensuing discussion.

        Similarly, many other points brought up in the paper are not nearly as novel as one might think reading the paper without much further knowledge of the subject matter.
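[A note for readers without the books to hand: the limitation of the constant-L integrated form is easy to exhibit numerically. This is an illustrative sketch with constants from standard steam tables, not a check of any particular book.]

```python
import math

# Integrated Clausius-Clapeyron with L assumed constant:
#   P(T) = P0 * exp( (L/R) * (1/T0 - 1/T) )
# Anchored at the normal boiling point of water and extrapolated
# down to 0 degC, then compared against the steam-table value.
R = 8.314            # J/(mol K)
L = 40660.0          # J/mol, latent heat of vaporisation near 100 degC
T0, P0 = 373.15, 101325.0   # normal boiling point
T = 273.15                  # 0 degC

P_const_L = P0 * math.exp((L / R) * (1.0 / T0 - 1.0 / T))
P_table = 611.0      # Pa, saturation vapour pressure at 0 degC (steam tables)

rel_error = abs(P_const_L - P_table) / P_table
print(f"constant-L prediction: {P_const_L:.0f} Pa, tables: {P_table:.0f} Pa, "
      f"error: {rel_error:.0%}")
```

The constant-L extrapolation overshoots the tabulated value by roughly a third over this range, precisely because L itself varies with temperature.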

      • Pekka, since you have so many books, could you check whether or not they use the equality of chemical potentials in the derivation of the Clausius-Clapeyron equation. Then perhaps you can reread my paper and decide yourself if this equality holds (I demonstrate that it does not hold; am I wrong?).

      • What we find, is that Demetris removes the physically unsupported assumption (or “vague” assumption in your terminology) of a constant, and replaces it with an assumption which is better supported (entropy maximisation).

        One point that I have made is that entropy maximization is not a fully defined method, as it depends on additional conditions, i.e. additional assumptions. The right conditions lead to the differential Clausius-Clapeyron equation, but there’s nothing new in that. When that is extended to something integrable, additional assumptions are introduced. They may be quantitatively better than the simple assumption of constant latent heat, but even so they are assumptions. Thus they are very relevant when we are discussing the importance of the principle rather than the practical value of the quantitative outcome.
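[The point that the principle is only defined once the constraints are chosen can be illustrated with standard closed-form entropies. This is a generic sketch, unrelated to equations (26): among non-negative distributions sharing the same mean, the exponential maximizes differential entropy; impose different constraints and a different law wins.]

```python
import math

# Differential entropies (textbook closed forms) of three non-negative
# distributions constrained to the same mean m. Only the first-moment
# constraint is imposed; under it, the exponential is the maximum-
# entropy choice. Different constraints single out different laws.
m = 1.0
h_exponential = 1.0 + math.log(m)                # h = 1 + ln m
h_uniform     = math.log(2.0 * m)                # uniform on [0, 2m], mean m
sigma         = m * math.sqrt(math.pi / 2.0)     # half-normal with mean m
h_halfnormal  = 0.5 + 0.5 * math.log(math.pi * sigma**2 / 2.0)

print(h_exponential, h_uniform, h_halfnormal)    # exponential is largest
```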

      • Demetris,

        I should check the following more carefully to be sure, but I think that the equality vs. non-equality of the chemical potentials is really a matter of definition, i.e. of setting the scales for the two phases. You modify somehow the definitions to create the difference, which is not there in the standard definition.

      • “You modify somehow the definitions to create the difference”

        Wow!!! Do I??? (Full stop).

      • Demetris,

        You define ξ as the amount of energy to break the bonds and include that in your final result, but in the standard theory that is canceled out in the definition of the chemical potentials of the two phases making them equal in equilibrium. You get a different result, because you define the chemical potentials in a non-standard way.

      • Demetris,

        Again I write this with some reservation, but I do think that you assume much more than you realize when you use your equations (26) as a replacement for full material properties in your derivation. Using the Sackur-Tetrode equations like that is not strictly correct, but it may be a good approximation, and that is the reason for your good quantitative results. I would say that you get your results because of these equations, not from the maximum entropy principle. Having assumed the validity of those equations, there are also other ways of deriving the same results.

        If I’m correct as I do believe that I am, the maximum entropy principle is not really essential at all for the result, which is rather just a direct consequence of the Clausius-Clapeyron equation and the Sackur-Tetrode equations. As I have also written, the maximum entropy principle adds nothing new to the derivation of the Clausius-Clapeyron equation either, although it may be possible to involve it in the derivation.

      • “You modify somehow the definitions to create the difference”

        Wow!!! Do I??? (Full stop).

        Yes, you really do. Your whole approach is somehow backwards, in the sense that you use as input something that you cannot know at that stage, i.e. the formulas (26). That’s not a valid approach. How do you know that the formulas are correct? (They are not, in the way you use them.)

        The equality of chemical potentials (or, using another word, Gibbs free energy) is the fundamental condition for the equilibrium state at a phase transition. Claiming that that equation doesn’t hold is evidence of a serious lack of understanding of thermodynamics.

      • Pekka, I won’t interject at this time in your discussion of Clausius Clapeyron with Demetris as I will probably cause more confusion than light, so I will wait to see if Demetris responds – although as head of a dept, he has a busy schedule.

        Back to the topic of assumptions, I am glad you can recognise that assumptions are not a zero sum game. One benefit of maximising uncertainty is that it allows us to lift many unsupportable assumptions. Indeed, if you are unhappy about including unsupported assumptions, you must be apoplectic about GCMs, which are littered with such assumptions. These are often present out of necessity (limitations of computational power, practicality, the need to simplify things and allow determinism, or perhaps even limits of knowledge).

        A brief story here. Raising HK dynamics at another blog, a commenter pointed me to a post on scienceofdoom. I was pleased about this because I enjoy scienceofdoom’s style, very technical and always willing to discuss openly, and I was under the impression that the author had stopped blogging. I think this was the post. I think the poster was trying to impress me with how complex these models are and how much thought goes into them. Of course, all I saw was unsupported assumption piled upon unsupported assumption… To be fair, the post starts out describing very early models with slab oceans, and the weaknesses of these models are explained in detail. Yet even as the models move on to more complex ocean representations, there are still decisions made in representing these that are forced by practicality: simplifications to retain determinism and tractability of processing. And this is just one tiny corner of a GCM!

        With all of these unsupported assumptions, it comes as no surprise that GCMs fail miserably at reproducing natural variability, although it has been hard work to get climate scientists to acknowledge even this. And it is no surprise to me that by adopting a method which allows us to eliminate these assumptions, Demetris derives a much simpler model which demonstrably captures the characteristics of natural variability more accurately than any GCM.

        A couple of final points. For those who are not so familiar with Dr Koutsoyiannis’ work, it is worth noting that HK dynamics are not the only thing addressed by the work. Indeed, his entropy maximisation work is in fact an uncertainty cookbook with a number of different recipes; these recipes define conditions under which we might expect a range of outcomes from a system, including IID Gaussian, autoregressive behaviour or HK dynamics. In the last article I gave an example from my own area (shot noise in a photon detector) in which the cookbook leads to the correct answer (which is also well known). I like this type of test because it is quick and easy to do, and underscores the predictive capabilities of the methods.

        Lastly, thinking of perhaps a more intuitive view of possible criticisms, could we phrase it something like this? In a complex system, without representing it in detail, we cannot know for sure that there is something internally controlling behaviour, and therefore applying a principle such as maximum entropy may be inappropriate. I suspect it may be possible to construct a counter-example where this is the case. But I think, in general, nature does not behave this way. Furthermore, in a situation where we do not know or have evidence of such a thing, maximising uncertainty must be the most pragmatic way forward.

      • Demetris,

        Your error may technically be related to leaving out the volume per particle for the liquid phase. A very small volume leads to a large negative logarithm, and the logarithm of the ratio of the volumes in the liquid and gaseous phases is probably exactly the term that cancels ξ in your approach.

        In any case, the success of your approach lies in the fact that the equations (26) are a better approximation of the properties of water than a constant L. As you noticed, the constant makes no difference for the result, and therefore that error in equations (26) does not destroy your results.

        You make a better approximation, there’s no doubt about that, but there’s nothing fundamentally important in that and the maximum entropy principle is not essential for the results in a way that would make it preferable to more direct calculations of the phase transition equilibrium states.

      • Spence,

        I believe that I found the exact point where he made the error, as explained in my comment above.

        Making errors is common and as such not very serious. What’s much more serious is to make bold claims on fundamentals on weak grounds. What is also serious is not understanding how and what to pick from sources as input for the analysis.

        As there are similar issues concerning bold claims and questionable argumentation in his other papers as well, I’m not ready to have much faith in anything that he has written, at least when the papers extend beyond his main field (hydrology).

      • Spence, you say “I won’t interject at this time in your discussion of Clausius Clapeyron with Demetris as I will probably cause more confusion than light, so I will wait to see if Demetris responds – although as head of a dept, he has a busy schedule”.

        Please, feel free to interject. If you read above my brief comment again, you will see that I put a full stop. I trust you have seen the paper and you may have noticed that I carefully use the definitions and mathematics, so I do not find any constructive character in participating in an arrogant discussion, in which the rigour of mathematics is replaced with statements of the type “serious lack in understanding thermodynamics” etc.

      • Pekka, my post crossed with yours (I started writing it before your recent ones), and as I stated, my contribution on your most recent comments may cause more confusion than light, so I will wait for Demetris to respond to those. As for your comment:

        What’s much more serious is to make bold claims on fundamentals on weak grounds.

        I once again reiterate the point I made in my previous comment. If you are unhappy with the analysis by Demetris, you must be apoplectic at the unsupported assumptions and overstated claims based on GCMs.

      • Well, it seems this morning I am cursed to cross-post and disrupt the flow of conversation.

        Thank you for your response Demetris. I am not well versed in subtleties and I think your full stop went over my head. I agree that further conversation is likely to become frustrating (hence my view that further discussion would perhaps result in more confusion than light).

        In addition to subtlety, diplomacy and tact are not strong points for me either. I have ended up in one or two heated debates online (perhaps needlessly). Learning when to walk away and how best to do it is a valuable lesson indeed.

      • It is a general truth that a major fraction of all scientific papers are of little real value except that they are needed to keep the scientists working. Valuable results are often created when seemingly less important research leads the scientist to an unexpected observation. If they weren’t doing that other, apparently less important research, they wouldn’t have had the chance of hitting the important one either. Many papers are also written by young scientists who are still learning the craft.

        My view is that this state of affairs is unavoidable, but it also means that nobody should put very much value on any single paper (or any single author) before its value has been confirmed by further development. As climate science is so controversial, there will always be papers that are of little real relevance but which appear superficially to support some views. People who like those conclusions bring them up in net discussions. Most of the really obvious cases are papers picked up by skeptics, and these papers are often really terrible. Koutsoyiannis is not among the authors of terrible papers, but my view is that he is also applying his knowledge to new areas without sufficient understanding of those areas and presenting conclusions that are not really substantiated by the analysis.

        The large computer models are a different problem. People who are really working with these models and publishing results usually have quite a lot of expertise in their field. They don’t make similar errors, but I do agree that there are many serious issues concerning the reliability of the model results and that those difficulties should always be taken into account when the results are used. The open problems are large enough to make estimating the reliability and accuracy of the model results difficult even for the best experts.

      • but my view is that he is also applying his knowledge to new areas without sufficient understanding of these areas

        Interesting projection.

        People, who are really working with these models and publishing results have usually quite a lot of expertize in their field. They don’t make similar errors

        … which brings us full circle back to religion.

      • For the first issue, the one single error alone may be considered proof of my view – perhaps not the making of it, but publishing it claiming effectively that all specialists have erred for 150 years – or not realizing that the claim meant that, while the error was actually clear and obvious. Then we have the additional reasons.

        For the second: there’s certainly more subjective judgment in using and evaluating the GCMs than in much of scientific work, but there are still many independent groups and many competing scientists in the field. It’s false to imagine that those scientists would have formed a collusion to mislead others rather than acting largely as individuals who look for weaknesses in what others have done and compete on whose methods and justifications are best. The principles of scientific work are operational, while the outcome is still far from perfect.

      • Pekka, I do not think you understand the meaning of the word “projection” in this instance. Nevertheless, you have shown without a doubt that your imagination is an astonishing place.

        but publishing it claiming effectively that all specialists have erred for 150 years – or not realizing that the claim meant that, while the error was actually clear and obvious.

        There are so many errors in this statement I am not sure where to begin.

        What has been presented here does not show that “all specialists have erred for 150 years” any more than quantum mechanics showed that all scientists studying classical mechanics had erred for the 250 years before it. Such a claim is a misunderstanding of science so deep that I am not sure if you are serious. Your response is so peculiar that I think only satire can come close to embodying my astonishment.

        It’s false to imagine that those scientists would have formed a collusion to mislead others

        Good grief, are you suggesting we think it is a conspiracy now? Pekka, you have not understood a single thing we have said to you. You are tilting at windmills.

      • The point that I’m discussing is the role of the chemical potential, which Demetris emphasized further in this thread. That may seem trivial to you, but erring on that point like the paper does proves a severe lack of understanding. It’s an issue on which no one who has learned enough thermodynamics to write papers on it can err in that way.

      • Pekka, that is what my projection comment refers to.

      • But I must say that we have discussed the work of a third person too long already.

        Sometimes that is difficult to avoid when somebody wants to use questionable papers to support his views. Arguing against erroneous arguments presented in that way cannot necessarily be done without rude comments on the paper in question. In this case I would prefer to stop here.

      • I think that my reference to thoughts about collusion was mild after your derogatory use of the word “religion”.

      • Indeed, and I think that is why Demetris stopped when he did.

        PS. As yet, nobody has correctly diagnosed an error in the paper to me. Please see Tomas Milanovic’s work on “slaying the sky dragon” for an example of how to go about a critique of a rigorous mathematical analysis.

      • I can repeat the error. If you don’t understand why it’s an explicit error in mathematics, that’s your problem.

        When the point of difference between chemical potentials was emphasized, I realized immediately that the error must lie in the neglect of the large difference between the molecular volumes of the liquid and the gas. Then it didn’t take much reading of the paper to notice that exactly the relevant term had been left out, in the erroneous belief that it is so small it can be neglected.

        Note that I found the error because the result is explicitly against one fundamental equation of thermodynamics, and it was easy to point out exactly where it was made with a basic understanding of thermodynamics.

      • Pekka, read the paper again. It is not the difference that is left out.

      • It is stated as one side of the difference, but that means in practice that it’s the difference.

        This was so obvious because it’s known that the binding energy is compensated by the volume-related term in the equilibrium condition.

        When the paper claimed that there is an extra term, it was obvious what had been left out that would compensate the spurious term.

      • “Please see Tomas Milanovic’s work on “slaying the sky dragon” for an example of how to go about a critique of a rigorous mathematical analysis.”

        That must have been a waste of time. The Sky Dragons argument only deserves ridicule. On the other hand, Koutsoyiannis actually makes people think about looking at the problem from a different perspective. I definitely get something out of them.

      • It is stated as one side of the difference, but that means in practice that it’s the difference.

        What?

        The difference between the volume of the gaseous phase and liquid phase is approximated by assuming the volume of the liquid phase is negligible, so the difference becomes just V sub G. That isn’t the same thing as leaving out the difference at all.

      • WHT, I fully agree! I most certainly am not intending to draw comparisons between the work of Itia and the work of the sky dragons!!!

        My request was that Pekka more clearly sets out the criticisms of the paper so they can be responded to, and I wanted to give an example of how to lay out the criticisms (Tomas was very thorough and careful, irrespective of the quality of the paper). To be fair, Pekka has now been a little clearer and this is helping identify the root of the misunderstanding, I think.

      • Spence,

        One more time.

        According to the well-known and accurately formalized theory of thermodynamics, the chemical potentials are equal at equilibrium in a phase transition. Any claim that it’s not so is contrary to the basic theory. Such a result was obtained in the paper by putting together equations that are not as fundamental, but rather inaccurate. The Sackur-Tetrode equation is accurate for a monatomic ideal gas, not for real substances, and even less for liquids. Even when it’s not accurate, it can lead to useful quantitative approximations, as shown also by the paper in question.

        Let’s consider the phase transition. When a molecule is released from liquid to gas, two things change. First, the energy that is required to break the molecular bonds is lost from the kinetic energy. Secondly, the molecule moves from the liquid, where the molecular volume is small, to the gas, where the molecular volume is large. These two changes both contribute to the Gibbs free energy/chemical potential. At equilibrium these two factors cancel exactly. The paper includes the first change but has thrown away the second by dropping the volume term for the liquid. This description is not totally exact, because the whole formula for the liquid is not really correct, but that doesn’t make the paper any more correct; rather, it makes the error more severe. I pointed out where the error is, not how the approach could be made rigorous, as it cannot be.
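[For readers following the thermodynamics: the equilibrium condition being discussed, equal chemical potentials along the coexistence curve, is what yields the Clausius-Clapeyron slope dP/dT = L / (T ΔV). A quick back-of-envelope check with textbook constants, offered as an illustrative sketch rather than an analysis of the paper:]

```python
# At phase equilibrium the chemical potentials (molar Gibbs energies)
# of the two phases are equal; differentiating that equality along the
# coexistence curve gives Clausius-Clapeyron: dP/dT = L / (T * dV).
# Sketch for water at its normal boiling point, treating the vapour as
# an ideal gas and neglecting the liquid volume (textbook constants).
R = 8.314            # J/(mol K)
L = 40660.0          # J/mol, latent heat near 100 degC
T, P = 373.15, 101325.0
dV = R * T / P       # molar volume of the vapour; liquid volume neglected

dPdT = L / (T * dV)  # Pa/K
print(f"dP/dT = {dPdT:.0f} Pa/K")  # steam tables give roughly 3.6e3 Pa/K here
```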

      • No, you have misunderstood the mathematics in the paper. The change in entropy due to the phase transition is accounted for.

        Yes, the term for the volume of the liquid phase is not included. But the term for the volume of the gaseous phase is included. Since the change in the former is trivial, and the change in the latter is large, this is a reasonable approximation.

        So the entropy due to the change in volume is accounted for. The assumption and basis is clearly stated in the paper.

        It is an approximation, so perhaps we could be wary of it (a numerical check should be easy to conduct), but I see no subsequent operations that might cause problems (e.g. further differencing, or exponentiation). The differentiation to obtain the maxima needs care, but the rates associated with VG are also large in comparison to changes in VL, so to first order it does not concern me.

        Since we can be satisfied that the consequence of this would be some error term, and we can see the result yields a good match to observations, this suggests to me that the error term remains small. If someone *really* wanted to show there was a problem, it would be easy to do so with numerical analysis of worked examples to show how the error due to the approximation propagates. But such an analysis would not yield the answer you expect.

        In summary: you have misunderstood the paper. Read it again.
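[On the numerical-check suggestion: the size of the VL ≪ VG approximation is easy to bound with rough textbook numbers. An illustrative sketch, not an analysis of the paper's equations:]

```python
# Size of the V_L << V_G approximation for water at the normal boiling
# point (rough textbook numbers). The liquid molar volume is a fraction
# of a percent of the vapour molar volume, so replacing (V_G - V_L)
# by V_G perturbs the Clausius-Clapeyron slope by that same fraction.
R = 8.314
T, P = 373.15, 101325.0
V_G = R * T / P          # ~0.031 m^3/mol, ideal-gas vapour
V_L = 18.0e-6            # ~18 cm^3/mol, liquid water

ratio = V_L / V_G
print(f"V_L / V_G = {ratio:.2e}")   # about 6e-4, i.e. ~0.06 %
```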

      • Spence,

        Learn some thermodynamics.

        The smallness of the molecular volume in the liquid is important to the derivation, as it’s directly related to the size of the binding energy. They go together, and including one while excluding the other is fundamentally wrong. That second term is explicitly excluded in the formulas, and that’s an error.

      • So I suggest we do some numerical analysis to resolve the differences we are talking about here and your response is “learn some thermodynamics”?

        We can safely conclude you are lazy as well as arrogant then.

        Never mind, Pekka. You continue complaining about models which match observations, while supporting models that disagree with observations, and I will continue to do science.

      • Spence,

        What’s your point concerning numerical analysis?

        I have made it clear from the beginning that I have no objections to the numerical results presented in the paper. The whole controversy is about two issues:

        1) Is the paper correct when it claims that the chemical potentials differ, or is that claim fundamentally wrong and the outcome of an explicit error in the derivation?

        2) Is the numerical accuracy of the results due to the maximum entropy principle, or has it been introduced by additional assumptions made in the paper, specifically by the formulas (26)?

        I agree with the author that the first point doesn’t affect the numerical results, thus there’s nothing to check there for this reason.

        The two points are relevant for the scientific value of the paper, and from one of his messages I conclude that Demetris also considered point 1) important in that way, while the importance of point 2) might be emphasized by many.

        The second point is similar to what occurs with the maximum entropy principle very often. It’s a useful tool in practice and has produced good numerical results in very many cases. It’s, however, seldom used with full care in considering rigorously what else has been assumed in the calculations. Therefore it’s very common that the real reason for the success (the principle, or the other input) is not known. A good tool may be a good tool even when it is used in the absence of a rigorous basis, but one should be more careful when fundamental value is attributed to the tool.

      • For the sake of the record I add that my previous comments on the explicit error in the paper on the Clausius-Clapeyron equation were incomplete.

        To complete the point, it’s necessary to add that the paper uses a non-standard definition of the chemical potential. The chemical potential is not defined through the partial derivative of entropy, as the paper claims, but as a partial derivative of U, H, F or G. Which of the four is to be taken depends on the choice of the other variables to be held constant. For some situations the formula of the paper is correct, but only for some conditions. Combining this error with the error introduced in equations (26) fully explains the fundamental error that has led to the result of differing chemical potentials.

        To start with, it’s really nonsensical to begin from approximate (and actually erroneous) equations picked without further justification from a textbook. The Sackur-Tetrode equation is well known and understood for the ideal gas, but not for real gases, and even less for liquids. As I don’t have that particular textbook, I cannot check what led Koutsoyiannis to think that the formulas would be correct as given. Whatever the reason, it’s totally false to start from such formulas and use them to “derive” something much more fundamental.
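[For context on where Sackur-Tetrode does hold: a quick check with CODATA constants against the measured third-law standard entropy of argon (~154.8 J/(mol K), a standard reference value) shows the equation reproducing the monatomic ideal-gas case to within a fraction of a percent, which is exactly the regime it was derived for:]

```python
import math

# Sackur-Tetrode entropy for a monatomic ideal gas:
#   S/R = ln[ (V/N) * (2*pi*m*k*T / h^2)^(3/2) ] + 5/2
# Checked for argon at 298.15 K and 1 atm against the measured
# (third-law) standard molar entropy, ~154.8 J/(mol K).
k  = 1.380649e-23        # Boltzmann constant, J/K
h  = 6.62607015e-34      # Planck constant, J s
NA = 6.02214076e23       # Avogadro constant, 1/mol
R  = k * NA

m = 39.948e-3 / NA       # kg per argon atom
T, P = 298.15, 101325.0
V_per_N = k * T / P      # volume per particle (ideal gas)

S = R * (math.log(V_per_N * (2 * math.pi * m * k * T / h**2) ** 1.5) + 2.5)
print(f"S = {S:.1f} J/(mol K)")   # ~154.7, vs ~154.8 measured
```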

      • Pekka, I cannot contribute to a discussion whose tone is what it was, so I preferred to remain silent. But I have been interested in the essence of your critique. I have checked what you identify as errors in my paper and I do not think these are errors. However, you are right that the chemical potentials should be equal, and I spotted my error, which was in a partial derivative. I prepared a corrigendum, which I submitted to the journal and also posted on my web site (alongside the paper). You may see that I acknowledge the discussion in the blog.

    • Spence_UK, Fred Moolten and Pekka Pirilä
      Null Hypothesis in Scafetta (2011)
      You variously note:

      I think we have an excellent handle on natural variability, which includes both natural forcings (solar and volcanic), and internal climate modes.

      But to detect such a trend, you need to reject the null hypothesis of natural variability, and in order to do this, you need to know the structure of natural variability.

      A proper discussion of null hypothesis is possible only in connection to a precisely formulated question,

      To reconcile these statements, may I recommend evaluating the evidence and null hypothesis compiled by Nicola Scafetta (2011) and his evaluation of anthropogenic warming:
      Nicola Scafetta, “Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models” Journal of Atmospheric and Solar-Terrestrial Physics, in press. DOI: 10.1016/j.jastp.2011.12.005. PDF

      We show that the GCMs fail to reproduce the major decadal and multidecadal oscillations found in the global surface temperature record from 1850 to 2011. On the contrary, the proposed harmonic model (which herein uses cycles with 9.1, 10–10.5, 20–21, 60–62 year periods) is found to well reconstruct the observed climate oscillations from 1850 to 2011, and it is shown to be able to forecast the climate oscillations from 1950 to 2011 using the data covering the period 1850–1950, and vice versa.

      Consider my initial null hypothesis:

      Global surface temperature from 1850 to 2011 is composed of harmonic superposition of natural cycles with 9.1, 10-10.5, 20-21, 60-62 year periods.

      To this I see Scafetta adding an anthropogenic contribution:

      “the same IPCC projected anthropogenic emissions would imply a global warming by about 0.3–1.2 °C by 2100”

      In this case, how would you formulate or modify this null hypothesis?
      What statistics are needed to falsify it to justify the anthropogenic warming?
      On what basis would you distinguish and validate/disprove the Scafetta vs IPCC models?
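[As background on the fitting step: with the periods fixed a priori, a harmonic model of this type reduces to ordinary linear least squares on sine/cosine pairs. A toy sketch on synthetic data – not the real temperature record; the amplitude and noise level here are made up for illustration:]

```python
import numpy as np

# With the periods fixed in advance, fitting a harmonic model of the
# Scafetta type is linear least squares on sin/cos pairs. Here a known
# 60-year cycle plus noise is generated, then recovered by regression.
rng = np.random.default_rng(42)
t = np.arange(1850, 2012, dtype=float)          # one value per year
true_amp, period = 0.2, 60.0
y = true_amp * np.sin(2 * np.pi * t / period + 0.5) + rng.normal(0, 0.05, t.size)

periods = [9.1, 20.5, 60.0]                     # fixed a priori
cols = [np.ones_like(t)]
for p in periods:
    cols += [np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)]
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
amp60 = np.hypot(beta[5], beta[6])              # amplitude of the 60-yr pair
print(f"fitted 60-yr amplitude: {amp60:.3f}")   # close to 0.2
```

Note that this only shows the mechanics; it says nothing about whether fixed astronomical periods are the right basis for the climate record, which is exactly the point in dispute.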

      • David – The problems with Scafetta’s theories have been thoroughly explored in previous posts – please use the search box for “cyclomania” and also review the earlier thread on Scafetta/Loehle. I don’t want to recapitulate all of that discussion, but it’s worth pointing out that fitting cycles to phenomena is always possible if you’re not required to specify the cycle length in advance but must look for one in the historical data that happens to correlate with the phenomena you’re studying. The problem gets worse if you are allowed to relax the constraints so that a 60 year cycle, for example, can be 57 or 64 years. This is one of many reasons why Scafetta doesn’t appear very convincing. Another is the absence of a clearly plausible and demonstrable mechanism. Let me cite an example. Scafetta proposes that his cycles explain the flat temperature between the late 1940s and late 1970s, but a mechanism is needed. We know from Martin Wild’s work that anthropogenic aerosol cooling appears to be a good explanation, consonant with the industrial aerosols that increased following WW2. Wild shows that this effect is seen in the reduced solar radiation reaching the surface in the absence of a reduction in solar intensity, and this reduction can’t simply be explained by cloud changes because it’s observed under cloud free conditions. Aerosols can interfere with the passage of sunlight from space to Earth, but I don’t see an astronomical mechanism as plausibly doing the same. In any case, that’s only one example.

        To some extent, most of this is not particularly relevant. That’s because the dominant anthropogenic ghg contribution to post-1950 warming is not something we conclude by default. It’s not a case whereby anthropogenic warming is what is left over after everything natural is accounted for. Rather, our detailed knowledge of the radiative and atmospheric behavior of CO2 and other ghgs tells us that these ghgs must inevitably exert strong warming effects if their concentration rises significantly. This comment is not the place to revisit the issue of climate sensitivity and feedback magnitude, so it suffices merely to say that ghg warming is inevitable and not put a number on it. On the other hand, anyone who would claim that if the range of possible natural variability could theoretically extend to 100% of observed warming, CO2-mediated contributions might be trivial or absent is making a claim that is demonstrably false by what we know about CO2. That’s a theoretical blind alley. It doesn’t mean that we shouldn’t learn as much as we can about natural variability (although I doubt much is related to astronomical events of interest to Scafetta), but it does mean that on the timescales of particular interest to us – multidecadal and centennial – we need to acknowledge a predominant role for anthropogenic forcing from changing ghg concentrations. The situation may be different on much shorter and possibly on much longer timescales.

      • Markus Fitzhenry.

        “This comment is not the place to revisit the issue of climate sensitivity and feedback magnitude, so it suffices merely to say that ghg warming is inevitable and not put a number on it. Rather, our detailed knowledge of the radiative and atmospheric behavior of CO2 and other ghgs tells us that these ghgs must inevitably exert strong warming effects if their concentration rises significantly.”

        You’re wrong, Fred: atmospheric gases only dissipate heat energy and have no strong warming effect. Atmospheric gases are the mechanism that slowly cools the ocean surface temperature. Your logic is argumentum ad populum. You accept the premise as being correct and do not consider a contrarian view, of which there are many.

        “We know from Martin Wild’s work that anthropogenic aerosol cooling appears to be a good explanation, consonant with the industrial aerosols that increased following WW2”

        Any bloody excuse except that CO2 forcing is wrong. Hopeless. Go back and learn the Philosophy of Science before you attempt to apply it.

      • Cyclomania is a fitting term (if you know what I mean).

        In statistics, indices such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are often used to judge how well a model fits a set of data. Both criteria penalize the number of model parameters: each additional parameter worsens the score. Garden-variety cyclomania has so many parameters that it would sink the AIC and BIC scores.

        I should point out again the empirical rainfall results that Dr Koutsoyiannis modelled in a recent research article of his. As a model comparison, I took the average rainfall rate as the only adjustable parameter and was able to fit the rainfall distribution to a correlation coefficient better than 0.9999. And I didn’t even actually have to adjust that parameter, because I used it directly as Koutsoyiannis reported it in the article.

        Now, the way AIC and BIC work is that another model has to achieve a better criterion value than the currently favored model to beat it in a statistical sense. I would like to see more people use these criteria to evaluate modeling results. When a model has a much better information-criterion score than any other competing model, you have to sit up and take notice.
        http://theoilconundrum.blogspot.com/2012/02/rainfall-variability-solved.html
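
        The penalty-versus-fit tradeoff described above can be sketched numerically. The following is my own illustrative example, not anything from the rainfall work: the least-squares forms of AIC and BIC, the synthetic straight-line data, and the polynomial candidates are all assumptions made for the demonstration.

```python
import numpy as np

def aic(n, rss, k):
    # Least-squares AIC (up to an additive constant): n*ln(RSS/n) + 2k
    return n * np.log(rss / n) + 2 * k

def bic(n, rss, k):
    # BIC charges k*ln(n), a heavier parameter penalty than AIC for large n
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(42)
n = 200
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.3, n)   # truth: a straight line plus noise

scores = {}
for k in (2, 6, 12):                          # polynomials with k coefficients
    coeffs = np.polyfit(x, y, k - 1)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    scores[k] = (aic(n, rss, k), bic(n, rss, k))

# Lower is better: the extra parameters of the k=6 and k=12 fits barely
# reduce the residual sum of squares, so the penalty terms dominate.
```

        On data like this the 2-parameter line wins on BIC even though the 12-parameter polynomial attains a smaller residual error, which is exactly the sense in which a heavily parameterized cyclomania fit would sink its information-criterion score.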

      • Another interesting point that Koutsoyiannis brings up is that occasionally a distribution of some disordered parameter is not maximum entropy. For example the rainfall distribution that I referred to is not maximum entropy, given the only constraint of mean rainfall. Yet by deriving a composite of two distributions, one for the local variations and one for the collections of variations, where each of these is maximum entropy, then we get the resultant model. This isn’t MaxEnt but it derives from the pair of MaxEnt marginals.

        What is controversial is that Koutsoyiannis and others suggest that another entropy definition, in particular Tsallis entropy, is needed to correct this deficiency. I am not sure about this, as I think it all has to do with the maximum entropy distribution deriving from the notion of Boltzmann and Gibbs energy distributions. So what happens is that the distribution parameters always have to reduce to energy considerations for the principle to take hold. If you take a different marginalization of the probabilities you are no longer looking at it from the perspective of energy. The Tsallis entropy may be a crutch to allow power laws to stand in for the physically required Gibbs view.
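
        The two-layer construction described above can be illustrated generically. This is my own superstatistics-style sketch with made-up parameters, not the derivation in the papers under discussion: an exponential conditional distribution (MaxEnt for a positive variable with fixed mean), compounded over a gamma-distributed rate (itself MaxEnt under fixed-mean and fixed-mean-log constraints), yields a Lomax (Pareto II) marginal with a power-law tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Layer 1: conditional on a rate lam, intensity is exponential
# (the MaxEnt distribution for a positive variable with fixed mean 1/lam).
# Layer 2: lam itself varies between events, here gamma-distributed
# (MaxEnt for a positive variable with fixed mean and fixed mean log).
lam = rng.gamma(shape=3.0, scale=0.5, size=n)
x = rng.exponential(1.0 / lam)

# The compound marginal is Lomax (Pareto II): a power-law tail, far heavier
# than any single exponential with the same mean.
x_exp = rng.exponential(x.mean(), size=n)
tail = 10.0 * x.mean()
p_compound = np.mean(x > tail)
p_exponential = np.mean(x_exp > tail)
```

        Neither layer alone produces a power law, yet the composite does, which is the sense in which a pair of classical MaxEnt marginals can reproduce results otherwise attributed to a generalized entropy definition.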

      • Web, you say “What is controversial is that Koutsoyiannis and others suggest that another entropy definition, in particular Tsallis entropy, is needed to correct this deficiency.”

        This comment helps me to clarify that in the framework of the first 15 slides of our 2007 presentation, which I mentioned before, we use the classical, Boltzmann-Gibbs-Shannon, entropy definition. So we derive everything based on the classical definition. We then also discuss the Tsallis generalized entropy definition as an alternative way to derive similar results in one optimization step.

        Since then, we have made progress and we do not use the Tsallis definition at all. You may take a look at a recent paper by Papalexiou and Koutsoyiannis (2012), http://itia.ntua.gr/1182/ . The same results can be obtained by justifiably generalized constraints rather than generalized definitions.

        A final comment: When you talk about these two papers, please do not say “Koutsoyiannis says …” etc. You may notice that Simon Papalexiou is the first author in both papers and the first paper to which you refer includes Alberto Montanari as a coauthor. It is unfair to refer to a paper by the name of the second author, i.e. myself. In this case it is even more unfair, because the idea that the Tsallis entropy definition is not needed is Simon’s, who is my colleague and PhD student. Also, the name Tsallis entropy is not very fair, because Havrda and Charvát preceded Tsallis by a couple of decades in giving this definition (albeit independently – and Tsallis has recognized this, I think). In the last paper we use the term Havrda–Charvát–Tsallis entropy for this.

      • OK, I stand corrected on the nitpicking concerns, but I see no response on the bigger picture.

  14. Markus Fitzhenry.

    Be true to yourself, is all he asks; you’re his son.

    He is starting to become annoyed about the damage being caused to his children by the invalid AGW theory.

    He has been casting the net lately to gather in all his good fish, those that try and avoid his net will be condemned.

  15. Methinks you’re going to need another post to keep this one from getting cluttered, Judith:

    http://www.huffingtonpost.com/peter-h-gleick/-the-origin-of-the-heartl_b_1289669.html

    He admits to forwarding the Heartland documents, but not creating the “fake.” I’m sure that most “skeptics” will take him at his word on this one.

    Heh.

    • Markus Fitzhenry.

      “In an effort to do so, and in a serious lapse of my own and professional judgment and ethics, I solicited and received additional materials directly from the Heartland Institute under someone else’s name.”

      He has admitted fraudulent activity, Joshua. Most sensible people will be fundamentally disgusted that Peter thinks his ideology is more important than God’s.

    • Joshua –

      Sorry to burst your bubble, but I’d bet that a large number of skeptics, myself included, would be willing to take him at his word. Hell, confirmation bias can happen to the best of us.

      Besides, taking him at his word includes his admission of (at a minimum) misrepresentation, not to mention falling for a reasonably obvious fake. Promulgating the fake to others just makes his actions worse.

      So yes, I’ll happily take him at his word – but I’ll also expect him to do his duty as a citizen and cooperate with Heartland to catch the author of the fake memo….

      (To all, sorry for continuing the OT, but I got caught by the smarmy ‘heh’.)

    • “Innocent until proven guilty”, they say.

      Let’s see how this plays out.

      Once he divulges the name of his co-conspirator, i.e. the “creator” of the “fake document” he “leaked”, we’ll know more.

      Max

  16. The great thing about the Bible is that even if you are a non-believer (me), there is always something practical to be found there. Human arrogance? Who can doubt it!

    Is there, was there, will there ever be a more profound arrogance than “the science is settled”?

    • Perfect! Nothing more need be said.

    • Outstanding sentence and question

      This is actually a very profound response, and yes, if you check out the story in Genesis it makes sense that the building of the Tower of Babel was showing the arrogance of man.

      What is missing though, is that in the Acts of the Apostles, this lack of understanding was reversed :)

  17. What is most important, however, is that this way of reasoning is rooted in the fallacy that climate can, in principle, be described in deterministic terms

    Is saying that the climate will maintain a relatively constant temperature, no matter how much ACO2 is added to the atmosphere, describing the climate in deterministic terms?

    Today, we are told we are all sinners (because we exhale carbon dioxide, give birth to children who do the same, and we also drive cars), and thus we will burn in hell as the Earth warms up.

    Ok – driving cars and thus the Earth heating up – I’ve heard people say essentially that.

    But “sinners..(because we exhale carbon dioxide, give birth to children who do the same…..)” and “we will burn in hell….”

    Yeah – poetic license and all that, but is this kind of hyperbole really necessary? Does anyone think that it contributes in any positive way to a reasoned debate?

    Really?

    • Joshua
      Re: “saying that the climate will maintain a relatively constant temperature”
      They do not say that. Reread.
      Re: “Does anyone think that it contributes in any positive way to a reasoned debate?”
      Yes!
      It is important to expose and examine the religious nature of the debate and the use of ad hominem and rhetorical methods that are directly contrary to sound science. Consider the very use of “climate change” as an equivocation for “catastrophic anthropogenic global warming”!
      Foundational, and implicit, is the worship of nature and the attempt to maintain a specific temperature or a very narrow temperature range – when climate naturally changes by far greater amounts – and there is no sound basis for preferring one temperature over another, other than pride and fear.

      • David –

        They do not say that. Reread.

        I’ll point it out to you in the future when/if such posts appear here at Climate Etc.

        It is important to expose and examine the religious nature of the debate

        Do you have a link to someone saying that people are sinners for breathing or giving birth to children who breathe, or that we’ll “burn in hell as the Earth warms up?”

        That is, other than the author of this article in his straw man argument?

      • [Do you have a link to someone saying] that we’ll “burn in hell as the Earth warms up?”

        These guys come kinda sorta close to it.

      • Joshua
        See the range of temperatures they show in their Figs 5, 6 and 7, and the related discussion. Earth’s climate naturally varies across at least 10 degrees, and we are supposed to get alarmed over a change of 2 degrees? – especially when there are major benefits to agriculture from higher CO2 and we will have many more mouths to feed?

        Re: “sinners” . . .”“burn in hell as the Earth warms up?”
        That is their accurate paraphrase of the catastrophic anthropogenic global warming argument, using religious language with the correct meaning for the words.

      • Christine starts off by telling us (citing Sagan and Hawkins):

        A CO2 concentration below 350 ppm in the atmosphere is what leading scientists agree is safe for humanity. We’re above 390, and climbing. I’m a mother, an educator, and a former registered nurse, concerned about climate change

        Need we read more?

        When it starts off with “balderdash” (a term that’s been used here to describe “bad science”), it will probably continue as such.

        Max

      • Christine isn’t citing Sagan & Hawkins there, she is citing every National Academy of Science of every major country in the world.
        Yet here in this little echo chamber the arguments continue, and over and over the mantra is repeated that the world’s scientists are wrong, and that humans spewing over 90 million tons of heat-retaining pollution into the atmosphere every day could not possibly be the problem with the increasingly weird and extreme weather events around the world.
        And since you folks seem fond of quoting the bible, here’s one more for you: “The nations were angry, but the time for your wrath has come. It is time for the dead to be judged- to reward your servants, the prophets, the saints, and all who fear your name, both unimportant and important, and to DESTROY THOSE WHO DESTROY THE EARTH.” rev 11:18

        Christine, that’s very relevant. Here is the full quotation, the original Greek and its translation into English from http://www.sacred-texts.com/bib/poly/rev011.htm

        Εὐχαριστοῦμεν σοι, κύριε ὁ θεὸς ὁ παντοκράτωρ,
        ὁ ὢν καὶ ὁ ἦν
        ὅτι εἴληφας τὴν δύναμιν σου τὴν μεγάλην
        καὶ ἐβασίλευσας.
        καὶ τὰ ἔθνη ὠργίσθησαν,
        καὶ ἦλθεν ἡ ὀργή σου
        καὶ ὁ καιρὸς τῶν νεκρῶν κριθῆναι
        καὶ δοῦναι τὸν μισθὸν τοῖς δούλοις σου τοῖς προφήταις
        καὶ τοῖς ἁγίοις καὶ τοῖς φοβουμένοις τὸ ὄνομα σου,
        τοὺς μικροὺς καὶ τοὺς μεγάλους
        καὶ διαφθεῖραι τοὺς διαφθείροντας τὴν γῆν.

        We give thee thanks, O Lord God Almighty,
        which art, and wast, and art to come;
        because thou hast taken to thee thy great power,
        and hast reigned.
        And the nations were angry,
        and thy wrath is come,
        and the time of the dead, that they should be judged,
        and that thou shouldest give reward unto thy servants the prophets,
        and to the saints, and them that fear thy name, small and great;
        and shouldest destroy them which destroy the earth.

        That is, the Revelation says it is God’s job to judge those who should be judged and destroy those who should be destroyed. I guess to expropriate God’s job is more than arrogance, it is hubris.

      Yes, really. OK, so it’s from The Onion (“America’s Finest News Source”) – a satirical publication, for those not familiar with it – but like all good satire it has an annoying grain of truth:

      http://www.theonion.com/articles/scientists-look-onethird-of-the-human-race-has-to,27166/

      In Chemical and Engineering News a year or so ago the editor, a true believer of CAGW, had on his Editor’s Page “Too Many People”.

      This is not so much hyperbole as you might think….

      • Peter,

        In Chemical and Engineering News a year or so ago the editor, a true believer of CAGW

        You may have noticed that the same editor wrote an editorial wringing his hands about “limitless growth”…about the same time his organization was crowing about its record-setting membership numbers.

      • Peter –

        The author makes it sound as if this type of rhetoric is typical:

        In the middle ages, people were told they were all sinners, and thus they would burn in hell. However, they could buy indulgence: they could pay an amount of money and get a piece of paper which certified that their sins were forgiven. Today, we are told we are all sinners (because we exhale carbon dioxide, give birth to children who do the same, and we also drive cars), and thus we will burn in hell as the Earth warms up

        The comparison is made to the ubiquitous message that “people were told that they were all sinners” of the middle ages.

        Just as bad, IMO, as creating such straw man hyperbole would be defending it as accurate. We all slip into hyperbole at times. But when we’re called on it, we should own up to it.

      • Pete Bonk

        Love the Onion.

        The rhetorical question is asked:

        Scientists: ‘Look, One-Third Of The Human Race Has To Die For Civilization To Be Sustainable, So How Do We Want To Do This?’

        Answer: The “scientists” who tell us this should be the first to volunteer themselves and their families to be sacrificed to help save the rest of “the human race”.

        It’s the least they can do. Logical, no?

        Max

    • Joshua, you need a little bit of self-examination to actually come to grips with why the story of the Tower of Babel is a great example of the arrogance of climate scientists.

      Since I am a “believer in God”, I will address you in those terms because I really do think that you have completely missed the point.

      First of all, because of man’s fallen nature we will always be sinners because we constantly make decisions that could be construed as evil. Now when it comes to the religion of Man Made Globull Warming, it is quite easy to see where adherents believe that others have “sinned”. Here are some hints:
      oil = evil
      coal = evil
      solar power = good
      driving a puny car that is hybrid = good
      Do you get the picture? It is about our actions, and it is about the way that people try to salve their consciences.

  18. OFF TOPIC: Peter Gleick admits it. Go to Dot Earth.

  19. For further evidence exposing the nakedness of the Emperor’s new CAGW clothes, see Demetris Koutsoyiannis et al. at ITIA and their detailed quantitative work on climate, Hurst-Kolmogorov dynamics, and persistence.

    For example:
    D. Koutsoyiannis, A. Efstratiadis, N. Mamassis & A. Christofides “On the credibility of climate predictions” Hydrological Sciences–Journal–des Sciences Hydrologiques, 53 (2008).

    “… precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.”

    This is discussed at ClimateAudit I, and at II, e.g., Pat Frank observes:

    “In essence, they found that climate models have no predictive value.”

    In another example, in contrast to IPCC’s > 90% confidence, Koutsoyiannis et al. show that:

    The Hurst-Kolmogorov dynamics, also known as long-term persistence, has been detected in paleo-climate reconstructions, dating back to 3,000 ky.
    * Only a portion (36-46%) of natural variance can be described by the orbital forcing. . .
    * The residual time series, describing the 54-64% of natural variations, can be described as an HK (Hurst-Kolmogorov) process. This is not white noise.
    * The decline of the mean temperature during the last 3,000 ky could be explained as an intrinsic characteristic of the HK process and not a deterministic trend.

    Orbital climate theory and Hurst-Kolmogorov dynamics 2010
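
    As a crude numerical illustration of long-term persistence (my own toy construction, not the HK analysis in the cited papers), a superposition of AR(1) components spanning a wide range of timescales mimics an HK-like process: the variance of 100-sample averages barely drops, whereas for white noise it falls by the classical factor of 100.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

def ar1(phi, n, rng):
    """AR(1) series: x[t] = phi * x[t-1] + e[t], unit-variance innovations."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# Toy stand-in for an HK process: superpose AR(1) components with a wide
# range of timescales, so the autocorrelation decays very slowly.
hk = sum(ar1(phi, N, rng) for phi in (0.5, 0.9, 0.99, 0.999))
wn = rng.normal(size=N)                      # white noise for comparison

def var_of_block_means(x, m):
    """Variance of the means of non-overlapping blocks of length m."""
    nb = len(x) // m
    return x[: nb * m].reshape(nb, m).mean(axis=1).var()

m = 100
ratio_hk = var_of_block_means(hk, m) / hk.var()
ratio_wn = var_of_block_means(wn, m) / wn.var()
# White noise: ratio ~ 1/m.  HK-like persistence: the ratio stays far
# larger, so averaging does little to reduce uncertainty.
```

    Under classical statistics the uncertainty of a 30-year average shrinks rapidly with averaging; under HK-like persistence it barely shrinks at all, which is why treating such records with white-noise error bars overstates confidence.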

    Similarly they find:

    The performance of the GCMs, quantified by the correlation and efficiency of coefficients*, is poor. . . It is obvious from the quantitative results that the climate models cannot reproduce the historical time sequence of events. . . .
    The quantitative results depict the inability of climate models to reproduce the actual (historical) temporal variation of rainfall and temperature, and in particular, the occurrence of extreme events.
    • In temperature they reproduce the seasonality and statistical characteristics of maxima, thus behaving like typical random number generators.
    • In rainfall they do not reproduce seasonality neither the statistical behaviour of daily maxima.
    • More specifically, in rainfall extremes, GCMs consistently err by up to an order of magnitude. A systematic overestimation of the rainfall frequency is observed, along with a severe underestimation of the rainfall intensity in all studied locations.

    Statistical comparison of observed temperature and rainfall extremes with climate model outputs

    Caution: The Emperor does not take kindly to being publicly told that he is exposed. D. Koutsoyiannis observes that one paper:

    “was rejected outright (in chronological order) by Nature, Science, Nature Geoscience, Nature Physics . . . and twice by Geophysical Research Letters . . . because of a repeatedly strong negative reviewer.”

    Consequently, being an expert in rejections, he advises “younger colleagues not to strive too much to publish their papers, particularly the good ones, in the journals with the highest impact factors.”

    It is sad that such great promises and arrogance have to be so explicitly exposed. It would be far better for editors and reviewers to ask the blunt questions up front rather than flowing with the lemming crowd and falling so far.

    • David – I think it was about a year ago that I discussed the 2008 paper in some detail. I’m not prepared to repeat that now, but my reaction then was similar to my comment above – Koutsoyiannis, who has interesting things to say, would be taken more seriously by journals and the scientific public if he didn’t seriously overinterpret his findings to draw conclusions unwarranted by the data. We see some evidence for that in the excerpts quoted by Dr. Curry in her post, and that seems typical of what I’ve observed in the past.

  20. I think our son had the first shot at the “indulgences” analogy. He was an AF officer for 9 years. One of his missions was commanding the ground support mission for the C17 flights that open Antarctica every summer. He rode with Ann Curry and the Today show crew on the NZ – McMurdo leg. The Today crew was headed to Amundsen Station. Given the 17 hour time change jet lag, and our son being my son for sure, he decided to jerk with the Today show producer some. He asked him if he had any idea how much fuel it was costing to send them all down there and if he understood the carbon implications. The producer said, “We thought of buying credits.” Our son said, “What are those?” and let the producer unwind the whole tale – which he already fully understood. Having his BS in engineering and his MA in Theology, he responded to the effect: “Oh yes. They’ve had that practice for centuries but didn’t they used to call it indulgences?” The producer was a very good sport…after the boy bought him a beer.

      • Just a fun story. Sorry you didn’t find it so.

      • While I enjoyed the indulgences story, I’m prompted to ask where the indulgences go in the respective cases of catholic and CO2 indulgences.

        If they both go into the pockets of those who dreamed up the scam then I buy the analogy.

        However if any of the CO2 indulgences are used to offset the harm created by CO2, then I don’t find the analogy so convincing.

        My apologies if I inappropriately inserted a serious note into what was only intended as humor, cynicism, or satire.

      • I just did that because Jeff obviously enjoyed relating a story where his boy jerked someone’s chain, so I duly reciprocated.

  21. It is common amongst non-scientific folk to attribute causes to effects in their lives based on insufficient data. Most people I know are continually making assertions that “this caused that” in their lives in a totally unscientific way, with almost no corroborating data to support the assertion. Most of the systems that govern our lives are complex, and it is impossible to separate out one factor as a cause (e.g. “Taking vitamin pills gave me good health”).
    Some complex systems are exceedingly sensitive to initial conditions, and it is technically impossible to pin down the initial conditions precisely enough to make deterministic predictions of the outcome in the future. The system appears to behave chaotically, as if it changes in ways that are unpredictable and erratic. Yet random events may appear to have a pattern – for a limited time period. For example, if one tosses a coin five times, there is a 1-in-32 chance of getting five heads in a row. If, while you are tossing the coin, you stand on one foot, you might think that standing on one foot caused the coin toss to be heads. The book Fooled by Randomness by Nassim Nicholas Taleb is:

    “… about luck disguised and perceived as non-luck (that is, skills) and, more generally, randomness disguised and perceived as non-randomness (that is, determinism). It manifests itself in the shape of the lucky fool, defined as a person who benefited from a disproportionate share of luck but attributes his success to some other, generally very precise, reason. Such confusion crops up in the most unexpected areas, even science, though not in such an accentuated and obvious manner as it does in the world of business. It is endemic in politics, as it can be encountered in the shape of a country’s president discoursing on the jobs that ‘he’ created, ‘his’ recovery, and ‘his predecessor’s’ inflation. We are genetically still very close to our ancestors who roamed the savannah. The formation of our beliefs is fraught with superstitions – even today (I might say, especially today). Just as one day some primitive tribesman scratched his nose, saw rain falling, and developed an elaborate method of scratching his nose to bring on the much-needed rain, we link economic prosperity to some rate cut by the Federal Reserve Board, or the success of a company with the appointment of the new president ‘at the helm’.”

    In general, most effects observed in climate change appear to be highly chaotic and probably derive from a multitude of potential causes. The major challenge for climatologists is to ferret out which causes are most significant, and derive mathematical relationships between the putative causes and the observed effects. Implicit in this process is the belief that the system is deterministic and is not obscured by chaotic factors. However, there is typically no proof that the climate systems are deterministic. Even if they are deterministic, they may still be determined by so many conflicting contributing causes that attribution of the role of each putative cause is always very difficult, if not impossible. Most climatological analyses end up with a scatter plot in which the X-Y space is mostly filled with data points, and only a climatologist could believe that a valid trend could be extracted from this mess.
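
    The 1-in-32 coin-toss figure above is easy to check by simulation (the seed and trial count here are arbitrary choices of mine):

```python
import random

random.seed(7)
trials = 100_000
five_heads = sum(
    all(random.random() < 0.5 for _ in range(5))   # one 5-toss experiment
    for _ in range(trials)
)
frequency = five_heads / trials   # should be near 1/32 = 0.03125
```

    Roughly 3% of 5-toss runs come up all heads, so in any large crowd of coin-tossers someone will see the “pattern”, and whatever they happened to be doing at the time, standing on one foot included, makes a tempting spurious cause.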

  22. I agree climate science has many attributes of religious faith, including the climate-science apologetics (used in the religious sense of the word) of both sides to defend “the faith” against objections when there is too little understanding to be certain (like the many, many posters here and elsewhere) or even 75% sure (like the IPCC). We don’t know what the effect of increasing CO2 emissions will be. It doesn’t appear to be catastrophic, but the evidence may change. We probably won’t understand attribution and sensitivity for a long time, 15 years or more, as Curry and others suggest. Climate science apologetics is an example of arrogance. We don’t know. We can’t know now or any time soon. Get over it!

  23. At a national meeting, with a broad constituency, the morning plenary sessions were the largest gathering, and the speaker, given a year to prepare and encouraged and monitored stepwise in doing so, summarized the “State of the Art” of a particular area of the science. In three days there were 3 plenary sessions, each designed by a functioning committee to bring together the latest science and present the areas that continue to need work. The presentation was a framework on which to hang the rest of the much smaller group and individual presentations. There were side groups presenting the pros and cons of an issue; deliberately confrontational, and giving individuals in the group an opportunity to provide anecdotal or off-topic observations. Free-wheeling, I might say. The role of the chair was to prevent ad hominems and name-calling. No hockey fights; the gloves always stayed on. Communication is the issue; whether at the tower of Babel, where we can still listen respectfully, or at a social gathering of like-minded groupies. There are always agendas and advocates, yet the secret to getting along is that all participants have an eye on the goal, in this case understanding climate science. “My way is best” has no meaning in the scheme of things. The goal is always addressed at each plenary session. Focus, focus, focus. Then everybody goes home more prepared to understand their own role in the collective work in progress. A pretty good system when we want to find solutions.

  24. Chief Hydrologist

    I am again going to make an appeal to authority on models – although the fallacy typically relates to a biased authority. James C. McWilliams, on the other hand, is currently a professor at the UCLA Institute of Geophysics and Planetary Physics and Department of Atmospheric and Oceanic Sciences. The last time I raised this someone pointed out that McWilliams was not notably a skeptic – indeed notably not a skeptic. Although how this has any relevance to his insider comments on AOS models I don’t know.

    ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but non-unique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

    A model can be tuned to ‘several empirical measures’ – but the non-unique choices available introduce an element of instability to the models. A choice of slightly different parameters (within the bounds of data or process uncertainty) creates the potential for a large variation in the solution. It is similar to the idea of sensitive dependence on initial conditions – a small change in initial conditions creates a non-linear change in output. This is indisputably a property of the Navier-Stokes partial differential equations of fluid motion that are at the core of all climate models. To estimate the intrinsic variability of climate models within the bounds of feasible parameter values, the models should be run multiple times with slightly varying parameters when forecasting climate. The problem with the latter is the availability of computing power, such that there is no estimate of intrinsic variability. One can say the model mimics reality – but it is a misleading claim because this is only one of a large number of solutions – and radically different solutions – possible within the bounds of uncertainty in climate data and processes.
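
    Sensitive dependence on initial conditions is cheap to demonstrate with the classic Lorenz-63 system (my own sketch; the forward-Euler integrator, step size, and the 1e-9 perturbation are arbitrary illustrative choices, and this toy is of course not a climate model):

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

def run(state, n_steps):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])    # perturb one coordinate by a billionth

early = np.linalg.norm(run(a, 200) - run(b, 200))    # t = 1: still tiny
late = np.linalg.norm(run(a, 6000) - run(b, 6000))   # t = 30: order-one
```

    A perturbation far below any measurable precision grows to the size of the attractor itself, which is the point about single deterministic runs: without an ensemble over initial conditions and parameters there is no estimate of the spread of possible solutions.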

    ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ Op.cit.

    The problem of plausible formulation is that it requires at least a 3 orders of magnitude increase in computing power. ‘A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology.’

    ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ A UNIFIED MODELING APPROACH TO CLIMATE SYSTEM PREDICTION
    by James Hurrell, Gerald A. Meehl, David Bader, Thomas L. Delworth, Ben Kirtman, and Bruce Wielicki

    The problem transforms to a chaotic initial value problem – exploring links and mechanisms – as it always was. The application of models to future states is mere misdirection, akin to a magic trick, used by some who understand this to dupe gullible and numerically naive followers.

    The other plausibility criterion of James McWilliams is even funnier. It involves a posteriori solution behavior. So how do we interpret this? It seems that the model is plausible if it conforms to expectations about the quantum of the solutions. So if the model calculates the temperature increase for a doubling of CO2 as 3 degrees C, the answer is deemed plausible and the graph shot off to the IPCC. What we are not told is the range of model variability within the bounds of feasible parameters – because they don’t know. This could be anywhere from minus 10 to plus 11 for all I know – the models themselves are non-linear complex systems.

    ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientists’ expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ Op.cit.

    Beth claims that she is descended from a long line of Scottish engineers. I am descended from a long line of poor white trash. My ancestors actually were petty English criminals and prostitutes. But Scottish enlightenment dissidents introduced a dominant strand in the cultural development of the early colony where science and enlightenment values went hand in hand.

    I mention this only in that it is the enlightenment values of individual freedom, free markets, democracy and the rule of law that are under attack from within, using a misrepresentation – or misunderstanding – of science as a stalking horse.

    I will leave my too-long post with the following quote from another of my favorite papers – The Wrong Trousers, by Gwyn Prins and Steve Rayner.

    ‘Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values. Nor is that to be seen as a defect. Of course choices between competing values are not made by relying upon scientific knowledge alone. What is wrong is to pretend that they are.’

    Robert I Ellison
    Chief Hydrologist

    • Robert
      Well put on climate model variability. See Fred Singer on Climate Uncertainty SEPP.org

      James Murphy [Nature 2004] lists some 100 or more parameters that must be chosen, using the modelers’ “best judgment.” Varying just six of these parameters related to clouds can change the climate sensitivity from 1.5 up to 11.5 degC [Stainforth et al 2005]. . . .
      3) Chaotic Uncertainty. It is well understood that climate is a chaotic object and climate models reflect that property. . . .For example, the five runs of a Japanese MRI model show temperature trends that differ by almost a factor of 10, an order of magnitude. (If more runs had been performed, the spread would have been even greater.) One can show [Singer and Monckton 2011] that taking the mean of an ensemble of more than 10 runs leads to an asymptotic value for the trend. . . .For example, of the 22 models in the IPCC compilation of “20 CEN” [an IPCC term for a group of climate models] there are 5 single run models, 5 two-run models, and only 7 models with four or more runs.
      Conclusion: Clearly, models cannot be used to predict future global temperatures reliably. . . .

      Overcoming Chaotic Behavior of Climate Models S. Fred Singer 2011 SEPP

      Here we conduct a synthetic experiment, and use two distinct procedures to demonstrate that no fewer than about 20 runs (of 20-yr length of an IPCC General-Circulation Model) are needed to place useful constraints upon chaos-induced model uncertainties.

      Part of the “arrogance” is confidently showing one or a few runs without discussing the full range of the uncertainty and the data fitting involved.
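      Singer’s point about run counts can be illustrated with a toy simulation (the numbers below are hypothetical, not his actual analysis): treat each model “run” as producing a 20-yr trend equal to a shared forced trend plus independent chaotic noise; the spread of the ensemble-mean trend then shrinks like 1/√N, so one or two runs say little, while ~20 runs give a usable constraint.

```python
import math
import random

# Toy illustration (hypothetical numbers, not Singer's actual analysis):
# each model "run" yields a 20-yr temperature trend equal to a shared
# forced trend plus independent chaotic noise. The ensemble-mean trend
# converges toward the forced trend like 1/sqrt(N).
random.seed(0)

FORCED_TREND = 0.02  # deg C per yr, assumed forced signal
CHAOS_SD = 0.02      # deg C per yr, assumed run-to-run chaotic spread

def ensemble_mean_trend(n_runs):
    """Mean trend across n_runs independent chaotic realizations."""
    runs = [FORCED_TREND + random.gauss(0.0, CHAOS_SD) for _ in range(n_runs)]
    return sum(runs) / n_runs

# Standard error of the ensemble-mean trend for various ensemble sizes:
spread = {n: CHAOS_SD / math.sqrt(n) for n in (1, 5, 10, 20)}
```

      With these assumed values the single-run spread (0.02 degC/yr) is as large as the forced trend itself, while 20 runs cut the spread to under a quarter of that.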

      • Chief Hydrologist

        Hi David,

        It seems Singer and McWilliams agree. It is an interesting comment on the number of model runs.

        Cheers

    • Chief this post is an excellent overview of the state of climate models and I found it most interesting, even though the whole post seems OT. The Gleick admission is also OT so . . . the hell with it! It’s been an interesting day :)

      • Chief Hydrologist

        It is about the methods of prediction – so I believe on topic – but as you say – what the hell. Or projection as some would say – although with what distinction I don’t know. It is an impossible to make realistically with the method of dynamically complex models.

      • Chief Hydrologist

        That doesn’t make sense even to me – I think I will just stop.

      • :)

  25. Peter Gleick could be described as arrogant! :)

      • Gleick writes:

        “I only note that the scientific understanding of the reality and risks of climate change is strong, compelling, and increasingly disturbing, and a rational public debate is desperately needed. My judgment was blinded by my frustration with the ongoing efforts — often anonymous, well-funded, and coordinated — to attack climate science and scientists and prevent this debate, and by the lack of transparency of the organizations involved.”

        I agree “rational public debate is desperately needed.” I question his confidence (arrogance?) on “the scientific understanding” and believe that the uncertainties involved have been seriously understated, as shown by Koutsoyiannis et al. (see above).
        Revkin concludes:

        Gleick’s use of deception in pursuit of his cause after years of calling out climate deception has destroyed his credibility and harmed others.

      • David,
        This partial confession by Gleick likely only punctuates a corruption that started years ago. And remember this: He is a major AGW promoter. He has made a good living promoting AGW. He *knows* the theory of climate that he believes in is the right one, and we who dare to disagree are either genetically deficient (as his pal Mooney claims) or corrupt ( as he and so many other believers claim). There is no doubt in his mind he is right and we are wicked for not agreeing. Even now he is being praised by believers over at Dot Earth.

      • Well put, hunter. I’m as guilty as Gleick in that regard. ;)

    • Well, not now

  26. Although the climate has always been in perpetual change …

    Evidence=> http://bit.ly/ABvBAT

    Now they want the 1960-1991 period to be the reference average global mean temperature of the millennium.

  27. What is most important, however, is that this way of reasoning is rooted in the fallacy that climate can, in principle, be described in deterministic terms; that if we could analyze the system with sufficient granularity and make sufficient measurements then we would be able to produce sufficiently good predictions; and that there must necessarily exist an identifiable causal agent behind every trend or shift.

    This statement was supported by the IPCC in 2001:

    The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.

    http://bit.ly/z02WKM

  28. “So when we design a structure, such as a dam, and we try to predict the design flood, then it’s not a good idea to use the notion of the “maximum probable precipitation”, because there is no such thing, and because it can be (and has been) exceeded; it’s also not a good idea to consider a “signal” ”

    It seems that you can estimate an upper limit on how warm Earth could get – assuming the Sun’s output doesn’t change by a large amount [not actually predictable – other than that, from history, it seems constant].
    So with the Sun’s output unchanged, you never get anything approaching Venus happening on Earth. And even if the Sun’s output increased so that we were getting the same irradiance as at Venus’s distance, it would still take centuries to warm up. And we could do fairly easy things to prevent even this type of dramatic warming – a solar shade, at a cost of a couple of trillion dollars.

    The swing to cooling has more dangerous variables – more sudden-disaster potential: within a year or ten years one could have global crop failure. Not likely, but similar to “maximum probable precipitation”.
    In short, the cold is the killer; warmth can’t be a serious issue.
    If you could grow orange trees in Oregon, it’s not a big deal – though even that is not going to happen in the next 100 years, or ever.

    But a return to something like the Little Ice Age climate is within the realm of the possible – and possibly millions of people could die if we didn’t respond adequately to such a change, which could arrive within years to decades.
    Responding adequately might simply mean a policy of storing food – quite easy, but currently it’s not seen as even vaguely necessary to have a year or more of emergency food supply for 7 billion people.

    • Chief Hydrologist

      You are absolutely right.

    • You have raised some interesting issues in relation to the art of survival. It is true that westernised humans have generally lost their connection to the land and that the raising of crops and livestock has been delegated to the farmers (who have received very little recognition or fair economic reward for their output).

      The flooding of New Orleans showed in graphic detail just how helpless people become when services are cut. It seems to me that these people will not survive any prolonged natural disaster unless they receive copious amounts of external aid.

      On the other hand, the populations of the subsistence economies of Asia, Africa and the Americas seem more likely to survive any prolonged natural disaster that could occur from earthquake, volcanic eruption, tsunami or weather.

      What is my point? Simple. The more civilised people become, the more hubris they get, the less they support each other and individually the more vulnerable they become. The subsistence populations may be less civilised but they have less hubris and are far more supportive of each other and together are far less vulnerable.

  29. And yet, summer is warmer than winter.

    • Willis Eschenbach

      Chris Colose | February 21, 2012 at 1:30 am

      And yet, summer is warmer than winter.

      Gosh, inane meaningless statements. Can I play too?

      And yet, the tropics are warmer than the poles.

      Your turn, Chris …

      w.

    • Chris Colose

      Only locally, Chris – not globally (just like “day is warmer than night”).

      Max

    • Chief Hydrologist

      I suppose the point is that this is predictable in advance? But can we predict the summer temperature 10, 20, 30 or 40 years in advance in both a shifting climate and with chaotic tools? It is impossibly idiotic to think that it can be done.

    • Chris,

      You should read Dr Koutsoyiannis’ work on the credibility of climate predictions. You will note that the models are reported to have some limited skill at the monthly time scales – that is, he confirms models do predict seasonal changes.

      He also notes that predictive skill falls off precipitously and has no skill whatsoever at the one-year and thirty-year timescales.

      So your comment is addressed by the corpus of work produced by Itia. Yes, we have sufficient knowledge to predict seasonal variability. That is where our knowledge, at present, ends.

      • He also notes that predictive skill falls off precipitously and has no skill whatsoever at the one-year and thirty-year timescales.

        Certainly some people are clueless when it comes to prediction. What does this obvious fact prove about predictability of climate?

    • Chris,
      Sadly your attempt at sarcasm instead sort of sums up your profundity rather well.

  30. God does not err, and therefore he does not distinguish between signal and noise.

    Noise: a warming of 0.46 deg C from 1992 to 1998 in 6 years => http://bit.ly/xL01uw

    Signal : a warming of only 0.24 deg C from 1944 to 2011 in 67 years => http://bit.ly/yWnViq

  31. Vanity is the gateway sin. It appears in the Bible long before the Tower of Babel. When the serpent cons Eve into eating the fruit of the forbidden tree in Genesis, he appeals to her vanity. “For God doth know that in the day ye eat thereof, then your eyes shall be opened, and ye shall be as gods, knowing good and evil.”

    “Ye shall be as gods,” sounds just like how some of our more messianic CAGW advocates see themselves.

    • “Ye shall be as gods,” sounds just like how some of our more messianic CAGW advocates see themselves.

      Comparing the confidence levels expressed in this blog with that in scientific journals, I’d say it was the climate skeptics here that regard themselves as having godlike infallibility. One glance at the journals shows that the difference is night and day!

  32. Cross-posted from JunkScience comments:

    “The rise of “Climate Science” is a stark warning to the modern world that in no wise have we “developed” beyond susceptibility to massive deception of the public and its chosen politicians by those able to commandeer a sufficiently robust selection of gatekeep-able information channels.

    The positive feedback of mutual financial, status-awarding, and authority-awarding actions by bureaucracies, politicians, scientists, media, and activists has powerfully deflected the economies and policies of nations worldwide. The thoroughness with which supposedly clear and widely acknowledged standards of scientific research and analysis have been pushed aside is at least the equivalent of any historical precedent.

    Complacency can cost you everything.”

  33. Personally, I am in awe of the complexity of the climate system and don’t want to anger the gods with the arrogance of claiming to understand climate change.

    Thanks JC for that.

  34. Judith Curry

    An excellent essay.

    It addresses a point that Einstein is reported to have made:

    the only thing more dangerous than ignorance is arrogance

    IMO IPCC AR4 WG1 SPM is the epitome of both.

    Max

  35. Chief@9.35.
    Thanx for your posting on the problems of models. I liked your metaphor ‘misunderstanding of science as a stalking horse.’
    You’ve quoted the paper by Gwyn Prins and Steven Rayner before. Must read it after I finish reading ‘Open Society and its Enemies.’
    Slight correction, I don’t come from a long line of engineers, just a few.
    No convicts though! (joke.)

  36. This argument is circular, since the models reproduce the hypotheses of their programmers.

    According to this reasoning, all models of all phenomena are worthless because they merely reproduce the hypotheses of those who created those models. Therefore no one should waste their time modeling anything.

    I’m having difficulty following the logic of this reasoning. Could someone please clarify it so as to make it slightly more convincing?

    • Vaughan Pratt.

      Circular logic in action:

      GISS models (with appropriate input assumptions) show that it should have warmed by 1.3 degC since 1880 due to increased forcing from GHGs

      Actual records show that it has only warmed by 0.7 degC (and a part of this was caused by natural forcing components, i.e. the sun)

      So are the model assumptions that led to the 1.3 degC model estimates wrong?

      No, of course not. The thermometers are wrong.

      Wait a minute! They’re not wrong but half of the warming is still “hidden in the pipeline” waiting to reach “climate equilibrium”.

      Is that “circular” enough for you, Vaughan?

      Max

      • Is that “circular” enough for you, Vaughan?

        If by “that” you mean the words you’re putting in other people’s mouths, Max, then I’d say it wasn’t so much circular as rubbish.

        No one but you is claiming 1.3 C warming since 1880, on any basis. You have to be joking.

      • Vaughan Pratt

        You write:

        If by “that” you mean the words you’re putting in other people’s mouths, Max, then I’d say it wasn’t so much circular as rubbish.
        No one but you is claiming 1.3 C warming since 1880, on any basis. You have to be joking.

        You are dead wrong (because you failed to check out the relevant literature). The “words” came not from me, but from Hansen et al.

        In the now famous Hansen et al. “hidden in the pipeline” paper, the authors state:
        http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf

        The observed 1880 to 2003 global warming is 0.6° to 0.7°C (11, 22), which is the full response to nearly 1 W/m2 of forcing. Of the 1.8 W/m2 forcing, 0.85 W/m2 remains, i.e., additional global warming of 0.85 x 0.67 ~ 0.6°C is ‘‘in the pipeline’’ and will occur in the future even if atmospheric composition and other climate forcings remain fixed at today’s values.

        Let me do the arithmetic for you: 0.6° to 0.7°C observed plus 0.6°C ‘‘in the pipeline’’ = 1.2° to 1.3°C total.
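        The arithmetic in the quote can be checked in a few lines (the inputs are taken directly from the quoted passage; only the rounding is mine):

```python
# Checking the arithmetic in the Hansen et al. quote above; inputs are
# taken from the quoted passage, and only the rounding step is mine.
unrealized_forcing = 0.85  # W/m^2 remaining "in the pipeline"
sensitivity = 0.67         # deg C per W/m^2, as used in the quote

pipeline_warming = unrealized_forcing * sensitivity  # ~0.57, quoted as ~0.6
observed_low, observed_high = 0.6, 0.7               # observed 1880-2003

total_low = observed_low + round(pipeline_warming, 1)    # 1.2 deg C
total_high = observed_high + round(pipeline_warming, 1)  # 1.3 deg C
```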

        The authors start with the assumption (based on their model simulations) that the warming should theoretically have been 1.3°C. Yet the thermometers only showed half this amount.

        But, instead of correcting the hypothesis and model inputs to agree with the physical observations, the authors use “circular logic” to conclude that the “missing heat” is “hidden in the pipeline”.

        Tip: Before you accuse someone of “putting something in someone’s mouth” and posting “rubbish”, check out the sources – otherwise you end up looking silly.

        Max

      • The “words” came not from me, but from Hansen et al.

        Excuse me, Max, but the words in question came from you. You correctly quoted Hansen, but that quote does not say anything like to expect 1.3 °C warming between 1880 and now.

        1. It correctly says that the temperature increased between 0.6 °C and 0.7 °C between 1880 and 2003. (The HADCRUT3VGL anomalies for those two years were respectively −0.247 °C and 0.467 °C, an increase of 0.714 °C.)

        2. It estimates 0.6 °C in the pipeline, which sounds about right. However you’re the one claiming the pipeline should have emptied out in the 8 years between 2003 and 2011, not Hansen. Earth’s climate system has considerable thermal inertia, and given the high thermal capacity of the ocean and the slow rate that heat moves around in it, it could be decades before the whole of that amount comes out.

        Tip: Before you accuse someone of “putting something in someone’s mouth” and posting “rubbish”, check out the sources – otherwise you end up looking silly.

        Thank you for that tip. Allow me to return the favor. Before lifting stuff out of context from an article, try to get an idea of what the article is about. The first sentence of the article is “Earth’s climate system has considerable thermal inertia.” You seem to think that an article is not something with a point but simply a source of quotes that you can twist to any meaning you want and then put them in the mouths of climate scientists.

        Had you complained that the article downplayed possible large natural contributions to climate change (it mentions ENSO but not AMO), you would have had me completely on your side. Unfortunately you missed that opportunity and instead chose to distort what the article says.

      • Vaughan Pratt

        I have to assume that you are not intellectually challenged or thickheaded, so let me correct what you just wrote and go through it step by step.

        1. Hansen et al. concluded from their models that it should have warmed by 1.2 to 1.3 deg.C from 1880 to 2003.

        2. Yet the thermometers out there (even those next to AC exhausts or airport runways) showed it had actually warmed by only 0.6 to 0.7 degC, or around half of the theoretical amount of the model estimates.

        3. Instead of correcting the assumptions that went into the model estimates to make them agree with the actual observed results, Hansen rationalized the missing heat with the “hidden in the pipeline” postulation.

        4. This is an example of “circular logic”.

        It’s really no more complicated than that, Vaughan.

        Max

      • It’s not circular logic because Hansen had that heat-in-the-pipeline concept incorporated in his model by the early 1980’s according to the research articles of his that I have read.
        http://pubs.giss.nasa.gov/docs/1985/1985_Hansen_etal.pdf

      • Vaughan Pratt

        It IS “circular logic”, no matter how much you try to rationalize it.

        You are actually confirming this.

        You are simply telling me about the “timing” of the various Hansen model assumptions.

        The fact that Hansen’s two model assumptions agree is no big deal.

        The fact that they are not corroborated by the physical observations IS a big deal.

        Moral of the story: when the actual physical observations do not confirm your hypothesis, REVISE YOUR HYPOTHESIS (to agree with the actual physical observations).

        Max

        PS But we have beaten this to death and I doubt that you are going to bring anything new into the discussion, so let’s let it lie where it is and let’s let others here make up their own minds.

      • Vaughan Pratt

        Just realized that the latest post was from WHT, not you.

        Message in response still stands.

        Max

    • Chief Hydrologist

      Let’s see if I can explain it to you Vaughan old buddy. You see the argument goes that recent temperature rises can’t be tuned in the models without greenhouse gases. Ya turn off the greenhouse gas module and – lo – a different answer. It’s like a magic trick. I saw David Attenborough do it on TV once – absolute proof as he said. We – darn it – I’m convinced. Ya turn it on – ya turn it off. See – utterly convincing.

      How’s that becoming a better you working out? I’m there for you Vaughan – buddy.

    • I’m having difficulty following the logic of this reasoning. Could someone please clarify it so as to make it slightly more convincing?

      In the case of the hydrodynamic core, the solutions are not known and are open (infinite), e.g. Gallavotti. As the experiments are essentially on the program generating them, and not on the idealized Navier–Stokes fluid, the constraint is well known, i.e. experiments do not vote.

  37. In the real world, God makes no warranties on what will happen in the long term.

    Einstein argued that “God does not play dice.” His colleagues answered this with, “Albert, who are you to tell God what he can and can’t do?”

    Christofides and Koutsoyiannis complain about arrogance, and then have the arrogance to claim a hotline to God. Beware of Greeks bearing messages from God.

    • Chief Hydrologist

      I thought God played Hamiltonian billiards?

      • Where they said “God makes no warranties.” If they didn’t get this directly from God, where did they get it from? And can we trust 2nd or 3rd hand information about warranties issued by God? Would you?

      • Chief Hydrologist

        A man said to the universe:
        “Sir I exist!”
        “However,” replied the universe,
        “The fact has not created in me
        A sense of obligation.”

    • Vaughan Pratt

      Can you show me where Christofides and Koutsoyiannis “claim a hotline to God”.

      I missed that in their essay.

      Max

      • Where they said “God makes no warranties,” Max, see above. (Getting hard to hit the right Reply button these days.)

        In answer to CH, I thought your gods played the field of cowgirls. ;)

      • Markus Fitzhenry.

        Cowgirls in Oz don’t get played by no one, especially God.

      • Chief Hydrologist

        I was leaving that one alone ‘less I be seen to countenance the impugning of the honour of my true love – and I should then need to demand satisfaction on the field of honour. The problem there is that Vaughan doesn’t have any.

        But Markus is correct about Australian cowgirls.

      • Chief

        Well, now – we got them cowgirls right here in Switzerland, ‘n they c’n sing too – they c’n even yodel

        Max

  38. Read the whole thing, it is short, clever, and profound.

    With such a strong recommendation I was hesitant at first to say this presentation was short, stupid, and brutish, but after reading more of it I decided he who hesitates is lost, or at least terribly confused, when it comes to this level of utter stupidity.

    It’s not the content I object to. I can understand their claim that AGW is an utter crock, what else is new? We get this all the time here. What I can’t accept is a disrespect for logic midway between the Goon Show and Monty Python. They don’t even qualify as morons, they’re unqualified morons. Sheesh.

    • Chief Hydrologist

      Midway between the Goon Show and Monty Python and infinitely less funny? Vaughan – that’s your niche in life and no one can take it from you.

  39. Chief Hydrologist

    hah-hah no I’m not you are? Is that the best you can do Vaughan old buddy. I may as well phone it in.

    http://www.insults.net/html/odd/random.html

    Just check the appropriate boxes. In your case that would be – intense abuse, male, democrat, smart, underweight and ugly.

  40. Trenberth et al. tell us that the managements of major national academies of science have said that “the science is clear, the world is heating up and humans are primarily responsible.” Apparently every generation of humanity needs to relearn that Mother Nature tells us what the science is, not authoritarian academy bureaucrats or computer models.

    http://bit.ly/yEXfY9

    • Girma,
      Science is so blatantly man-made and many factors were NEVER considered….so I had to create my own world with all the evidence as factors.

  41. JC

    Thanks for an interesting article by Christofides and Koutsoyiannis.

    For those who want to check their graph on chaos, I have done it here => http://bit.ly/wMwUzi

  42. …progress is followed by arrogance, and arrogance is
    followed by loss of communication, which leads to
    stagnation, which is, we think, where science is now.

    We need to give credit to the American Physical Society for disassociating from “AGW is incontrovertible” claim.

    http://on.wsj.com/yN7Cvm

    That claim would have been really arrogant!

    • Girma, the APS statement is unequivocal. It notes that “global warming is occurring.”

      Except for the “Girma, ” I tacked on at the beginning, those are not my words but those of the APS President in his letter you just linked to in the Wall Street Journal.

      • Vaughan Pratt

        How could anyone disagree with the statement “global warming is occurring”?

        Just look at the thermometers out there (even the ones that are not next to AC exhausts or airport runways).

        It has warmed at a rate of 0.04 to 0.05 degC per decade since 1850 on average, with multidecadal cycles of warming and slight cooling (see Girma’s analysis).

        We just finished a ~30-year late 20th century warming cycle, which was statistically indistinguishable from a 30-year early 20th century warming cycle (before there was much human CO2). In between we had a 30-year cycle of slight cooling, despite increased CO2 emissions starting in the post-WWII boom. For the past 11 years it appears that the rapid warming of the late 20th century has stopped and it has cooled slightly, despite CO2 concentrations rising to new record levels.

        So, yes – there is no doubt that seen over the past century and a half “global warming is occurring”.

        What we DO NOT know, however is WHY this is so.

        Max
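        Max’s per-decade figure above roughly checks out against the ~0.7 degC total rise cited elsewhere in this thread (an assumed round number, used here only for the arithmetic):

```python
# Rough check of the "0.04 to 0.05 degC per decade" figure, using the
# ~0.7 degC total rise already cited in this thread (assumed values).
total_warming = 0.7             # deg C since 1850, approximate
decades = (2011 - 1850) / 10.0  # 16.1 decades
rate_per_decade = total_warming / decades  # ~0.043 deg C per decade
```

        0.7 degC spread over 16.1 decades comes out near 0.043 degC per decade, inside the quoted 0.04–0.05 range.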

      • “So, yes – there is no doubt that seen over the past century and a half “global warming is occurring”.

        What we DO NOT know, however is WHY this is so.”

        Because we are in an interglacial period. And instead of glaciers retreating steadily through the interglacial, as one might loosely expect, we had an unusual period in which glaciers significantly advanced – the period known as the Little Ice Age.
        Rather, it seems the question of why glaciers suddenly and significantly started to advance during an interglacial is more of an unknown than why we returned to a 10,000-year warming trend. The cooling of the Little Ice Age seems to be related to the Sun [a very low number of sunspots, sustained for centuries]. Was it just solar activity?
        If it was solely due to the Sun and the number of sunspots, then we are now in a period of low solar activity, and many experts predict such low activity could extend for a decade or more. If sunspot activity is the main or only reason for cooling, we could be facing a possible drop of, say, 1 C over the next few decades – assuming low solar activity continues.
        Or is the solar activity being amplified by some other effects, where you must have one or two other factors combining to get significant and prolonged cooling?

      • What we DO NOT know, however is WHY this is so.

        “We” presumably being your cohort, right, Max?

      • Markus Fitzhenry

        Don’t be stupid Vaughan. If your cohort knew the science, AGW wouldn’t be under such venomous attack.

        Consider it a forced scientific process.

      • If your cohort knew the science, AGW wouldn’t be under such venomous attack.

        That’s a fair point, MF. It’s pretty much how the patients in a mental hospital view the doctors. Clearly one should cut the patients some slack there, maybe the doctors really are the ignorant evil demons the patients think of them as.

  43. “I think that the meaning of this can be felt in large conferences, where we are thousands of scientists in hundreds of sessions, each one of us working in his own isolated domain, with hardly any knowledge of nearby domains, let alone of the big picture”

    Surely this is an admission that he is a failure as a scientist, and that he is projecting this onto all others?
    Just because the author can’t be bothered trying to follow developments in other fields and fails to observe the ‘big picture’ does not mean that this is universal.

    • Your inference is the very example of hubris that the author is talking about. Surely you must agree that being all-knowing is part and parcel of science. It is proverbial that we all stand on the shoulders of giants, and science cannot proceed except by assuming the honor and integrity of those whose work we must rely on before we can take the next step forward. That is what is so aggravating about all of the pathetic, fearmongering liars in academia who have no conscience when it comes to pushing their climate porn onto the children.

  44. ‘Surely you must agree that being all-knowing is part and parcel of science.’

    No, being all-questioning and knowing how to design external/internal positive/negative controls is part and parcel of science.

    • “If we knew what it was we were doing, it would not be called research, would it?” ~Albert Einstein.

      • “If we knew what it was we were doing, it would not be called research, would it?” ~Albert Einstein.

        By Einstein’s reasoning, research is what you’re doing when you don’t know what you’re doing.

        So if you know you are doing research, you don’t know you are doing research.

        Einstein was merely a genius, he wasn’t omniscient, and he was clueless when it came to logic. In his later life he had long walks with Gödel at Princeton, another genius in his youth who was 27 years Einstein’s junior and the century’s Einstein in the field of logic, but who ripened into an even bigger fruitcake than Einstein in his dotage. Much like Newton and van Gogh.

        The contributors to this blog can pat themselves on the back for following much the same trajectory as Newton, van Gogh, Einstein, and Gödel. With great progenitors comes great honor.

  45. David Springer

    Climate science has not built any great towers. Just a house of cards. No act of God needed to bring that down. The first cool breeze will git ’er done.

    Other sciences are doing just fine along with engineering and technology in general.

    It is my fervent wish that the stink of climate change dogma and the narrative produced by the GCM ensemble doesn’t come to permeate the productive sciences which did nothing to warrant any such taint.

    • No towers? The AGW True Believers have an oversized portrait of Mao that they’d love to hang from their Tower of Babel.

  46. Talk about appeal to authority. This person is invoking how he thinks God thinks. Beyond that, no substance. Forcing can’t lead to warming because that would be too easy to predict, essentially. I don’t follow this line of logic.

    • Chief Hydrologist

      On the other hand – it wouldn’t be an appeal to authority fallacy because God is the ultimate unbiased expert. I am sure that I did a long post on the tools of prediction – https://judithcurry.com/2012/02/20/god-and-the-arrogant-species/#comments – so really that’s out of the question.

      You deny that there is any natural variability at all – slightly out there – but that’s really easy to predict. Nil effect. That just leaves greenhouse gases. Easy peasy. I can do it on the back of an envelope. Now that we have scaled the heights of climate science – it’s all sorted hey Jim.

    • I don’t deny that natural variability, if you include solar effects, has possibly hindered the recent warming, but I do deny that it amounts to more than 0.2 degrees except in unusual and very noticeable circumstances like major volcanic eruptions and measurable solar dimming beyond the sunspot cycle range. I could also consider solar effects and volcanoes as forcing, leaving non-forcing as internal variations (ocean redistributions) which are even more certainly less than 0.2 degrees when averaged over a decade.

      • Chief Hydrologist

        But I thought you were saying that the warming trend continued? Make up your mind. You of course miss the major influence on recent energy changes – cloud changes associated with ocean variability. You know exactly what’s happening, don’t you. I just wish for a consistent narrative.

      • It continues if you look at the BEST land temperature because land is more responsive to forcing changes, so it shows what the ocean is going to do in the future.

      • Chief Hydrologist

        Wow – now that’s a rationalisation.

      • You have to think of a reason for the land warming at twice the rate of the ocean for the last three decades. I have yet to see a skeptic put forward any idea about this data, except perhaps not to believe it.

      • Chief Hydrologist

        The ocean did warm in the CERES and ARGO period – up to 2011 at least. It was mostly a result of reflected SW changes, which overwhelmed the TSI changes.

        Land is more responsive to forcing than oceans? I don’t particularly care to even think about this nonsense. You have to look at energy at the TOA, not make up stuff.

      • Land is more responsive to forcing than oceans? I don’t particularly care to even think about this nonsense.

        Moreover you refer to those who do think about it as idiots, CH. This creates an even sharper line between those who take your side instead of the other side. As one who hates fuzzy lines I strongly support you, CH. Keep up the good (?) work.

      • Markus Fitzhenry

        Chiefhydro. Vaughan seems to think that the land has more heat capacity than the oceans. Or, maybe he’s never been in a boat.

        I’m not going to tell him what has the higher temp at the surface.

      • Markus, you scored an own goal. It was CH that disagreed with my statement that effectively the ocean has a higher heat capacity. Take it up with him.

      • Just to clarify, the fact that convection acts far more strongly in the ocean than on land is an even bigger factor than their respective heat capacities. The difference can be modeled with a resistor R in series with a capacitor C. C is roughly comparable between land and sea (maybe slightly higher for sea), but R is *much* smaller for sea.

        Those with access to SPICE or its equivalent can then experiment with what happens as R and C are varied. Jane Goodall has kindly volunteered to assist those without.
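For readers without SPICE, the same RC experiment can be sketched in a few lines of Python. This is a toy illustration with made-up parameter values, not a climate model: a forcing F drives C·dT/dt = F − T/R, so each surface relaxes toward R·F with time constant τ = RC.

```python
def rc_response(R, C, forcing, dt=0.01, steps=5000):
    """Forward-Euler integration of C*dT/dt = F(t) - T/R, with T(0) = 0."""
    T, out = 0.0, []
    for k in range(steps):
        T += dt * (forcing(k * dt) - T / R) / C
        out.append(T)
    return out

step = lambda t: 1.0  # unit step forcing

# Same capacitance, different coupling resistance: "land" (large R) vs "sea" (small R)
land = rc_response(R=5.0, C=1.0, forcing=step)
sea = rc_response(R=0.5, C=1.0, forcing=step)

# Each relaxes toward R*F with time constant tau = R*C, so the low-R "sea"
# responds faster but its surface settles at a smaller steady value.
print(round(land[-1], 2), round(sea[-1], 2))  # → 5.0 0.5
```

Varying R and C here plays the same role as varying the components in the SPICE circuit.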

  47. Judith and all, ευχαριστώ! (“Thank you!” – I use my mother tongue to express my feelings more authentically.)

    Wagathon, you exaggerate my flair for English. I wish I had said all that you quote and attribute to me, but I think some of it is due to others. On the “simple truth about computer climate modelling”, you may wish to see a recent interesting formal discussion in Hydrological Sciences Journal (http://www.tandfonline.com/toc/thsj20/56/7 — the last two items in the contents page).

    manacker, thanks for the excellent quotation of Einstein. There is a similar one by Heraclitus: “Ύβριν χρή σβεννύναι μάλλον ή πυρκαϊήν” (“More than a fire, one needs to extinguish hubris”, Heraclitus, fragment 43)

    Vaughan Pratt, you may rest assured that Antonis Christofides and I do not have “a hotline to God”. Interesting, though, that you interpreted our statement “In the real world, God makes no warranties on what will happen in the long term” as such a hotline. I thought its meaning was trivial–even very common in sermons :-) . So, do you think there are some to whom God makes warranties about what will happen in the long term? Then those some may have the hotline you attribute to us.

    J. Seifer, you are right, we are saying the grapes are sour. Do you know some who have demonstrated these grapes are ripe?

    Fred Moolten, if you suspect you misinterpreted what I am saying (and thanks for this tactfulness), in particular on the issue of stochasticity vs determinism, you may perhaps wish to read my paper “A random walk on water”, where I explain my views (http://itia.ntua.gr/923/ ). Also, your point on description vs mechanism is interesting. Do you think probabilistic/stochastic concepts cannot represent mechanisms? What, then, are the mechanisms in thermodynamics? If you find it too hard to accept my explanation, based on extremal entropy production, of the emergence of Hurst-Kolmogorov dynamics (thanks Spence_UK for pointing this out), you may take an easier thermodynamic example, for instance, the saturation vapour pressure and the Clausius-Clapeyron relationship. What is the mechanism behind it? Isn’t it maximum entropy = maximum uncertainty? (You may see section 4 in my recent paper about this, http://itia.ntua.gr/1184/ .)
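The Clausius-Clapeyron example above can be made concrete in a few lines. With the latent heat L taken constant, integrating d ln e_s/dT = L/(R_v T²) gives e_s(T) = e₀ exp[(L/R_v)(1/T₀ − 1/T)]. A minimal numerical sketch, using standard textbook constants rather than anything from the cited paper:

```python
import math

# Clausius-Clapeyron with constant latent heat:
#   e_s(T) = e0 * exp((L/Rv) * (1/T0 - 1/T))
L_v = 2.5e6             # latent heat of vaporization, J/kg
R_v = 461.5             # specific gas constant of water vapour, J/(kg K)
e0, T0 = 611.0, 273.15  # saturation vapour pressure (Pa) at 0 degC

def e_sat(T):
    """Saturation vapour pressure in Pa at temperature T in kelvin."""
    return e0 * math.exp((L_v / R_v) * (1.0 / T0 - 1.0 / T))

# Near surface temperatures, saturation pressure rises roughly 6-7% per kelvin.
print(round(e_sat(288.15) / e_sat(287.15), 3))
```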

    Joshua, you really amaze me that you wonder if someone is saying that we’ll “burn in hell as the Earth warms up”. The most recent example that came to my mind is this: in a debate here in Greece, my opponent, who represented the IPCC orthodoxy, presented a slide with the question “What will happen” and a single-word answer, “Hell”. (This slide is online at http://www.blod.gr/lectures/Pages/viewlecture.aspx?LectureID=32 , time 61:53/slide 69 — sorry that it is in Greek, but you may trust my translation above.)

    • Demetrious –

      Kalispera! (Good evening!) You live in a beautiful country.

      Thanks for the response.

      I think that your characterization was hyperbolic, and as such, less productive than your contributions might otherwise be. I see hyperbole as inherently counterproductive when discussing issues as highly charged and partisan as the climate debate.

      If I expressed wonder that anyone might say such things, then I misspoke. We can find extremists of all sorts who use hyperbole to make points. My question concerned your presenting the examples you used (breathing as a sin, having children who breathe as a sin, predictions of hellfire) as ubiquitous, the way such messages were in the middle ages when people were all being told that they were sinners.

      I think that carefully qualified language should always be an ideal we strive for when debating the relationship between science, philosophy, culture, politics, etc.

      • Demetris –

        BTW – sorry for misspelling your name (I’m a terrible speller), and thank your ancestors for me for great words like hyperbole!

      • Thanks, Joshua! And no worries: your spelling may be more correct than mine as I have simplified a little bit the original spelling of my name

        Δημήτρης (originally Δημήτριος)

    • for instance, the saturation vapour pressure and the Clausius-Clapeyron relationship. What is the mechanism behind it? Isn’t it maximum entropy = maximum uncertainty?

      Not necessarily as there are many ways for deriving it and most don’t refer to maximum entropy. The different derivations are certainly related as all are based on the Second law, but still they are different.

      That stochasticity may result in effective determinism is nothing new. It has been well known as long as we have had statistical mechanics.

      The laws of thermodynamics can be derived from statistics, but there are always also details that depend on the micro-level dynamics. Hurst-type statistical properties cannot in most cases be derived rigorously. There are derivations that give such results, but they depend partially on assumptions that cannot be justified from first principles. On the other hand, similar results may be obtained from assumptions that differ with respect to some important details. Thus Hurst-type statistical properties can neither be really derived, nor does the approximate validity of those properties tell very much about the actual dynamics.
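As a concrete illustration of the Hurst-type statistics under discussion, here is a minimal aggregated-variance Hurst estimator in Python (a common textbook method; the series length and scales are arbitrary choices, not from any cited work). For an uncorrelated process, the variance of scale-m block means falls like 1/m, giving H ≈ 0.5; a persistent Hurst-Kolmogorov process would show a slower decay and H clearly above 0.5.

```python
import math
import random

random.seed(0)

def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance Hurst estimate: Var of scale-m block means ~ m^(2H-2)."""
    pts = []
    for m in scales:
        means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((u - mu) ** 2 for u in means) / (len(means) - 1)
        pts.append((math.log(m), math.log(var)))
    # least-squares slope of log Var against log m; H = 1 + slope/2
    n = len(pts)
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    slope = sum((px - mx) * (py - my) for px, py in pts) / sum((px - mx) ** 2 for px, _ in pts)
    return 1.0 + slope / 2.0

# White noise has no persistence, so the estimate should land close to H = 0.5.
white = [random.gauss(0.0, 1.0) for _ in range(50000)]
print(f"H = {hurst_aggvar(white):.2f}")
```

This also illustrates the point that an estimated H says little by itself about the generating dynamics: very different processes can produce similar aggregated-variance slopes.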

      • Pekka Pirilä, you said:

        “Not necessarily as there are many ways for deriving it and most don’t refer to maximum entropy.”

        I doubt that. I think the maximum entropy concept is always there, even if we don’t explicitly use it (e.g. in the equality of temperature in the two phases). The more we depart from explicitly using maximum entropy, the more likely we are to make errors in our derivation. And there may be errors in some derivations. See the demonstration in the paper I mentioned.

        Why and when stochasticity results in determinism is a matter deduced by probability theory.

        About the emergence of Hurst-Kolmogorov from statistical thermophysical considerations, see the paper linked by Spence_UK.

        When you speak of first principles, do you include the laws of thermodynamics in these principles? Then you have already allowed stochastics to enter in your edifice.

      • You may notice that I wrote: The different derivations are certainly related as all are based on the Second law. Other derivations are based on the equilibrium conditions and there’s certainly a relationship between equilibrium conditions and maximality requirement, when the maximization problem has been set up appropriately. Thus it’s indeed in a way possible to say that the other derivation can be based on maximum entropy.

        That’s, however, misleading, as are many other statements about maximum entropy, because the results are dependent on the setup. The principle alone doesn’t lead to the results; it requires setting up the maximization problem correctly. This is the stage where the user of the principle feeds in the essential factors that determine the outcome. The maximum entropy principle turns into a black box that converts the subjective input to an output. It appears that most users of the principle don’t realize the role of their subjective input. As I discuss in another message, the output is not particularly sensitive to the input as long as the input is “reasonable”, but even making it “reasonable” is an essential subjective input. Much of the input is realized by the choice of variables. That choice may appear non-controversial, as commonly used variables work just fine, but the principle by itself would also allow rather weird variables that would lead to very different results.

        It’s rather easy to construct plausible derivations, when much is known a priori from standard theory. That means, however, that the new approach is by construction reasonable, not by the merits of the approach. Maximum entropy can reproduce well known correct results, when it’s made to produce them. It could be made to produce seriously wrong results as easily. The principle doesn’t make that choice, the user makes it perhaps unknowingly.

      • Pekka, physical and mathematical concepts need to be gently handled to become powerful–and maximum entropy is not an exception. Being gentle with them should extend to avoiding depreciative characterizations of them (e.g. calling them black boxes) and of those who use them respectfully.

      • Demetris,

        The problem is that the theory seems to be lacking. It works more like a nice rule of thumb than something based on solid theoretical background. Good rules of thumb are valuable in many ways but they cannot substitute for a real theory.

        As I have already written, the validity of these approaches as rules of thumb is extensive enough for concluding that there must be good reasons for that. That means that the range of possible setups that lead to similar outcomes must be large as a consequence of some principles of statistics. Moving from this kind of vague notion to a real theory means studying the conditions for applicability of the approach. When that has been done well enough, it may be possible to tell, at least in some cases, whether the approach is guaranteed to work. Without such analysis the only thing that can be done is to try and compare with data or better theory.

    • Demetris – First, it’s gratifying that you’ve chosen to participate in the discussion after a post was written relevant to your work. Thank you.

      I believe we all understand that maximum entropy in an equilibrium situation is not the same as maximum entropy production in a system not in equilibrium, such as the dissipative climate system of the Earth. The MEP principle is not at this point provable, and is strongly doubted by many.

      I also think that Pekka Pirila below has described our common understanding of how the macroscopic behavior of a system can exhibit a degree of predictability despite the existence of stochastic elements.

      My understanding of climate behavior is strongly based on the positive direct evidence for dominant forcing effects, and is not dependent on estimating forced trends as simply what’s left over after unforced variability is subtracted. The relatively small contribution of that variability over certain intervals of major interest to us (the decades since 1950 for example) is however implied by data showing that major mechanisms of unforced variability would yield ocean heat uptake data very different from what has been observed, even allowing for measurement and sampling errors. Other possible mechanisms involving unforced variability as major players over that specific interval (but not necessarily earlier intervals or shorter ones) can be reasonably excluded on the basis of observational data on radiative fluxes and estimates of climate sensitivity, but those arguments are more complex than can be accommodated in this already long comment.

      None of this is to say that current models are adequate for accurate projections, but the role of forced trends can be reasonably well estimated without excessive reliance on model simulations.

      • Fred said, “The relatively small contribution of that variability over certain intervals of major interest to us (the decades since 1950 for example) is however implied by data showing that major mechanisms of unforced variability would yield ocean heat uptake data very different from what has been observed, even allowing for measurement and sampling errors.”

        Assuming that 1951 to 1980 is “average” would lead to your conclusion, but is 1951-1980 average? The Little Ice Age was a natural event and depressed temperatures. There is no particularly good evidence that the 1910 to 1940 period had significantly less volcanic impact than the 1980 to 2000 period, nor that solar was a major impact in the 1910-1940 period compared to the 1980 to 2000 period. Even Michael Mann’s new paper on the volcanic impact leading to the Little Ice Age indicates that the Little Ice Age was a significant natural event.

        http://www.volcano.si.edu/world/largeeruptions.cfm

        When the volcanic impact is considered with the over-estimation of climate sensitivity and the non-uniform distribution of warming, the possibility of human impact being greater than 50% is seriously diminished.

        You cannot be confident that AGW is greater than average when you do not know what average is.

      • Chief Hydrologist

        Odder and odder – max. ent. is a minor property of a system not in thermodynamic equilibrium. It seems unnecessary to prove it. All energy coming from the sun is reflected or emitted back into space.

        I hark back to the problem of neglected forcing – cloud radiative forcing associated with ocean variability. If this is not considered, then nothing else makes any sense. This is a blind spot for all of the warmists. The evidence exists in ISCCP, ERBS, CERES and Project Earthshine. It exists in COADS observations. Studies from Dessler and Clements are interpreted as positive cloud feedback to warming – but they are studies of cloud associated with the PDO and ENSO.

        In the CERES period – the major power flux changes are in the SW out. In the same period TSI decreased – cooling – and IR out was constant. The planet did warm in the period as shown by ARGO data.

        I quite happily predict more summer rainfall in north east Australia over the next decade or three. This is related to Pacific conditions – the cool PDO and an increased frequency and intensity of La Niña.

        Fred’s observations are really just a narrative with no bearing on reality but just postcards from the edge of madness.

      • Dallas – Choice of intervals is critical. The 1951-1980 interval is unrepresentative of 1950 to the present (too flat), as is the 1981-2010 interval (too steep), but over the entire period, we can exclude a major role for unforced natural variability of the type we’re aware of (ENSO, PDO, AMO, etc.) from ocean heat data, although not a minor role. As for natural forced variability (volcanic and solar), we have reasonably good estimates for the past 100 years – see Figure 5 of Gregory and Forster 2008, with other sources showing similar forcing data. This allows us to attribute considerably more than 50% of warming to the GHGs – but only for that interval. There’s reason to believe it also applies to the entire past century, but that is more speculative.

        Climate sensitivity is not a major element of these conclusions, but it does tend to exclude the possibility that some transient internal variation could have mediated a half-century long dominant trend, because that would require an astronomically high climate sensitivity. Remember that climate sensitivity indicates the rate at which a flux balance at the TOA is restored after a perturbation – a high sensitivity signifies a slow restoration rate and a low sensitivity a rapid rate. Even the highest end of typical estimates would be far too low for a strong half-century trend from a very transient perturbation.
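The link above between high sensitivity and slow restoration follows directly from the simplest zero-dimensional energy balance model, C dT/dt = F − λT: equilibrium warming is F/λ and the e-folding time is C/λ, so halving the feedback parameter λ doubles both. A back-of-envelope sketch, with every number illustrative rather than an estimate:

```python
# Zero-dimensional energy balance: C dT/dt = F - lam*T.
# Equilibrium warming is T_eq = F/lam and the restoration (e-folding)
# time is tau = C/lam, so low lam means both high sensitivity and slow
# restoration. All values below are hypothetical, for illustration only.
C = 8.0   # effective heat capacity, W yr m^-2 K^-1 (hypothetical)
F = 3.7   # forcing from doubled CO2, W m^-2

for lam in (2.0, 1.0, 0.5):  # climate feedback parameter, W m^-2 K^-1
    print(f"lam={lam}: T_eq={F / lam:.2f} K, tau={C / lam:.0f} yr")
```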

      • Fred Moolten
        Re: “Other possible mechanisms involving unforced variability as major players over that specific interval (but not necessarily earlier intervals or shorter ones) can be reasonably excluded on the basis of observational data on radiative fluxes and estimates of climate sensitivity,”

        I would welcome a formal post to Judith to understand your arguments.

        From the peanut gallery, I seem to be seeing the opposite.
        The quantitative work by Demetris Koutsoyiannis et al. (links above), hindcasting climate models against surface data, does not generate any great confidence in the global climate models, to put it mildly.

        Scafetta’s model seems to fit better than the IPCC’s:
        N. Scafetta, “Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models” Journal of Atmospheric and Solar-Terrestrial Physics, in press. DOI:10.1016/j.jastp.2011.12.005.
        We will see more as the decade progresses.

        Furthermore, Scafetta is able to use half the historical data to forecast the other half and vice versa. I have not seen that level of validation done by any of the global warming models. Quite the opposite.
        See Fred Singer on the global modeling run replication required to achieve reproducible results:
        Overcoming the chaotic behavior of climate models

        A synthetic experiment, using two distinct procedures, demonstrates that no fewer than about 20 simulations run on a typical IPCC general-circulation model are a prerequisite for determining useful constraints upon chaos-induced climatic uncertainties.

        Since the IPCC results come nowhere close, why should we not conclude that IPCC’s 90% confidence is nothing but hubris?
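The statistical core of the ensemble-size point above is simply that chaotic run-to-run scatter averages down as 1/√N, so meaningfully constraining the forced signal requires ensembles of tens of runs. A minimal sketch, where σ is a made-up spread, not a number from the cited study:

```python
import math

# Chaotic run-to-run scatter with standard deviation sigma averages down
# as sigma/sqrt(N) in an N-member ensemble mean. sigma here is a made-up
# illustrative spread, not a figure from any cited work.
sigma = 0.20  # K, hypothetical spread of a model's decadal-trend estimates

for n in (1, 5, 20, 100):
    print(f"N={n:3d}: standard error of ensemble mean = {sigma / math.sqrt(n):.3f} K")
```

Halving the uncertainty of the ensemble mean always costs a factor of four in runs, which is why a handful of simulations constrains chaos-induced uncertainty so poorly.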

        Then Roy Spencer has some fascinating developments that equally seem to raise major issues with the conventional climate orthodoxy.
        Deep Ocean Temperature Change Spaghetti: 15 Climate Models Versus Observations

        The Rest of the Cherries: 140 decades of Climate Models vs. Observations

        I’ve Looked at Clouds from Both Sides Now -and Before

        So far, I find the IPCC’s arguments “Not Proven”: for failure to include all the data (cf. NIPCC), failure to allow for all the uncertainties, systematic long term trends running higher than actual temperatures, and the hubris of 90% confidence (speaking of arrogance)!

        PS the only two I know of who will have an obvious direct hotline to heaven will be the Two Witnesses. We may yet live to see that day.

      • Here is another example of Fred’s vast scientific knowledge on climate change.

        Fred wrote: “As for natural forced variability (volcanic and solar), we have reasonably good estimates for the past 100 years ago – see Figure 5 of Gregory and Forster 2008

        What Fred fails to mention is that the Gregory and Forster 2008 paper bases its conclusions almost solely upon the outputs of various climate models, most of which have been shown to be very inaccurate.
        The paper in question showed that the models did a poor job of accurately representing observed results (Table 2). The Figure 5 that Fred references describes the median estimates of the various forcings and does not consider the accumulated potential margins of error.

        Fred- help me where I have misunderstood.

      • David Hagen – I don’t think your comments address the evidence I cite that specifically excludes a major role for unforced variability in the post-1950 interval. The forcing data show that GHG positive forcing strongly outweighed other positive forcings for that era, and the GF08 paper is one of many sources for that evidence.

        It’s important to realize that although the estimated magnitude of the forcings is somewhat dependent on individual GCMs, their relative weight is largely independent, even if one assumes that some unknown amplification of solar forcing is operating. In other words, the basic conclusions don’t depend on the accuracy of the GCM simulations described in IPCC AR4 Chapter 9, although those simulations add weight to the evidence. A common fallacy is to think that the conclusions require those simulations.

        There is clearly an uncertainty term in all this, but the dominance of GHG warming post-1950 is substantial enough to make the IPCC attribution that it exceeds 50% a very conservative statement, with little prospect that any future evidence will overturn it.

      • Chief Hydrologist, Fred Moolten, re: SW flux and late 20th C warming

        You’re probably both aware of these but just in case:

        – “Analysis of the decrease in the tropical mean outgoing shortwave radiation at the top of atmosphere for the period 1984–2000” (Fotiadi et al 2005). The study includes a global analysis, although the main focus is on the tropics.
        – “Evidence for strengthening of the tropical general circulation in the 1990s” (Chen et al 2002)
        – “Changes in Tropical Clouds and Radiation, Response” (Wielicki et al 2002b).
        There was also a comment by Trenberth on the preceding two papers, and a response by Wielicki et al., included in the final hyperlink.

      • Fred, one issue with volcanic impact is the region. While Mann included major equatorial volcanoes, he did not include major high northern latitude volcanoes, mainly Kamchatka and the Kuril Islands. These appear strongly in northern hemisphere temperature records.

        The northern hemisphere, especially the higher latitudes, 45 degrees and up, has a much higher sensitivity due to percentage land mass and snow albedo. You questioned a temperature dip in the late 1930s to 1940s; check the volcano timing yourself. I am just trying to solve a puzzle, and volcanic activity is a major piece.

      • Dallas – the dip I was concerned with occurred from about 1945 into the early 1950s, but I agree it’s worth looking at the volcanic record more closely.

        Oneuniverse – Thanks for the interesting references. I’m not qualified to judge the instrumentation issues, although they are clearly important. In any case, it’s clear that substantial short term fluctuations occur in both LW and SW fluxes, some due to ENSO and volcanism, and others less well explained.

        It’s hazardous to draw too many long term conclusions from records not much longer than a decade, but certain general principles are probably worth noting. First, for all their inadequacies, the models predict a positive cloud feedback on CO2-mediated warming, with many of the models finding that positivity in both the LW and SW components. The empirical observations are roughly compatible in the sense that a positive feedback due to reduced cloud cover would show up as a reduced albedo (reduced reflected SW), and possibly an increase in OLR if the reduction in cloud greenhouse effect outweighed any increase in clear sky radiative forcing from an increased CO2 concentration. Obviously, one can conjure up other explanations for the changes, but my point would be that they are at least consistent with an effect of increasing GHG concentrations as modified by intervening effects from ENSO and volcanic eruptions.

      • Fred Moolten – this is an example of where I believe you claim more knowledge on the topic of climate change, and more certainty of conditions, than is warranted. Perhaps I am wrong and you can show me I am wrong. It won’t be the 1st time and I’ll learn from the process.

        You wrote:
        “As for natural forced variability (volcanic and solar), we have reasonably good estimates for the past 100 years”

        and,

        “It’s important to realize that although the estimated magnitude of the forcings is somewhat dependent on individual GCMs, their relative weight is largely independent, even if one assumes that some unknown amplification of solar forcing is operating.”

        Fred- I do not understand why you believe we have reasonably good data on the forcings for the last 100 years.

        The paper you referenced estimated the magnitude of various forcings solely from the output of model(s), isn’t that true? That paper and those models did not hold the relative weights of the different forcings constant over time, did they?

        Why do you believe their relative weights are largely independent in the models? More importantly, why do you believe that the models predicted the relative weights accurately over time? Does Figure 2 lower your confidence? It appears the model was pretty darn inaccurate when checked against observations. What gives you the confidence that the models’ outputs are reliable for determining what the forcings were 100 years ago?

      • “The paper you referenced estimated the magnitude of various forcing solely on the output of model(s) isn’t that true?”

        Rob – That’s not true. Forcing data are based primarily on observed measurements processed through algorithms to translate them into values expressible in W/m^2 – e.g., the radiation codes utilizing the radiative transfer equations, CO2 concentrations and their change with time, spectroscopic measurements, temperature, and lapse rates. Indeed, models are typically involved along the way, but their output is generally checked against real world data for consistency, including upwelling and downwelling radiative fluxes at specific frequencies. As an example for CO2, see the article by Myhre et al 1998. The process for solar irradiance is different, but the principles are the same. All leave margins of uncertainty, but the relative weights of solar and GHG forcings, for example, are so different that there is little chance they might overlap. Note that these are not model simulations of long term temperature trends as done in IPCC hindcasts – there the model inaccuracies are more of a problem.
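For reference, Myhre et al. (1998), cited above, is the source of the widely used simplified fit ΔF ≈ 5.35 ln(C/C₀) W/m² for CO2 forcing, which is trivial to evaluate:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 forcing fit from Myhre et al. (1998): dF = 5.35*ln(C/C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling CO2 gives 5.35*ln(2), about 3.7 W/m^2.
print(f"{co2_forcing(560.0):.2f} W/m^2 for doubled CO2")
```

The logarithmic form is why each successive doubling of concentration adds roughly the same forcing.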

        Where models are probably more important than for relative forcing weight is in estimating the relationship between forcing and temperature as described by climate sensitivity values. However, for long term forcings acting more or less globally, different climate sensitivity values will affect each forcing in a similar fashion without greatly changing their relative contributions to temperature change.

        You are correct in implying that there is enough uncertainty for a 100 year interval to leave open the possibility that ghgs contributed less than half of the warming influence. The data since mid-century are better and, combined with ocean heat uptake (which tells us that the long term temperature trend must have been primarily a forced trend), lead to the estimated dominance of the ghgs, with enough room to spare to make the attribution “very likely” in IPCC parlance. “Very likely” is not 100%, and so something unexpected can never be excluded with absolute certainty, but this is one situation where the attribution is really very secure. In comparison, there have been other IPCC conclusions founded on less compelling evidence, which is why I find it puzzling that this particular attribution has been targeted in this blog.

      • Fred Moolton said (some way back):

        but over the entire period, we can exclude a major role for unforced natural variability of the type we’re aware of (ENSO, PDO, AMO, etc.) from ocean heat data, although not a minor role.

        Bender had a good way of talking about these patterns in the climate system – he referred to them as “eigenthingies”. What he is referring to, of course, is the eigenvalues of the covariance matrix of the climate state vector – one could describe them as “dominant patterns of variance” in the climate system.

        Bearing in mind that the climate system is a non-linear system coupling individual parts (ocean, atmosphere, even the land and tectonic plate movement), it is not possible to linearly separate the ocean and atmosphere. When this is realised, observing ENSO as a “cause” of temperature change is uninteresting and tautological. It is merely saying that patterns of temperature change in the climate system are caused by patterns of temperature change in the climate system. Furthermore, these patterns are not deterministically predictable, as we saw from the attempt to provide an ENSO prediction (now abandoned, I think, as the predictions evidently showed no skill – I could have saved them some money and told them this!)

        While the patterns certainly exist within the climate system, it is flawed thinking to:
        1. Remove them in any study of stochastic natural variability. They are part of stochastic natural variability and their removal will prevent proper characterisation of variability.
        2. Arbitrarily label some eigenthingies “natural variability” (e.g. ENSO, AMO, PDO, etc) and arbitrarily label some eigenthingies “anthropogenic” without a credible model of natural variability. It is important to note that frequency is not, in itself, a proper justification, and neither is the simple observation that it is a pattern in the climate system.
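To make the “eigenthingies” concrete: the dominant patterns of variance are the leading eigenvectors of the covariance matrix of the state vector. A toy sketch with synthetic data (the dipole pattern, sample count, and noise level are all invented for illustration):

```python
import numpy as np

# Toy "climate state": 500 monthly samples at 4 grid points, driven by
# one hypothetical dipole mode plus weak noise (all numbers invented).
rng = np.random.default_rng(2)
pattern = np.array([1.0, -1.0, 0.5, -0.5])
amplitude = rng.standard_normal(500)
X = np.outer(amplitude, pattern) + 0.1 * rng.standard_normal((500, 4))

# The leading eigenvector of the covariance matrix is the dominant
# pattern of variance (an EOF, or "eigenthingy").
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
leading = eigvecs[:, -1]                  # column with the largest eigenvalue
print(np.round(leading / leading[0], 1))  # recovers roughly [1., -1., 0.5, -0.5]
```

Note that the eigenvector is recovered only up to sign, which is why the printout is normalized by its first component.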

      • Meh. Replace “eigenvalues” with “eigenvectors”. Too early in the morning for me to type straight :-)

      • Rob Starkey – My response to you last night was long, and the hour was late, so I didn’t add to it. However, I should clarify that the Myhre et al work I cited to illustrate part of the process uses models, but not GCM simulations of climate change that are a subject of much contention. Almost all real world application of physics utilizes models of one sort or another, but Myhre et al applied models in a very restricted way simply to evaluate the change in radiative long wave flux at the tropopause following a hypothetical change in CO2, with other variables held constant. In these models, there is no consideration of temperature change as a function of CO2 or time, no effects of ice, water vapor, cloud changes, no wind, ocean circulation, eddies, or other fluid dynamics, no heat transfer to the ocean, no evaporation or precipitation, and no changes in solar absorption, among other variables. The models focus on changes in radiative absorption and emission over the vertical height of the atmosphere as a function of the absorption coefficients at different IR wavenumbers. Because it is computationally demanding to compute this line by line (LBL) for every line in the CO2 absorption spectrum, the models also evaluate band averaging algorithms for their ability to match the accuracy of LBL simulations. Error magnitudes are estimated, and indicate that potential for error exists, but also that the computed values are going to be reasonably close to accurate. Real world satellite and ground based measurements yield results consistent with estimated values. For a very brief description of some of the relevant physics, Raypierre’s Physics Today article is worth visiting, or revisiting if you’ve seen it already.
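For readers who want the punch line of such band calculations without the LBL machinery: Myhre et al.’s results are commonly summarized by a simplified logarithmic fit, ΔF ≈ 5.35 ln(C/C0) W/m². A minimal sketch (the 278 ppm pre-industrial baseline is an assumption, not part of the fit):

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Change in radiative forcing (W/m^2) for a CO2 change from c0 to c,
    using the simplified logarithmic fit of Myhre et al. (1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Any doubling gives the same forcing: 5.35 * ln(2), about 3.7 W/m^2.
print(round(co2_forcing(556.0, 278.0), 2))  # 3.71
print(round(co2_forcing(390.0), 2))         # forcing at a ~2011 concentration
```

The logarithmic form is why climate sensitivity is conventionally quoted “per doubling” of CO2.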

      • Fred
        I am continuing this point because it is a situation where you seem to be writing things that you either know to be inaccurate or you are misunderstanding what is written in the papers you are reading. It does not seem to be good science.
        You wrote “As for natural forced variability (volcanic and solar), we have reasonably good estimates for the past 100 years ago”
        I wrote that I do not understand how you can claim that to be true because the paper you referenced used models to estimate the various natural forced variability and the model used seems to be far from accurate when compared to observations in recent times, so what makes you believe they are a good representation of conditions 100 years ago (or 50 years ago, or 25 years ago)?
        You wrote: “not true. Forcing data are based primarily on observed measurements processed through algorithms to translate them into values expressible in W/m^2 – e.g., the radiation codes utilizing the radiative transfer equations, CO2 concentrations and their change with time, spectroscopic measurements, temperature, and lapse rates.”
        Fred – using algorithms to translate forcings into values is developing a model. It isn’t a GCM, but it is a model. That is not necessarily a bad thing if the model matches observed results, but in this case it appears that the model did not match the observed results very well. Hence my point that your statement was not accurate when you wrote that we have good estimates of natural forced variability for the past 100 years. What we actually have are estimates from models that have been shown to be fairly inaccurate.
        I am trying to demonstrate where you frustrate people in your discussions here. What you wrote back was a long, obtuse response that tried to seem overly technical, but actually was largely irrelevant to the initial questions or points.

      • Rob – I’ll be content to let others read these exchanges and visit the references to make their own judgments, including whether I “misunderstand” what’s in the papers, and whether what I say is “irrelevant”.

      • Fred
        I had not looked at your latest message before posting my last comment.

        I am glad that you recognized that what was done was using models, just simpler ones.

        I took a quick look at the article by Pierrehumbert, but I do not find it to be relevant to the issue of whether it is accurate to claim that we have reasonably good estimates of natural forcings for the past 100 years. I still only see evidence that we have estimates from unproven models – basically, models that have demonstrated fairly poor accuracy.

    • I agree with your ending too that in the search for truth, for its own sake, we need to look “for better alternatives, perhaps less algorithmic-intensive, needing less powerful supercomputers (which, despite being also money-intensive, ultimately may not make any difference), and more thought- and knowledge-intensive.”

      True too, if I understand you correctly, that the null hypothesis of AGW Theory has never been rejected, that all climate change can be explained by natural causes. “How could we reject a hypothetical model (e.g. one in which the climate sensitivity is very small), according to which the entire observed (past) variability is “internally generated natural variability”, while the response to change in external forces is negligible?” ~Koutsoyiannis

    • Demetris, thank you so much for stopping by to participate in the discussion

      • Pekka, http://www.engineeringtoolbox.com/carbon-dioxide-d_1000.html That is the source for the CO2 data.

        This is the air properties. http://www.engineeringtoolbox.com/air-properties-d_156.html

        Optimizing turbulent flow versus pressure loss was a pretty big deal in my former life. You can work the numbers tighter, but unless the Engineering ToolBox data is totally bogus, I should be in the ballpark.

        Vaughan, I was also wondering about the thermal conductivity in the ocean. The 4C ocean thermal boundary layer, mainly in the Antarctic, is in a prime location to lose a little more heat with CO2 enhancement because of the temperatures. The impact of the CO2 is small, but long term it could have a significant impact on the ocean heat uptake if the rate of warming from the surface mixing layer in the mid latitudes and tropics is as slow as I suspect. I am looking for a mechanism to enhance the mid term pseudo cycles, and I think this is it.

      • Capt Dallas

        Your number refers to liquid CO2, and you compare that with air. Such a comparison makes no sense: it tells nothing about how CO2 dissolved in water affects the thermal conductivity of the water. In principle it could be possible to create a pool of liquid CO2 at the bottom of the deep sea, but that’s the only place where we could have liquid CO2, and it would not have any influence on anything not very close to it.

        You can find the thermal conductivity of gaseous CO2 here

        http://www.engineersedge.com/heat_transfer/thermal-conductivity-gases.htm

        CO2 in air will reduce heat conductivity but so little that it has no significance. I would guess that CO2 in water has also a very small reducing effect, but on that I have no data.
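Pekka’s “so little that it has no significance” is easy to put a rough number on. A crude sketch using a simple mole-fraction mixing rule (an assumption – rigorous gas-mixture rules are considerably more involved – with approximate handbook conductivities near 300 K):

```python
K_AIR = 0.026   # W/(m K), air near 300 K (approximate handbook value)
K_CO2 = 0.0166  # W/(m K), gaseous CO2 near 300 K (approximate)

def k_mix(x_co2):
    """Linear mole-fraction estimate of the mixture conductivity (crude)."""
    return (1.0 - x_co2) * K_AIR + x_co2 * K_CO2

# Fractional change in air's conductivity at 390 ppm CO2:
change = (k_mix(390e-6) - K_AIR) / K_AIR
print(f"{change:+.4%}")  # about -0.014%: a reduction, and a negligible one
```

Consistent with Pekka’s point, the sign is a reduction (CO2 gas conducts less than air), and the magnitude is of order a hundredth of a percent.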

      • My understanding of the role of CO2 in global warming does not include liquid CO2. I’d therefore be very interested in assessments of such a contribution. If significant I would need to incorporate it into my model of long term climate change, which currently gives an R2 on the order of 0.9996 when neglecting it completely. It would need to further increase R2 in order to justify its consideration, which would be remarkable.

      • Just wait until carbon capture is widespread and people start feeding liquid CO2 in huge quantities to deep sea locations ;-)

      • The 4C ocean thermal boundary layer in the Antarctic mainly, is in prime location to lose a little more heat with CO2 enhancement because of the temperatures.

        (Sorry not to pick up on this faster, cd.)

        The 4C boundary is by no means confined to the Antarctic, cd. At a kilometer or deeper you can find that temperature just about anywhere in the world.

    • Demetris, Judith might have done you an unkindness by choosing to post one of your less technical meditations on hydrology and climate. The article “A random walk on water,” which you pointed out to Fred Moolten, has considerably more technical depth, warranting a more technical response than seemed appropriate or even possible for “God and the arrogant species”.

      The abstract of the random-walk article says “long horizons of prediction are inevitably associated with high uncertainty.” The second paragraph of the body of the article asserts that “the movement of planets is a typical example of a deterministic phenomenon, whereas that of dice is thought to be random.” These seem to contradict each other, leaving me confused as to your point.

      If one breaks time into frequency bands, it is surely reasonable to say that some bands are more crowded than others with signals bearing no evident relation to each other. From that perspective a simple definition of an apparently random signal might be one that is found in a certain band and has no evident analysis as a sum of simply described signals.

      Your abstract seems to be suggesting, as a generality, that lower frequency bands are more crowded in the above sense than higher. Yet your second paragraph gives an example of a (rather broad) band which is very uncrowded.

      Might it not be better to think of both hydrological and climate data as divisible into bands some of which are more crowded than others? This would be consistent with most of the rest of your article provided you did not express a preference for higher over lower frequencies or vice versa.

      Hydrology is outside my sphere of competence and I’ll have to go with your assessment of how such frequencies are distributed. I’m better acquainted with global climate, which like hydrology admits analysis into bands.

      Looking at the monthly global land-sea HADCRUT3 data since 1850, the bands corresponding to periods shorter than a decade seem rather crowded (hence random) to my eye, but this might just be my inability to separate those bands into sufficiently few simple signals.

      However from a decade up (or down if measured by frequency instead of period) the bands seem very uncrowded. There is a fluctuation with an obvious correlation with the 11-year solar cycle that seems to be well described as a sum of two phase-locked oscillations of respective periods in the vicinity of 125 months and 250 months, correlated with respectively the TSI and the magnetic polarity of the solar wind, with an evident causal mechanism for the former if not the latter.

      Some two octaves lower down there is another signal that is traditionally identified with multiple ocean oscillations which admits of an analysis as a sum of phase-locked oscillations, though with only 162 years of data to go on it is hard to be sure of this.

      Lastly there is a signal growing in proportion to the log of what the CO2 was some 15 years ago, a time constant that may result from the ocean’s heat capacity and relatively low thermal resistance.

      But that’s it. The 7 octaves below 3 nHz, corresponding to periods longer than a decade, would appear to contain nothing significant except these three signals!

      The ice cores give us a nice picture of the next dozen octaves below those seven, which the geological record extends by another dozen or so octaves. That’s a lot of bandwidth, and one would not be surprised to find a lot of signals across that wide range. The Milankovitch cycles are particularly well known, with a reasonably well understood cause (though Richard Muller might dispute just how well understood). Lower than that I don’t know what other cycles per se there are, mainly there seem to be just events and trends.

      For both regional and global hydrology, and likewise for climate, estimates of population density of signals in each octave would I think be very helpful in giving us a way of thinking about what randomness is and where it is particularly concentrated. This would replace the metaphor of a God that throws dice occasionally with that of the harmony of the spheres.
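The three-signal description above can at least be illustrated (not validated) with synthetic data: build a monthly series from a slow log-CO2 signal plus two solar-band sinusoids plus broadband noise, then recover the components by least squares. Everything here – the CO2 growth curve, amplitudes, and noise level – is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1850.0, 2012.0, 1.0 / 12.0)   # monthly samples, 1850-2012
co2 = 280.0 + 0.4 * (t - 1850.0)            # hypothetical CO2 history (ppm)
y = (2.0 * np.log(co2 / 280.0)                        # slow log-CO2 signal
     + 0.05 * np.sin(2 * np.pi * t / (125.0 / 12.0))  # ~125-month cycle
     + 0.03 * np.sin(2 * np.pi * t / (250.0 / 12.0))  # ~250-month cycle
     + 0.10 * rng.standard_normal(t.size))            # "crowded" high bands

# Least-squares fit of the three hypothesized signals plus a constant.
X = np.column_stack([np.log(co2 / 280.0),
                     np.sin(2 * np.pi * t / (125.0 / 12.0)),
                     np.sin(2 * np.pi * t / (250.0 / 12.0)),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # close to the planted [2.0, 0.05, 0.03, 0.0]
```

With real data the hard part is of course the opposite direction: deciding which basis signals to put into the design matrix in the first place.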

      • Vaughan Pratt said, “Lastly there is a signal growing in proportion to the log of what the CO2 was some 15 years ago, a time constant that may result from the ocean’s heat capacity and relatively low thermal resistance.”

        How low do you reckon? CO2 improves thermal conductivity and an average increase in surface wind velocity in the southern oceans amplifies ocean heat loss considerably.

      • Cap’n Trade, I don’t believe that either atmospheric or dissolved CO2 actually adds a significant thermally conductive path. A solid-state physicist visualizes thermal conduction via phonons, and visualizing this I see individual CO2 molecules as being spaced much too far apart to conduct heat. At 390 ppm, the CO2 molecules will be spaced such that there will be on average 15 other molecules (N2, O2, etc.) between them. How would that improve thermal conduction?

        Logically the background H20 will swamp CO2 in any case. Why is CO2 important but H2O is not? Or are you thinking about furnaces or compressors where CO2 is under a lot of pressure ?

        The stuff you say sometimes makes me think that I wasted my whole education because you are saying something profound that I somehow missed. But then I realize I didn’t and you just have the disease of the ordinary garden-variety skeptic, a reasoning deficiency syndrome.

        On the other hand, Vaughan Pratt is always interesting to read because with him you get a logical overdose.

      • How low do you reckon? CO2 improves thermal conductivity and an average increase in surface wind velocity in the southern oceans amplifies ocean heat loss considerably.

        Sorry about that. Instead of “and relatively low thermal resistance” I should have written “and relatively high thermal conductance.” Let me know if that doesn’t address your question.

      • The stuff you say sometimes makes me think that I wasted my whole education because you are saying something profound that I somehow missed. But then I realize I didn’t and you just have the disease of the ordinary garden-variety skeptic, a reasoning deficiency syndrome.

        Oh, come on, Web, when was cd ever mean to you? Be kind to the kind and mean to the mean (I won’t name names).

        On the other hand, Vaughan Pratt is always interesting to read because with him you get a logical overdose.

        Absolute truth and absolute falsehood are equally toxic. Fortunately most are immune to both or I wouldn’t sleep as soundly. ;)

      • Web said, “At 390 PPM, the CO2 molecules will be spaced such that there will be on average other 15 molecules (N2, O2, etc) between them. How would that improve thermal conduction?”

        That’s about right. CO2 has a thermal conductivity of 0.115 at -20 C versus about 0.020 for air at -20 C. That should improve conductivity by about 80 milliWatts/m^2, or about 10% of the imbalance, all things being equal. Increased average wind speed improves the conductivity mainly at the ocean/atmosphere boundary layer.

        The impact of the CO2 on conductivity is primarily at temperatures below water vapor’s impact range, and it allows more heat transfer to water in all phases, as well as to N2 and O2, increasing the rate of upper-level convection. So its only noticeable impact would be at the surface and just above the average cloud-top altitude. I believe that it is one of the major reasons for the greater-than-expected increase in the rate of convection. If it is, its impact will increase with CO2 concentration.

        Conduction itself may be small relative to convection and latent, but it impacts both convection and latent heat transfer.

      • That spacing is 15 molecules in each dimension. This means a CO2 molecule is 15^3 times as rare as the non-trace molecules. Heat diffusion is isotropic in a gas.
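The spacing figure is easy to check: at mole fraction x, the mean number of molecules per axis between CO2 molecules is roughly (1/x)^(1/3), close to Web’s back-of-envelope 15:

```python
# Mean spacing (in molecules, per axis) between CO2 molecules
# at a given mole fraction: cube root of the inverse fraction.
def spacing_per_axis(ppm):
    return (1e6 / ppm) ** (1.0 / 3.0)

s = spacing_per_axis(390.0)
print(round(s, 1), round(s ** 3))  # 13.7 per axis, rarity factor 2564
```

The cube of the per-axis spacing recovers the inverse mole fraction, i.e. the rarity factor in three dimensions.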

        Did you really fall for my trap?

        Vaughan is reading this with bemusement, I am sure.

      • Web,

        http://redneckphysics.blogspot.com/2012/02/comparing-perfection.html

        That simple radiant versus R-value model has a couple of interesting features. One is the “Effective” radiant layer of the surface which is at the cloud top latent boundary. The second is the imbalance in the tropopause, which should be about 0.9 W/m^2. CO2 would raise the average radiant layer in order to warm the surface, but the average “effective” radiant layer of the surface will also rise. So there is a trade off, or cap’n trade off :), between the radiant and conductive (conduction, convection and latent should be grouped with the R-value) impacts.

        The trick as you mentioned is determining what the conductive impact is relative to the radiant impact and how much each will change with various forcings. Conductive is more linear than radiant, so its impact should increase relative to radiant under reduced solar and/or albedo forcing, making it one of the longer time scale factors. That is of course, unless the tropopause begins to become saturated which doesn’t appear to be the case until CO2 concentrations are very high.

      • Web, I said it sounds about right, as in being small in comparison. At only 390 ppm, CO2 is outnumbered, but 0.115/0.020 is nearly six times the impact per molecule. A doubling of CO2 only produces about a 0.02% increase in thermal conductivity, but a small, steady increase over time has an impact. When you look into the GRIP cores, the swings were what, +/- 12 C over several decades or a century?

      • Conduction as you define it is diffusion, and diffusion is very slow in a rarefied atmosphere. When you suggest a 0.02% increase in thermal conductivity is significant, all I see is a 0.02% increase over a very small thermal conductivity to begin with. Face it: in this environment, radiative thermal transfer completely overshadows thermal conduction.

        A CO2 molecule emitting an IR photon will go a long distance in comparison to a diffusional hop.

      • That’s about right. CO2 has a thermal conductivity of 0.115 at -20 C versus about 0.020 for air at -20 C.

        What is the source of these numbers?

        The sources that I have found tell that the thermal conductivity of CO2 is significantly (about 30%) less than that of air, which seems natural, because the conductivity decreases with the molecular weight. At higher temperatures the difference gets smaller as the specific heat of CO2 goes up, but it remains below that of air.

      • Web, in general that is absolutely correct: conduction in the atmosphere is, on average, small and slow with respect to radiant transfer. At the boundary layers, though, the situation is different. Heat transfer from the ocean to the atmosphere can be 100:1 versus the transfer from air to ocean, and it depends on the turbulence, or lack thereof, at the fluid boundaries. The convection and latent heat transfer that result are more rapid fluxes than the conduction that initiated them.

        Again, at the lower-troposphere latent boundary, the conductive heat transfer is greater, stimulating more rapid convection and mixed-phase cloud formation. Between boundaries there is little impact.

        So that small 0.02% is a little misleading, since that is with all things remaining equal. It is amplified by the increase in surface temperature and the average turbulence at the boundary layers, and since it increases the convective rate, that further amplifies the impact. Since convective and latent cooling are not insignificant in the system, all things being equal a 0.05% increase would offset half of the 1% increase in warming expected with a doubling of CO2. Conduction, convection and latent heat transfer do offset radiant forcing, and at lower solar forcing they would stimulate cloud formation and reduce average cloud-base altitude – kinda like what looks like is happening in the system. The question is how much?

      • Sorry, I should have stepped in much sooner to say that the “relatively low thermal resistance” of the ocean that I was referring to is more properly called convection, the means by which the warmth in the top few meters is conveyed several kilometers downwards.

        The thermocline is the region of greatest rate of change of temperature with depth. Above and below the thermocline the temperature is relatively constant with depth. The portion below has enormous heat capacity and acts as a regulator to keep the bottom of the thermocline relatively constant with time. The portion above is more susceptible to global warming itself (though I have no idea about the contribution made by cd’s point about improvement of thermal contact of air with water via rising CO2) and I imagine is the biggest determinant of the delayed impact of rising CO2. This delay creates the appearance of a climate sensitivity of 1.85 degrees per doubling when it would be closer to 2.85 if the world’s oceans were a mere two feet deep (with the area the same).

        The lower the thermal resistance between the surface and the thermocline, the longer the delay before the heat pouring into the thermocline and below is felt back at the surface.

        Not taking this effect into account when inferring climate sensitivity from observation can take upwards of a degree off its actual value today. This is different from computing only no-feedback sensitivity, though with a similar sensitivity-lowering effect.
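The delay argument can be sketched with a one-box energy balance: heat capacity delays the response to a ramping forcing, so the transient warming undershoots the equilibrium value, making the inferred sensitivity look lower than it is. All parameter values below are placeholders chosen only to show the shape of the effect; they are not calibrated to the 1.85-vs-2.85 figure:

```python
# One-box model: C dT/dt = F(t) - LAM * T, with a linear forcing ramp.
LAM = 1.3    # W/m^2/K, feedback parameter (assumed)
C = 15.0     # W*yr/m^2/K, effective ocean mixed-layer heat capacity (assumed)
RAMP = 0.03  # W/m^2 per year, forcing ramp rate (assumed)

dt, T, t = 0.1, 0.0, 0.0
while t < 150.0:                      # integrate 150 years (forward Euler)
    T += dt * (RAMP * t - LAM * T) / C
    t += dt

equilibrium_T = RAMP * t / LAM        # where T would settle if forcing froze
print(round(T / equilibrium_T, 2))    # transient undershoots equilibrium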
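```
A deeper effective ocean (larger C) lengthens the lag C/LAM and widens the gap between transient and equilibrium response.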

      • Chief Hydrologist

        The process of heat movement to the deep oceans might more properly be called turbulent convection – eddies not from the surface alone but generated by irregularities in the ocean floor (sub-sea mountains and valleys) and interacting with sub-surface currents.

        The heat rises buoyantly in water that is less dense than surrounding cold water – this is why the ocean temperature is not uniform. The heat rises and concentrates at the surface in a layer that is well mixed by wind and waves and ranges from some 120m deep at the equator to mere metres poleward. It is literally a warm layer floating on a cooler sub-surface layer. The heat is not hiding in the deep oceans – it was always there and it quickly moves to the surface layer.

        The oceans are warmed, at depths from mere metres to more than a hundred metres, by SW from the Sun. They cool from the top microns in the IR and by evaporation and conduction.

        The oceans must shed the energy they gain – over a period energy in must equal energy out but it is not easy to measure. The oceans are not an accumulator. The oceans are in max. ent. – they are losing energy as fast as they can. The rate depends on day and night, winds, sea surface temperature, upwelling of turbulent sub-surface currents, clouds, ice, biology, etc.

        There is a seasonal thermal (not energy) equilibrium between the atmosphere and oceans – and that is used in determining surface temps over oceans. So there is a coupled ocean/atmosphere system that gains and loses heat instantaneously – although the bulk of the thermal inertia is with the oceans. It seems a better conceptual model than the heat is in the pipeline.

      • It seems a better conceptual model than the heat is in the pipeline.

        In general yes. However, when there’s a steady rise the pipeline model becomes appropriate, because even though it’s cooler a few hundred meters down than at the surface, the whole system keeps warming, meaning that all levels are accumulating heat.

        If the surface stops warming that heat will then come back out at you. But even if it doesn’t the “pipeline” continues to provide rising “back pressure” to the constant downflow of heat as all levels continue to rise in temperature.

      • Chief Hydrologist

        I think you have missed an essential point in that heat is transported to depth but doesn’t stay there – it floats buoyantly to the surface layer where there is a store of heat that is much of the thermal inertia of the coupled ocean atmosphere system. It is very unphysical to think of heat lurking at the bottom.

      • Vaughan, Chief would be better at explaining the impact of the 4C thermal/density boundary layer than me. Basically, turbulent mixing attempts to drive heat down, but the density of salt water at 4C changes the buoyancy of the water, tending to cause it to flow upwards. At 4C and lower-than-freezing surface air temperatures, laminar flow replenishes the 4C layer from the surface near the poles – the South Pole primarily, because there is more water area versus land and colder average temperatures.

        Different mechanisms but similar impact as the tropopause.

      • the whole system keeps warming, meaning that all levels are accumulating heat.

        Right you are, CH and cd, it doesn’t work the way I said at all (blush). Increasing surface temperature can only increase the top 200 or so meters and/or push down the thermocline. Currently I’m thinking that the former is the primary effect, which in turn must drive the latter, because the top of the thermocline has to track the increasing temperature above it, making the thermocline longer (on graph paper). Is that description consistent with either of your understandings?

      • Chief Hydrologist

        I don’t know Vaughan – start here – http://www.srh.noaa.gov/jetstream/ocean/layers_ocean.htm

        All I am saying is that the oceans/atmosphere is a coupled system. If the atmosphere cools the ocean cools. Instead we have this concept that if the atmosphere stops warming from CO2 the oceans will continue to increase the temperature of the atmosphere for some time after. The heat in the pipeline. This is unphysical rubbish.

      • Fred Moolten
        Re “unkindness . . . posting one of your less technical meditations on hydrology and climate.”
        On the contrary, Curry properly highlights a crucially important message to the climate community about the dangers of hubris (aka arrogance).

        “The arrogant species is fooling itself if we think we can ‘project’ the state of the climate in 50 years or 100 years, “

        Confronting the mounting evidence of poor performance and higher uncertainty is a vital message that must not be swept under the rug. The greatest danger is the arrogance of claiming far more than the evidence justifies. The political and funding bias driving this trend is very dangerous and severely undermines the integrity of science.
        Re: “I think we have an excellent handle on natural variability, which includes both natural forcings (solar and volcanic), and internal climate modes. . . .have positive evidence rejecting a dominant role for natural variability averaged over the past six decades, and we also have separate evidence for the dominant role of forced trends.” “The forcing data show that GHG positive forcing strongly outweighed other positive forcings for that era, and the GF08 paper is one of many sources for that evidence.” (I presume Grant and Foster 2008 J. Geo.Res. 113-D23105.)
        I see that as an example of the problem of arrogance raised in this post:
        The analysis of hydrological and thermal data by Koutsoyiannis and his group, and their quantification of Hurst-Kolmogorov dynamics, shows much higher natural variability than expected by current global climate models. Furthermore, they and others show the IPCC models have “no skill” in quantitative forward prediction!
        Cloud uncertainty
        Clouds are the largest source of uncertainty in quantifying climate feedbacks and sensitivity. See the 2011 NIPCC Interim Report and the 2009 Report. Look especially at the uncertainties in the climate models.

        “In reality, therefore, we probably do not know the current atmosphere’s aerosol radiative forcing to anything better than +/- 100%, which does not engender confidence in our ability to simulate earth’s climate very far into the future with state-of-the-art climate models.”

        etc.

        No Skill: Analyzing 17 models, Reifen and Toumi (2009) find

        “no evidence of future prediction skill delivered by past performance based model selection” noting “there seems to be little persistence in relative model skill.” . . . “feedback strength and forcing is not stationary . . .”.

        Reifen, C. & Toumi, R. 2009. Climate projections: Past performance no guarantee of future skill? Geophysical Research Letters 36: 10.1029/2009GL038082.

        With cloud uncertainties so great that not even the sign is known with confidence, and with models that have “no skill”, how can you claim that “we have an excellent handle on natural variability”? Is that not an example of the “arrogance” that this post highlights?

        Please enlighten us as to which global climate models of 1997 predicted the current decade of flat temperatures, and in what publications. They mostly seem to be playing catchup with explanations after the fact. See Lucia’s: GISTemp Anomaly: January lower than December.

        The trend since 2001 is 0.006C/decade – positive, but far below the nominal multi-model mean trend of 0.2C/decade. If we use “red noise” to model the residuals from a linear fit and test the hypothesis that the true trend is 0.2C/decade, we would reject that trend as false, based on its falling outside the 2-σ confidence intervals.
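For readers unfamiliar with Lucia’s procedure, the general shape of such a test can be sketched on synthetic data (an illustration, not her actual code): fit an OLS trend, estimate the lag-1 autocorrelation of the residuals to stand in for “red noise,” widen the trend’s standard error accordingly, and ask whether 0.2C/decade falls outside 2σ:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(132) / 12.0                            # 11 years of monthly data
y = 0.0006 * t + 0.10 * rng.standard_normal(t.size)  # nearly flat toy series

slope, intercept = np.polyfit(t, y, 1)               # OLS trend (C per year)
resid = y - (slope * t + intercept)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation
n_eff = t.size * (1.0 - r1) / (1.0 + r1)             # effective sample size
se = resid.std(ddof=2) / (np.sqrt(t.size) * t.std()) # OLS slope std. error
se_adj = se * np.sqrt(t.size / n_eff)                # red-noise inflation

hypothesized = 0.02                                  # 0.2 C/decade in C/year
print(abs(slope - hypothesized) > 2.0 * se_adj)      # True: reject 0.2 C/decade
```

The effective-sample-size correction is the standard AR(1) adjustment; positively correlated residuals carry less independent information, so the confidence interval widens.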

        For a popular version, see the WSJ
        Concerned Scientists Reply on Global Warming “The authors of the Jan. 27 Wall Street Journal op-ed, ‘No Need to Panic about Global Warming,’ respond to their critics.”

        When predictions fail, we say the theory is “falsified” and we should look for the reasons for the failure. . . . the data strongly suggest a much lower CO2 effect than almost all models calculate. . . .follow the motto of the Royal Society of Great Britain, one of the oldest learned societies in the world: nullius in verba—take nobody’s word for it. . . .Many proxy indicators show that the Medieval Warming was global in extent. And there were even warmer periods a few thousand years ago during the Holocene Climate Optimum. . . . the IPCC has greatly underestimated the natural sources of warming (and cooling) and has greatly exaggerated the warming from CO2. . . .Since CO2 is not a pollutant but a substantial benefit to agriculture, and since its warming potential has been greatly exaggerated, it is time for the world to rethink its frenzied pursuit of decarbonization at any cost.

        Note that IPCC projections are systematically high compared to 1989–2011 temperatures.

        Furthermore, Singer (2011) shows that almost an order of magnitude more runs are needed for current models to overcome chaotic variations. e.g., 20 runs of 20 years each are required. The IPCC models only have 1-5 runs.
        Contrast Nicola Scafetta who uses half the historic data to predict the other half and vice versa. J.Atm.Sol.Terr.Phys (2011)
        Short term data may well appear to be increasing exponentially – until you realize you are only looking at a small rising portion of a sinusoid!
        For a dose of reality, I strongly recommend systematically reading through the papers by Koutsoyiannis et al. and the reviews of the NIPCC. The evidence is so diverse, and models so far from “prime time” that claims of “excellent” understanding are a growing embarrassment to sound climate science.
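Singer’s ensemble-size point (20 runs of 20 years) rests on simple statistics: unforced “chaotic” variability with standard deviation σ averages down only as 1/√n across independent runs, so separating a forced signal of comparable size takes many runs. A toy sketch (σ is an invented placeholder):

```python
import math

SIGMA = 0.15  # C, assumed internal variability of one run's decadal mean

# Standard error of the ensemble mean shrinks as 1/sqrt(number of runs).
for n_runs in (1, 5, 20):
    print(n_runs, round(SIGMA / math.sqrt(n_runs), 3))
```

With 1-5 runs the ensemble-mean noise is of the same order as the trends being attributed; at 20 runs it has shrunk by a factor of about 4.5.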

      • David Hagen – One of the problems I think some of us face is that there is so much misinformation on the Web – from WUWT, NIPCC, and elsewhere – that refuting it would become a full time job with no time left for any productive activities. Now it may be unfair to you, but it’s my sincere impression that you have been a purveyor of some of that through comments and links you bring to this blog from those other sources. My sense, unfair or not, is that you don’t have a true interest in understanding climate or climate change, but only in gathering ammunition to attack the conclusions that most climate scientists draw about what is happening in the real world. Long after everyone else gave up on Miskolczi, for example, you continued to cite him, and I’ve seen the same thing with materials from WUWT and other sources on ocean acidification, the magnitude of solar contributions, and many other phenomena where the physical principles and the uncertainty margins are reasonably well known, but the Web sources I mention attack them with arguments that are clearly spurious.

        Regretfully, therefore, I won’t engage in showing once again why the claims you cite are wrong or exaggerated, because it’s been done many times before, but also for another reason. That reason is that I have no reason to believe that you’re interested in improving your knowledge. If you were, there are plenty of resources with better credentials than I can offer who can help you dispel your misconceptions and to begin to understand some of the basics. I could only help at the margins. If at some future time, I’m convinced you want to learn, I’ll be happy to engage in discussions with you, or more importantly point you to expert sources where you can find the details you need. Until then, I probably won’t address the general claims you make, but perhaps respond on occasion to very specific points that need clarification. There aren’t enough hours in the day for me to do more than that.

        One sign that you’ve started will be the evidence that you’ve stopped reciting claims that appear on WUWT. That blog, like many others, may occasionally be an interesting place to visit for individuals already well conversant with climate science principles. For the uninitiated, however, it probably represents time wasted that could have been spent enhancing their knowledge rather than reading material that misrepresents our state of knowledge. It may be a place for partisans seeking the ammunition I mentioned, but not, based on my experience, for readers seeking an accurate understanding of climate behavior.

  48. Chief Hydrologist

    This narrative of Fred’s is total nuts. The assumption that albedo doesn’t change can’t be questioned or the whole damn leaky edifice comes crashing down. And the assumption that it doesn’t change is – frankly – looney tunes.

    Earthshine and ISCCP –

    Wong 2006 – ERBS

    • Robert – Albedo changes. Get a life.

      • Chief Hydrologist

        It is just total nuts because of the quantum of change. I can’t be bothered to have a serious conversation with you because you drop in with a turgid narrative that is inevitably long winded and with negligible intelligible content. It is warminista crapola with nuts on.

        The SW reflected declined several watts/m2 from 1984 to 2000 – and then jumped back up in the 1998/2001 climate shift.

        Get a brain that works for God’s sake – or not – I really don’t give a rat’s arse. I am just over all warminista crapola – and as the warminista crapola sinks slowly in the west as temperatures continue not to rise – well, you are a laughing stock and it will only get worse. You want to defend it? But don’t expect that I will accommodate your repetitive nonsense.

        Oh but it will get warmer eventually? Because it must? Whatever you reckon – Freddo

      • I can’t be bothered to have a serious conversation with you

        Robert – Don’t think I’m not grateful.

      • Chief Hydrologist

        But if you want to have a non-serious conversation – pilgrim – then I’m your cowboy.

        Because if you remove the ENSO component in the period it did warm – 1976 to 1998 – you’re left with 0.1 degree C/decade. That’s 50% right there. Then if either of these major NASA satellite products is anywhere near correct – your whole narrative is blown to smithereens. I think that’s north of Dapto. If we add in CERES – the whole thing is just an absurd miscalculation which certain fools, who shall remain Freddo, want to perpetuate because – let’s face it – they are the warminista equivalent of space cadets waiting for the space ship to arrive. Don’t drink the kool-aid Freddo.

        Then there is the 0.2 degree rise/decade in the early 21st century. How’s that working out for you Freddo? It is an impossible joke. Are we expected to take that seriously? There is an underlying negative warming (that’s cooling for the mathematically challenged) in the IR to 2000 – and you expect what? What do you expect over the next 8 years as both the Sun and the Pacific cool?

        You got that wrong but somewhere over the rainbow – blue birds sing and the Earth will warm? You are a joke. You are Saturday night on the Ed Sullivan show. You are the whole bunch of clowns in the little red car.

      • Because if you remove the ENSO component in the period it did warm – 1976 to 1998 – you’re left with 0.1 degree C/decade.

        Australia or Luxembourg, Robert?

        Going by HADCRUT3VGL, globally the temperature rose 0.77 C between 1976 and 1998, of which 0.335 C was ENSO, leaving 0.435 C.

        How on earth would you be able to get remotely near 0.1 C, Robert? ENSO would have to have been 0.67 C in that period, which would have been insane. It never varies that much.
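The arithmetic behind that objection can be checked in a few lines (the 0.77 C total rise and 0.335 C ENSO contribution are the commenter's own HADCRUT3VGL figures, reproduced here only to verify the subtraction, not independently checked numbers):

```python
# Figures quoted in the comment above (HADCRUT3VGL, 1976-1998);
# these are the commenter's numbers, not verified data.
total_rise = 0.77   # total temperature rise, deg C
enso_part = 0.335   # portion attributed to ENSO, deg C

residual = total_rise - enso_part
print(round(residual, 3))     # 0.435 deg C of non-ENSO warming

# For the residual to be only 0.1 C, ENSO would have had to contribute:
enso_needed = total_rise - 0.1
print(round(enso_needed, 2))  # 0.67 deg C
```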

      • You got that wrong but somewhere over the rainbow – blue birds sing and the Earth will warm? You are a joke. You are Saturday night on the Ed Sullivan show. You are the whole bunch of clowns in the little red car.

        Damn you, Robert, you just crashed the theorem prover I submit all these arguments to. Now I’ll have to check your reasoning by hand. So far changing Saturday to Thursday and red to green has had no impact whatsoever on my theorem prover’s soundness metric. Please simplify your reasoning so that it’s easier to check.

      • Chief Hydrologist

        Vaughan old buddy – I’m feeling benign tonight – so I won’t just haul off and call you an idiot. Here’s one I prepared earlier – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ensosubtractedfromtemperaturetrend.gif

        Removing ENSO I said – sheesh.

        Kyle Swanson’s idea is that ya remove the 1976/77 and 1997/98 ENSO events – these are ENSO dragon-kings or noisy bifurcation at climate shifts – extreme events in other words. What ya’all are left with is WHAT HE POSITS IS THE TRUE RESIDUAL WARMING SIGNAL. I don’t believe him – I think it was caused by decadal cloud change but we will let that go for the sake of my warm inner glow.

        The total change in those years is something like 0.4 degrees C.

        Seriously – if you can’t get 1 little thing right. Sheesh.

      • Chief,
        I seem to remember reprimanding you previously on overly flowery language but recently I’ve noticed that you’ve changed. You’re keen to display your ocker credentials, the phrase “rat’s arse” seems to be part of your current lexicon, and you’re a lot more grumpy. You’ve even managed to fall out with Fred who is a model of decorum on this blog.
        What’s up? Have these “warministas” (or should that be “warmistas”?) been rattling your cage a bit too loudly recently?

      • Chief Hydrologist

        TT,

        Freddo is a model of a passive/aggressive space cadet. Of course – it makes sense that a know-nothing troll like you emerges like a louse from the woodwork to make unsurprising observations on nothing much in particular.

  49. Chief Hydrologist

    @ oneuniverse

    ‘Two potentially important papers by Wielicki et al. (1) and Chen et al. (2) dealt with aspects of how clouds and radiation vary and change, and whether climate models simulate the changes correctly. There is ample prior evidence suggesting that models have difficulties in correctly simulating clouds, and clouds are regarded as the biggest source of uncertainty in reports by the Intergovernmental Panel on Climate Change (IPCC) (3). However, an alternative interpretation of the disagreements shown between observations and models is that the analyses of the observations may be flawed.’

    The data certainly was and is flawed – the 2006 Wong correction for ERBS and the 2007(?) ISCCP-FD data series are more accurate and agree. There is an important result for 1998/2001 that is likewise well supported by Project Earthshine data. CERES shows dominant SW variability.

    Any paper pre 2006 is based on flawed data.

  50. Chief Hydrologist

    Stochastic analysis in hydrology is far from new. The problem with this approach emerges when there are multiple strange attractors for hydrological regimes. Hydrological cycles are well known in many parts of the world and much of this emerges from the Pacific Ocean with variability on interannual to millennial scales.

    Here for instance is an 11,000 year ENSO proxy – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif – in it can be seen the drying of the Sahel 5,000 years ago and the demise of the Minoan civilisation 3,500 years ago.

    There is no long term average – there is no interannual average – there is no multidecadal average – there is no centennial or millennial average that makes any sense at all. One can’t average La Nina and El Nino – a bifurcated phase space in their own complex system – and expect any meaningful information to emerge – except that one state is likely to be wetter than the other in many places across the globe. Stochastics in rainfall has a limited place where the data from shifting regimes can be stratified. The rainfall regime emerges from the properties of the volume of phase space occupied by the system at any period.

    Regimes change abruptly and unpredictably as climate shifts from one volume of phase space to another. If this is not understood – nothing is understood about hydrology.

    Robert I Ellison
    Chief Hydrologist

  51. Why are the climatic oscillations (ENSO, PDO, AMO) considered “internal unforced variabilities”? What is the evidence?

    Isn’t there evidence of correlation between solar variability and these oscillations?

  52. The concept of humanity’s arrogance only makes sense if a God exists, and furthermore in a theistic rather than a deistic understanding of the term.

    If the Heavens and the Earth, including all the fossil fuel deposits, really have been created for our benefit then surely we don’t have to worry about CO2 emissions. Do we?

    • tempterrain

      We had the discussion several years ago as to whether we could only exist in an atmospheric soup consisting of precise proportions of such gases as CO2 (280 ppm).

      If we can’t stray outside that precise mix without dire consequences is it because we were never intended as a species to develop beyond the need for the Adam and Eve type existence within a pristine state of the earth?
      tonyb

        The implication of the Venus narrative is that nature operates on a continuum and climates fall on that continuum. Raising CO2 levels on earth puts us on that continuum. That’s what Carl Sagan was explaining all those years ago.

    • If the Heavens and the Earth, including all the fossil fuel deposits, really have been created for our benefit then surely we don’t have to worry about CO2 emissions. Do we?

      One fanciful objection a century ago to the use of petroleum-based fuels was that God had put the oil there to destroy the world when the time came, and that we therefore shouldn’t touch it.

      What that argument did not include was how God planned to extract and ignite it. Just as natural selection seems in retrospect a more intelligent approach to designing life than making Adam from clay and Eve from a rib, albeit much slower, perhaps radiative forcing is a more intelligent cooking method for using carbon to cook the world, albeit much slower than extracting and igniting the world’s fossil-based fuels in one apocalyptic conflagration.

      Let me remove my tongue from my cheek for the next nine paragraphs in order to put this flight of fancy on a quantitatively rigorous basis.

      Just as primates in general and humans in particular bear witness to the efficacy of natural selection (however unlikely that mechanism seemed to many when first written up by Darwin), so does Venus bear witness to the efficacy of a radiative forcing oven compared to a conventional gas-fired one (however unlikely many today find radiative forcing as an effective insulator). Whereas the ignition point of paper is 451 F, the surface of Venus is around 870 F (737 K). Moreover it would remain at that temperature were Venus to be relocated to Earth’s orbit, since the temperature is regulated primarily by lapse rate, with insolation at the surface adding only around 20 W/m2 to the ambient 16000 W/m2 attributable to lapse rate.

      Whereas alkaline metals like sodium and potassium and halogens like chlorine and bromine unite stably like man and wife to form the salt of the earth (if not pillars of the community like Lot’s wife), materials in between like carbon, silicon, germanium, and gallium arsenide lend themselves to bistability, even at room temperature. Carbon is an effective basis for slow organic computers while silicon, germanium, and gallium arsenide have proved more suitable for fast inorganic computers.

      But carbon has a second more basic mode of bistability illustrated by its occurrence on Earth and Venus. At Earth temperatures the various bonds carbon can form with other elements like hydrogen and oxygen give it the quality of a rechargeable battery. CO2 is the carbon atom’s completely discharged form, but it can be recharged slowly by geological processes and much more quickly by photosynthesis (thank god for plants!) by forming a great variety of hydrocarbons, starting with methane which fuels the humble gas stove and on up through the zoo of hydrocarbons of various degrees of volatility on which life is based. However recharged, most of Earth’s carbon remains locked up in Earth’s crust in a stable, relatively charged, and cold state.

      At high temperatures in the presence of oxygen the carbon battery quickly discharges to CO2, and cannot be recharged until either cooled off or violently parted from its beloved pair of oxygen atoms. (Even when run down carbon is a bigamist; if forced into monogamy as CO it poisons life.) Every carbon atom still below the surface of Venus, if any, is even hotter than the ~740 K surface temperature, though by an increasing amount per km of depth that is likely about half Earth’s rate of 20 K/km. The mass of carbon in Venus’s atmosphere is around 130 petatonnes, which is close to our best estimate of the total carbon on Earth, suggesting that essentially all of Venus’s carbon may be in its atmosphere.

      Venus and Earth illustrate the two stable states of carbon, hot and cold. When cold, carbon retreats to the rocks and does little to heat the planet. When hot, it turns to CO2, forming an insulating blanket that retains what little heat creeps in from the Sun. This is a mere 20 W/m2 at Venus’s surface, feeble by comparison with Earth’s 1000 W/m2 at the surface, even though at the top of their respective atmospheres (TOA) Earth receives 1366 W/m2 and Venus 2577. Earth absorbs 70% and Venus 10%; furthermore most of the latter’s 10% is absorbed in clouds at 60 km altitude while much of Earth’s absorbed 70% is directly absorbed at the surface.
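The absorbed-flux comparison in the paragraph above works out as follows. The TOA fluxes and absorbed fractions are the commenter's round numbers; the division by 4, converting the intercepted disc of sunlight into a sphere-averaged flux, is a standard step added here for the check:

```python
# Quick check of the absorbed-flux comparison above (commenter's round numbers).
earth_toa, venus_toa = 1366.0, 2577.0        # W/m2 at top of atmosphere
earth_abs_frac, venus_abs_frac = 0.70, 0.10  # absorbed fractions

# Divide by 4 to spread the intercepted disc of sunlight over the full sphere.
earth_absorbed = earth_toa * earth_abs_frac / 4
venus_absorbed = venus_toa * venus_abs_frac / 4
print(round(earth_absorbed), round(venus_absorbed))
# Despite being closer to the Sun, Venus absorbs several times less per m2.
```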

      If Venus were shielded from the Sun’s heat with a large parasol it would gradually cool off until it switched to the cool state and the carbon returned to the ground. This would be an extremely slow process however, in part because CO2 is such a wonderful insulator, especially when 100 km thick, and in part because the emissivity at Venus’s TOA is around 0.1, permitting very little heat to leak to space.

      Venus’s pathologically low TOA emissivity contributes only to insulation and not directly to temperature, which is maintained at Venus’s surface primarily by two mechanisms, the opacity of CO2 to infrared and the lapse rate.

      Numerically both these quantities are the same on Venus as on Earth. One difference comes from CO2 comprising nearly all of Venus’s atmosphere. This traps the intense ocean of photons in the same way that our oceans trap water molecules, whose mean free path is too short to permit rapid evaporation. Thought of as an ocean, our atmosphere is comparable to the top 3 m of our oceans. Venus’s atmosphere is 100x more massive and more closely resembles the top 300 m of our oceans. Photons evaporate to space very easily from our atmosphere, but barely at all from Venus’s, both because of their much shorter mean free path and the much greater depth of Venus’s atmosphere.

      The other difference is that the altitude at which Venus’s atmosphere is at a temperature equal to Earth’s occurs around 45 km higher, thanks to its much greater thickness. The lapse rate on both Earth and Venus is in the neighborhood of 10 K/km, making Venus’s surface temperature some 450 degrees hotter than ours.
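As a sanity check, the lapse-rate arithmetic in the last two paragraphs is a single multiplication (both inputs are the commenter's round figures, not precise measurements):

```python
# Rough sketch of the lapse-rate argument above; both values are the
# commenter's round numbers, not authoritative measurements.
lapse_rate = 10.0        # K per km, roughly similar on Earth and Venus
altitude_offset = 45.0   # km higher on Venus where T matches Earth's surface T

extra_warming = lapse_rate * altitude_offset
print(extra_warming)     # some 450 K hotter at Venus's surface
```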

      Returning my tongue to my cheek, if God is working outwards, we may be next after Venus. If destroying the Earth was part of God’s plan, then natural selection would be a simple way to initiate this, relying on the greed of the end product of natural selection to run the whole of Earth’s carbon battery into the ground, more precisely into the atmosphere, thereby putting Earth into the same bistable state as Venus. q_0 = COLD, q_F = HOT.

      Fear can slow down greed, but it is a good question just how long it can keep it in check. However by the time the question becomes relevant to our present CO2 situation I expect it will have become moot as we’ll have been weaned onto inertial confinement fusion power (tongue definitely not in cheek there). The bulk of Earth’s underground carbon reserves will be forgotten about and left untouched.

      Unless some genius comes up with some brand new use for massively many carbon atoms, that is. Natural selection is like the spell cast by the Sorcerer’s Apprentice on the broomstick permitting it to reproduce, which only the Sorcerer himself can turn off.

      • Markus wrote:

        “Correct hotterritory, we don’t have to WORRY..”

        And of course Markus completely missed the implication of what Vaughan brilliantly laid out.

        The implication of Vaughan’s Venus narrative is that nature operates on a continuum and climates fall on that continuum. Raising CO2 levels on earth puts us directly on that continuum. One can say that rising levels will somehow start a cooling trend but that is not in keeping with the least action principle of physics. We have several points on the CO2/GHG continuum — the planets Venus, Earth, and Mars — and unless there is some low energy state corresponding to a local valley in that continuum, nudging CO2 levels up will push the Earth toward Venus. And that’s what Carl Sagan was explaining all those years ago when he studied Venus and Arrhenius’ GHG theory for his PhD thesis, and what the rest of the climate scientists ran with.

      “…If destroying the Earth was part of God’s plan, then natural selection would be a simple way to initiate this, relying on the greed of the end product of natural selection to run the whole of Earth’s carbon battery into the ground, more precisely into the atmosphere, thereby putting Earth into the same bistable state as Venus.” says Vaughan. But there is one problem with using such an analogy with Venus: it did not get its atmosphere the same way we did. There is no plate tectonics on Venus and that changes the game. Radioactive heat generated in the interior of the Earth is vented by plate boundary volcanism. On Venus it has nowhere to go and begins to undermine the crust. Eventually the crust weakens, cracks, its pieces sink into the interior, and are replaced by an entirely new crust. This has been happening on Venus at approximately 300 to 500 million year intervals, to judge by impact crater density. These periodic renewals created its atmosphere by out-gassing. There never was an ocean or an earth-like atmosphere on Venus at any time that could have somehow evolved into its present day gaseous envelope. Remind Hansen of that when he brings up his Venus analogy again, as he periodically has been doing. Someone who was an astronomer on the Pioneer Venus probe should have learnt this much by now.

    • Markus Fitzhenry

      Correct hotterritory, we don’t have to WORRY..

  53. TonyB,

    Just remind me. What is the name of the logical fallacy that consists of deliberately misrepresenting an opponent’s position?
    You know, like when I imply we need to be concerned about CO2 emissions, because the CO2 concentration is now 390ppmv and will rise to something well over 500ppmv if nothing is done to prevent it, and you interpret that as meaning we need to stick to 280ppmv?

    • TonyB, you’re supposed to say “straw man” here, aren’t you?

      Or didn’t you know that?

      • Vaughan

        Where tempterrain is concerned, I have a little device that automatically tweets a specially written ‘Strawman’ song as soon as the first three letters of his name appear here.
        tonyb

  54. Maurice Ewing and William Donn had a valid climate theory over 50 years ago. After that, climate theory got on the wrong track, and peer review and consensus kept the path true to the wrong course. Now that we are watching from space and can see the correlation between low Arctic sea ice, high snow-extent events, and cooling events, it will not be much longer until Ewing and Donn are proven correct.

  55. tempterrain

    Read it again. I said nothing of the sort. I was referring to an earlier discussion in which we philosophically discussed if we should leave the atmosphere alone or would there be dire consequences, which fitted in with your musings immediately above.

    By the way, what DO you believe a safe limit to be and, since you mentioned it first, wouldn’t it be better to stick to the supposed pre-industrial levels?
    tonyb.

    • tonyb, there are no safe limits, only safe rates of change. Driving a car at 200 mph is only dangerous if you run into something.

  56. Chief Hydrologist

    Pekka,

    You said upthread that stochastic analysis is not used to predict future conditions. But this is the purpose of stochastic analysis – especially in hydrology. In one definition a stochastic process is one for which a probability distribution can be devised. Thus in hydrology a concept of average return intervals (the inverse of the probability of occurrence) for floods can be used to determine the risk of future flooding to specific levels.

    Flooding is of course not a stochastic process – there is cause and effect that can be simulated in models for a week or so in advance. The stochastic analysis is a matter of curve fitting to data assuming that the flooding events are independent. But fitting a curve is not the same as understanding the processes, and it is much more likely than not to be misleading if wrong assumptions about the system are made.

    Robert I Ellison
    Chief Hydrologist
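The return-interval concept in the comment above can be made concrete. If a flood level has an average return interval of T years, its annual exceedance probability is p = 1/T, and under the independence assumption the comment criticizes, the chance of at least one exceedance in n years is 1 − (1 − p)^n. A minimal sketch:

```python
def exceedance_risk(return_period_years, horizon_years):
    """Probability of at least one exceedance of a flood level with the
    given average return interval, over a planning horizon, assuming
    independent years (the curve-fitting assumption criticized above)."""
    p = 1.0 / return_period_years          # annual exceedance probability
    return 1.0 - (1.0 - p) ** horizon_years

# A "100-year flood" has roughly a 63% chance of occurring at least
# once in any given 100-year window, under the independence assumption.
print(round(exceedance_risk(100, 100), 2))
```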

    • Rob,

      My statement was perhaps not quite complete as there are models that contain stochasticity and have some predictive power. That predictive power does then come from the other parts of the model.

      Where stochastic properties are used extensively is in foreseeing the likely range of outcomes. Hurst-type stochastic models tell us specifically that autocorrelation is important and can lead, in hydrology, to more severe situations. They tell us that such situations are more common than other models might predict, but stochastics cannot tell us when they will occur.

      Stochastic models are great for risk management, but not for forecasting.
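Pekka's point about autocorrelation producing more severe situations can be illustrated with a toy simulation. The sketch below uses an AR(1) process as a crude stand-in for Hurst-type persistence (Hurst phenomena are really long-memory processes such as fractional Gaussian noise, so this if anything understates the effect); the persistent series shows much longer unbroken runs below the mean, i.e. longer "droughts", than white noise of the same variance.

```python
import random

def simulate(n, phi, seed=0):
    """AR(1) series x[t] = phi*x[t-1] + e[t]; phi=0 gives white noise.
    Innovations are scaled so both series have unit stationary variance."""
    rng = random.Random(seed)
    sigma_e = (1.0 - phi * phi) ** 0.5
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma_e)
        series.append(x)
    return series

def longest_run_below(series, level=0.0):
    """Length of the longest unbroken 'drought' below the given level."""
    best = run = 0
    for v in series:
        run = run + 1 if v < level else 0
        best = max(best, run)
    return best

white = simulate(10_000, phi=0.0)
persistent = simulate(10_000, phi=0.9)
print(longest_run_below(white), longest_run_below(persistent))
# The persistent series typically shows far longer dry spells.
```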

      • “My statement was perhaps not quite complete as there are models that contain stochasticity and have some predictive power.”

        Whew, I am glad you modified that statement! The entire semiconductor industry is based on a model of stochastic flow of carriers precariously triggered by a gate. By implication these transistors drive computers which require absolute predictive guarantees.

        Of course someone could say this is more of an outcome of the law of large numbers, which came up earlier in the ergodicity thread. But that is not to say that we can’t say the same thing about aspects of the climate system. For example, wind energy is deemed to be unpredictable, but if you look at the accumulated wind speeds collected instantaneously throughout the world, the total kinetic energy may turn out to be very close to a predictable constant.

        Getting meta for a moment, but relying on a single wind turbine or regional wind turbine for power is like relying on one or several electrons to drive a transistor. So with stochasticity, we get the predictive power of the collective.

        And I agree they are excellent tools for risk management as well.

      • Chief Hydrologist

        Pekka,

        Well that is what I said. But I have been rethinking the fundamental premise of the article above and the random walk on water article. There is certainly a problem with the characterisation of climate models. We are certainly not anywhere near a point where the resolution of processes or the accuracy of the data is sufficient to constrain inherent deterministic chaos in the Navier-Stokes PDEs. There seems to be a problem as well on the level of natural philosophy. Roulette seems to me to be relentlessly an exercise in classical mechanics – it is all mass, force and vector. It is susceptible to statistical analysis (probabilities that are printed on the table with the returns – plus a slight edge to the house) – so it is stochastic in that sense. But in principle if we knew all of the forces and vectors involved – we could predict the outcome with certainty. Still less is it deterministically chaotic – as each parameter in a perfect gambler’s world is perfectly constrained.

        In climate when we think deterministic chaos it is a matter of sub-systems influencing other sub-systems in a chain of causality. A phrase that caught my attention was of tremendous energies cascading through powerful mechanisms. UV influences sea level pressure at the poles pushing storms, winds and ocean currents deeper into lower latitudes, influencing snow and ice in the Labrador Sea and thermohaline circulation, and influencing upwelling in the eastern Pacific with biological, wind and cloud feedbacks. This is the single spatio-temporal chaotic Earth system that abruptly shifts in a non-linear fashion to more or less long lived and stable climate regimes.

        The Earth system is not random at all but deterministic to the quantum level – as Tomas has said. Like roulette and climate models – it is simply not well constrained and so climate shifts remain difficult to predict. Simply assuming randomness in climate rather than stochasticity in a probabilistic sense seems wrong. Climate is not intrinsically impossible to predict; rather, it is merely impossibly difficult.

        Robert I Ellison
        Chief Hydrologist

      • Rob,

        My latest comment was still incomplete as it remained contradictory with what I have written before, in the sense that I have also emphasized the possible role of stochasticity in making dynamics more, not less, predictable in some cases where the deterministic equations are chaotic. This is certainly a difficult issue that I cannot describe in any detail and whose importance I don’t know. (Whether the claim is true at all may depend on how the problem is framed, as it is not clear how one should define the deterministic dynamics to compare with certain stochastic dynamics, or vice versa. Just adjusting the strength of the stochastic term may not be the best way.)

        Where I most certainly disagree is the following:

        The Earth system is not random at all but deterministic to the quantum level. Like roulette and climate models – it is simply not well constrained and so climate shifts remain difficult to predict.

        We may of course imagine the Quantum Mechanical wave function of the whole universe and say that everything is determined by that. That’s a philosophical view comparable with religion. I don’t buy it, but I don’t want to propose any alternative metaphysics for that. Rather I leave the fundamental metaphysics unspecified.

        Whatever we think about the determinism on the level of the whole universe, a subsystem of that like the Earth is stochastic, if we don’t claim to know also all its interactions with the rest of the universe. The interactions with the sun may appear rather predictable, but we know that they include many unpredictable details that have a discernible influence even assuming that the lowest estimates of their influence are accepted. They are stochastic input to the Earth system and they have influenced the Earth as long as the Earth has existed. The influence of that stochastic input has spread to all parts of the Earth system and destroys the determinism of the system. Any smaller subsystem of Earth is continuously influenced directly or indirectly by that stochasticity. The Earth system is stochastic both at the fundamental level and at the practical level.

        The stochasticity is certainly very important, if we try to build detailed models that start from micro scale and extend to global scale phenomena based on first principles. I don’t believe that any approach that does not involve stochasticity can be made even remotely realistic in such an ambitious exercise. On some levels of such an analysis stochasticity is one of the dominating factors. If it’s not included explicitly it must be included through some equivalent parameterizations.

        Simply assuming randomness in climate rather than stochasticity in a probabilistic sense seems wrong.

        I don’t understand the meaning of the above sentence.

        Climate is not intrinsically impossible to predict; rather, it is merely impossibly difficult.

        Or perhaps it is on some level impossible, but on a lesser though still very significant level it’s not impossible at all. It’s not excluded that rather good predictions are achievable in the near future, measuring nearness in several years or perhaps a decade rather than single years. We really don’t know the answer to that.

      • Chief Hydrologist

        Pekka,

        You misunderstand me entirely. The quantum mechanical wave – interesting as it is – merely propagates wave/particle uncertainty through time. But no – I was very definitely not thinking on a sub-molecular scale. For all I know the universe of wave/particle duality and quantum entanglement might be entirely random. I did say down to the quantum level – and classical mechanics clearly has no application there. Perhaps I should have specifically excluded quantum effects.

        As far as climate models go – stochastic parameterisation may indeed be an effective way to constrain the dynamic equations. It is a simple concept – the smaller the range of plausible values of initial conditions and the better the estimate of boundary conditions, the less the scope for non-linear divergence in calculations. That is trivially true – and we are nowhere near defining inputs with sufficient accuracy to eliminate sensitive dependence and structural instability in models. The dynamic equations are not constrained sufficiently well to preclude large variations in solutions from necessary but non-unique choices in parameter values. It is simply the way it is at this time.

        To claim that the Sun-Earth system is deterministic in principle is very different to claiming that we know all the details of the interactions down to – but excluding – the quantum level. The distinction is clear on the level of hydrology. There is clearly cause and effect – complex as this is. The cycle starts with evaporation and the moisture is transported in the atmosphere to be precipitated. We clearly can’t follow every molecule – but everything must have a physical cause and effect. There is no physical principle that suggests that raindrops move without a physical cause in the atmosphere. Rainfall frequency distributions may be obtained by statistical methods – but that is not the same as actual physical processes in the atmosphere being random. There is a distinction to be made between statistical stochasticism and real world determinism without which understanding is impossible.

        ‘Climate is not intrinsically impossible to predict; rather, it is merely impossibly difficult’? I was merely playing with words here. It is what is called a conceit in English poetry. Prediction is extraordinarily difficult, both from deterministic chaos in the climate equations and from the extraordinarily complex dynamics of the Earth system. We have not succeeded at climate prediction to date.

        We may need to disagree on whether random solar effects – if these indeed exist – destroy determinism on Earth. Even if there are random solar effects – seemingly unlikely as that is – the Sun is just the prime causality in the Earth system.

        Robert I Ellison
        Chief Hydrologist

      • “There is no physical principle that suggests that raindrops move without a physical cause in the atmosphere. Rainfall frequency distributions may be obtained by statistical methods – but that is not the same as actual physical processes in the atmosphere being random. “

        That’s funny because that’s exactly what I can do in a few lines of argument. The energy contained in a growing cloud follows a Gibbs distribution, aka maximum entropy, and will release a torrent of rain proportional to that energy. Holding that much moisture aloft is a potential energy argument. So when that potential energy is converted to kinetic energy of rainfall, that turns directly into the rainfall rate distribution we see. And that is the distribution that matches the empirical data with such parsimony as to be totally remarkable.

        Occam’s razor suggests that a one-parameter fit (the parameter also happens to be the mean rainfall rate) is both the descriptive measure (statistical) and the explanation (a Gibbs probability measure) for how nature operates. To improve on this stochastic model you will have to provide an alternative model with a better information-criterion score. In information-theory terms, my rainfall model has a very low perplexity.

        That’s why chief is completely out of his element when he spouts nonsense like “Simply assuming randomness in climate rather than stochasticity in a probabilistic sense seems wrong.” My suggestion is to treat everything that Chief says with a grain of salt. He’s a perplex player with the FUD he spouts.

      • Chief Hydrologist

        Webby – you as usual make no sense at all – you are an insane egomaniac with delusions. You obsessively fit curves to data – any data at all – with no rhyme or reason and claim a fundamental insight into nature. You have taken a probability distribution of rainfall at a specific location from a graph and fitted a curve that is of no interest at all, because it fails to reproduce the most interesting hydrological behaviour, being extreme events. It is absurd in the extreme – but it is just a curve-fitting exercise.

        We have water evaporating from surfaces, rising into the atmosphere convectively, cooling and condensing into a droplet and falling from the sky typically within a few days. It is driven by insolation – evaporation equals precipitation – it can be a drizzle or a downpour – there is great spatial and temporal variability. We are looking for cause and effect, determinism, rather than a simple frequency distribution, so that we may understand the physical processes that are in play as they must be.
        One may indeed fit a truncated normal distribution to rainfall data (aka Gibbs and max. ent. of a non-equilibrium system) – but it is just statistics. It doesn’t explain why rainfall falls in a place or why there are droughts and floods.
        ‘Occam’s razor, also known as Ockham’s razor, and sometimes expressed in Latin as lex parsimoniae (the law of parsimony, economy or succinctness), is a principle that generally recommends that, from among competing hypotheses, selecting the one that makes the fewest new assumptions usually provides the correct one, and that the simplest explanation will be the most plausible until evidence is presented to prove it false.’


      • Chief Hydrologist

        whoops – cutting and pasting without checking.

        Webby

        You obsessively fit curves to data – any data at all – with no rhyme or reason and with astonishing egotism claim a fundamental insight into nature. It is all about you and what you have done. You have taken a probability distribution of rainfall at a specific location from a graph and fitted a curve that is of no interest at all, because it fails to reproduce the most interesting hydrological behaviour, being extreme events. It is absurd in the extreme – but it is just a curve-fitting exercise. You are incapable of discussing anything rationally at all – just your particular monomania. It is disturbing and irrational behaviour.

        To correct your specific misapprehension (as if that would do any good at all) – what we have is water evaporating from surfaces, rising into the atmosphere convectively, cooling and condensing into a droplet and falling from the sky typically within a few days. It is driven by insolation – evaporation equals precipitation – it can be a drizzle or a downpour – there is great spatial and temporal variability. We are looking for cause and effect, determinism, rather than a simple frequency distribution of rainfall at a point, so that we may understand the physical processes that are in play as they must be. A pattern of warm water here associated with rainfall there. A sea level pressure there causing storms to spiral off the polar vortices. A jet stream there channeling moist air across land here. I make no apologies for poetry – the hydrological cycle is the essence of life, the realm of the Dragon Kings, beauty and awesome power combined.

        One may indeed fit a truncated normal distribution to rainfall data (aka Gibbs and max. ent. of a non-equilibrium system) – but it is just statistics. It doesn’t explain why rainfall falls in one place and not another or why there are droughts at one time and floods another.

        The truly parsimonious explanation is that there is physics driving the phenomenon and there is not just a magical macro-state that can be described statistically.

        Robert I Ellison
        Chief Hydrologist

  57. Chief @ 1.10 am.
    Chief, that was fathoms deep, I try to understand.

    Always interaction of cycles, nature’s cycles acting as gyres within gyres. Diurnal cycles of day followed by night, seasonal cycles of calm and storm, the behaviour of clouds, ocean currents driven by surface winds and the earth’s spin, create the changing ‘now.’ And in the ocean depths, the mysterious conveyor-belt currents flow like great arteries, upwelling and downwelling to imperatives of salinity and heat.

    How do models capture this? Too much for a simple cowgirl, well, she thinks she’s a cowgirl, to understand :-(

    • Soulmate, CH? Or Geographically Impossible? ;) Odds are not great, most people live north of the equator.

      • Chief Hydrologist

        Vaughan – old buddy – got the hang of that 3rd law of motion yet? Despite my best efforts you are not on top of the becoming-a-better-you project. It’s one step forward and three back with you.

        You must realise that it’s not possible for you to sully a beautiful meeting of souls between a cowgirl and a cowboy – but the attempt says volumes about you. I am very disappointed. We will have to work harder.

      • I’m just the messenger, CH. If you find yourself meeting more cowgirls by shooting more messengers, more strength to you. That’s how it works in the movies. Just saying.

  58. Chief Hydrologist

    Hi Beth,

    It is just my way of making a poem of the world. Someone once said that truth is beauty and beauty truth – and I took it to my simple cowboy’s heart.

    I loved your poem of dynamism – but there is also a stillness at the centre of a storm.

    ‘In physics, a standing wave – also known as a stationary wave – is a wave that remains in a constant position.’

    So climate states persist in the Earth system as standing ‘waves’ in the atmosphere and oceans – and then they shift. Inter alia – ENSO, the PDO, SAM and NAM. The ENSO pattern is illustrative.
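    The quoted definition follows from a one-line identity: two equal counter-propagating waves sum to a fixed spatial pattern whose amplitude merely oscillates in time. A quick numerical check (illustrative only):

```python
import numpy as np

# Two counter-propagating travelling waves superpose into a standing wave:
#   sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt)
# so the spatial pattern sin(kx) stays put while its amplitude oscillates.
k, w = 2.0, 3.0
x = np.linspace(0.0, 2.0 * np.pi, 200)

for t in (0.0, 0.7, 1.9):
    travelling_sum = np.sin(k * x - w * t) + np.sin(k * x + w * t)
    standing = 2.0 * np.sin(k * x) * np.cos(w * t)
    assert np.allclose(travelling_sum, standing)

# The nodes (where sin(kx) = 0) never move - the "constant position"
# in the quoted definition.
print("standing-wave identity verified")
```

    The atmospheric and oceanic ‘standing waves’ mentioned here are an analogy, not literal solutions of this equation.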

    Take care and I will look for you on the Santa Fe Trail.

    Robert I Ellison
    Chief Hydrologist

    PS – I became Chief Hydrologist because of Springfield’s Chief Hydraulical and Hydrological Engineer, Cecil (he spent four years in clown school – I’ll thank you not to refer to Princeton like that) Terwilliger. Cecil opined that this was a sacred vocation in some cultures. I hope you don’t think less of me.

  59. Thank you Chief, and I do keep an eye on ENSO.
    Beth.

  60. Forgot to add, yes you do have a sacred vocation. Everyone can respect that.
    ( Even you, Vaughan.)

  61. Vaughan said, “The 4C boundary is by no means confined to the Antarctic, cd. At a kilometer or deeper you can find that temperature just about anywhere in the world.”

    I was talking about the ocean/atmosphere conductive impact at the 4C boundary layer. The 4C layer is the thermohaline current’s primary source.
    If the rate of deep-ocean warming from the upper-ocean mixing layer is greater than the replenishment of the 4C layer and below, the oceans are warming and there is more thermal inertia “in the pipeline”.

    Since the Antarctic is still growing, that is an indication that the 4C layer is being replenished at a greater rate there than from the Arctic, which should be offsetting some portion of the deep-ocean warming.

    The change in the average depth of the 4C layer in the tropics should be an indication of the rate of warming due to the increase in the surface mixing layer heat transfer.

    Which one is winning?

  62. Chief Hydrologist

    ‘That’s why chief is completely out of his element when he spouts nonsense like “Simply assuming randomness in climate rather than stochasticity in a probabilistic sense seems wrong.” My suggestion is to treat everything that Chief says with a grain of salt. He’s a perplex player with the FUD he spouts.’

    I am thoroughly bored with Webby. He explains climate with a few graphs where he fits a curve to a curve. There is no uncertainty, no complexity and no reality. I think he is quite literally insane and is cyberstalking me. As of now I declare that I am giving up trying to decipher the pretentious drivel that passes for thought processes. I know – I will just not read anything by Webby ever again.

    Determinism applies in hydrology – in contradiction to the central assumption of this post. Where water evaporates there is cause and effect – energy from the sun causes water to change from a liquid to a gas phase. It is not random but governed by physics. Stochasticity is a statistical methodology of applying probabilities to data, whereas evaporation, convection and precipitation are physical mechanisms subject to cause and effect.

    It seems simple enough.

  63. I don’t much like the “indulgence” analogy. It doesn’t ring historically true. The medieval Christians practiced sacramental confession, wherein sins were forgiven and a post-forgiveness penance was given, sometimes lasting years. Indulgences remitted the penance associated with confession so that the penitent could return to communion; it was not supposed to remit the sin itself. I like the sentiment, but an analogy should be accurate, and it isn’t accurate to state that indulgences in the middle ages were a way to buy one’s way out of hell.

    http://books.google.com/books?id=n2WnNX13tsYC&pg=PA94&lpg=PA94&dq=medieval+theology+of+indulgences&source=bl&ots=ofNbZ4pjX2&sig=Rd3m_s1y8HBfu3WmL4W3rPmDm7A&hl=en&sa=X&ei=fmVIT5XSFM-ugQfU9bWNDg&ved=0CEcQ6AEwBQ#v=onepage&q=medieval%20theology%20of%20indulgences&f=false

  64. May I interject a modest point?

    Equations do not create reality. They are simply one way we use to communicate something about reality – i.e., they are merely mental constructs made objective on paper or screens. In effect, all equations are wrong because they are only descriptions rather than the thing described, in the same way and for the same reason that all simulations give invalid results because they are not the thing being simulated. Only the thing or process itself can be exactly correct.

    However, some equations and simulations are more useful than others in that they allow us to predict the consequence of a given set of circumstances close enough for our limited purposes. The real question then becomes: is our limited purpose valid/moral/the right thing? We thereby leave the realm of math, technology, and engineering and enter into the realm of philosophy and ethics for which all of your equations and verbal confabulations about them are totally without value or application.