Climate model discussion thread

by Judith Curry

My perspective on climate models (uncertainty monster, DOE presentation, RS presentation) has been regarded as outside the ‘mainstream’.  Here are some new papers by leading climate modelers that provide new evidence and arguments bearing on the concerns I have been raising.

Tuning the climate of a global model 

Mauritsen, T., B. Stevens, E. Roeckner, T. Crueger, M. Esch, M. Giorgetta, H. Haak, J. H. Jungclaus, D. Klocke, D. Matei, U. Mikolajewicz, D. Notz, R. Pincus, H. Schmidt, and L. Tomassini

Abstract.  During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution. The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters. Here we provide insights into how climate model tuning is practically done in the case of closing the radiation balance and adjusting the global mean temperature for the Max Planck Institute Earth System Model (MPI-ESM). We demonstrate that considerable ambiguity exists in the choice of parameters, and present and compare three alternatively tuned, yet plausible configurations of the climate model. The impact of parameter tuning on climate sensitivity was less than anticipated.

Citation:  Journal of Advances in Modeling Earth Systems, 4, doi:10.1029/2012MS000154 [link]
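
A minimal sketch of what ‘closing the radiation balance by adjusting an uncertain parameter’ involves, using a zero-dimensional toy energy balance model. All parameter values and the cloud-albedo dependence here are hypothetical illustrations, not MPI-ESM’s:

```python
# Toy illustration (not MPI-ESM): tune one uncertain cloud parameter so a
# zero-dimensional energy balance model closes its top-of-atmosphere budget.
# All parameter values below are hypothetical.
S0 = 1361.0      # solar constant, W m^-2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def toa_imbalance(cloud_albedo_factor, T=288.0, emissivity=0.61):
    """Net TOA flux (W m^-2) for a one-layer grey model; planetary albedo
    depends on the tunable cloud parameter."""
    albedo = 0.25 + 0.1 * cloud_albedo_factor   # hypothetical dependence
    absorbed = S0 / 4.0 * (1.0 - albedo)
    emitted = emissivity * SIGMA * T**4
    return absorbed - emitted

# Bisection: find the parameter value that closes the budget at T = 288 K.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if toa_imbalance(mid) > 0:   # still absorbing too much -> brighten clouds
        lo = mid
    else:
        hi = mid
tuned = 0.5 * (lo + hi)
print(f"tuned parameter = {tuned:.3f}, residual = {toa_imbalance(tuned):.2e} W m^-2")
```

The point of the sketch is that many different parameter combinations can close the same budget, which is the ambiguity the paper documents.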

This paper is also discussed at the Blackboard.  Here is the key point from my perspective, as discussed by Lucia:

The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen. Yet, we were in the fortunate situation that the MPI-ESM-LR performed acceptably in this respect, and we did have good reasons to believe this would be the case in advance because the predecessor was capable of doing so. During the development of MPI-ESM-LR we worked under the perception that two of our tuning parameters had an influence on the climate sensitivity, namely the convective cloud entrainment rate and the convective cloud mass flux above the level of nonbuoyancy, so we decided to minimize changes relative to the previous model. The results presented here show that this perception was not correct as these parameters had only small impacts on the climate sensitivity of our model. Climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.

Lucia’s comment: This seems to be an admission that modelers have known early on that their models would be compared to 20th century data. So, early models were tuned to that. We are now in a situation where models can — mostly — match 20th century data. So, a good match in the hindcast for the historic surface temperatures is no longer a very good metric for determining which models are good or bad.

JC comment:  This supports my circular reasoning argument in the uncertainty monster paper, whereby tuning (implicit or explicit) to 20th century time series of global average temperature anomalies makes these models not useful for 20th century attribution studies.

Lucia also pulled this tidbit related to absolute temperatures and simulations of Arctic sea ice:

We usually focus on temperature anomalies, rather than the absolute temperature that the models produce, and for many purposes this is sufficient. There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K, and details such as the years of cooling following the volcanic eruptions. Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present. To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions. The models in CMIP3 were frequently criticized for not being able to capture the timing of the observed rapid Arctic sea-ice decline [e.g., Stroeve et al., 2007]. While unlikely the only reason, provided that sea ice melt occurs at a specific absolute temperature, this model ensemble behavior seems not too surprising.

JC comment: I first became aware of the problem two years ago, when Tim Palmer presented (at Fall AGU) a plot of absolute global temperatures simulated by the CMIP3 climate models for the 20th century.  The impact of this on the model thermodynamics (notably the sea ice) is profound; model tuning then works around this problem, so that n wrongs make a ‘right’, almost certainly torquing feedbacks in unfortunate ways.
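
The Clausius-Clapeyron nonlinearity mentioned in the excerpt above is easy to illustrate. A short sketch using the standard Magnus approximation for saturation vapour pressure (the 2 K bias and 15 C baseline are illustrative choices, not values from the paper):

```python
# Why an absolute cold bias matters for parameterized moist processes:
# saturation vapour pressure is exponential in temperature
# (Magnus form of the Clausius-Clapeyron relation).
import math

def e_sat(t_celsius):
    """Saturation vapour pressure in hPa (August-Roche-Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

truth = e_sat(15.0)     # a model near the observed global mean, ~15 C
biased = e_sat(13.0)    # the same model with a 2 K cold bias
print(f"e_sat drops by {100 * (1 - biased / truth):.0f}% for a 2 K cold bias")
```

So a model that is a couple of kelvin too cold in the absolute sense feeds its evaporation and precipitation schemes systematically too little water vapour, even if its anomalies look fine.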

Simulating regime structures in weather and climate prediction models

A. Dawson, T.N. Palmer, S. Corti

Abstract.  It is shown that a global atmospheric model with horizontal resolution typical of that used in operational numerical weather prediction is able to simulate non-Gaussian probability distributions associated with the climatology of quasi-persistent Euro-Atlantic weather regimes. The spatial patterns of these simulated regimes are remarkably accurate. By contrast, the same model, integrated at a resolution more typical of current climate models, shows no statistically significant evidence of such non-Gaussian regime structures, and the spatial structure of the corresponding clusters is not accurate. Hence, whilst studies typically show incremental improvements in first and second moments of climatological distributions of the large-scale flow with increasing model resolution, here a real step change in the higher-order moments is found. It is argued that these results have profound implications for the ability of high resolution limited-area models, forced by low resolution global models, to simulate reliably regional climate change signals.

Citation:  Geophysical Research Letters, 39, L21805, 2012, doi:10.1029/2012GL053284 [abstract]

Excerpts from the Conclusions:

Understanding gained from studies of low-dimensional dynamical systems suggests that the response to external forcing of a system with regimes is manifested primarily in changes to the frequency of occurrence of those regimes. This implies that a realistic simulation of regimes should be an important requirement from climate models. We have shown that a low resolution atmospheric model, with horizontal resolution typical of CMIP5 models, is not capable of simulating the statistically significant regimes seen in reanalysis, yet a higher resolution configuration of the same model simulates regimes realistically. This result suggests that current projections of regional climate change may be questionable.

This finding is also highly relevant to regional climate modelling studies where lower resolution global atmospheric models are often used as the driving model for high resolution regional models. If these lower resolution driving models do not have enough resolution to realistically simulate regimes, then the boundary conditions provided to the regional climate model could be systematically erroneous. It is therefore likely that the embedded regional model may represent an unrealistic realization of regional climate and variability.

The models studied here used observed SSTs for boundary conditions. However, the coupled atmosphere–ocean models typically used for climate prediction have an interactive ocean model, complete with its own errors and biases. It seems unlikely that one would see such a large improvement moving from T159 to T1279 in a coupled scenario simply due to errors in the ocean model and the two-way interactions between the atmospheric and oceanic model components. The coupling process often involves a certain amount of model parameter tuning, which may also decrease the impact of improved atmospheric resolution noted here. However, there is evidence that with modest improvements to oceanic resolution one can reduce some of the large SST biases that affect global circulation, suggesting improved atmospheric resolution may still provide considerable benefits.
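
The regime-detection idea behind these conclusions (cluster the circulation data, then test whether the clusters are more pronounced than expected under a Gaussian null) can be sketched in a few lines. This is an illustration of the general method on synthetic 2-D data, not a reproduction of the paper’s statistic or data:

```python
# Minimal sketch of regime detection: cluster the data, then ask whether the
# clustering explains more variance than expected for a Gaussian process with
# the same covariance (a Monte Carlo significance test). Synthetic 2-D points
# stand in for EOF-reduced circulation fields.
import numpy as np

rng = np.random.default_rng(0)

def variance_ratio(x, iters=50):
    """Within-cluster variance fraction after a simple 2-means clustering
    (lower values indicate stronger regime structure)."""
    centers = np.array([x.min(0), x.max(0)])  # deterministic, well-separated start
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(2)])
    within = sum(((x[labels == j] - centers[j]) ** 2).sum() for j in range(2))
    total = ((x - x.mean(0)) ** 2).sum()
    return within / total

# Bimodal "high-resolution" data: two distinct regimes.
data = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
stat = variance_ratio(data)

# Null distribution: Gaussian surrogates sharing the data's mean and covariance.
mean, cov = data.mean(0), np.cov(data.T)
null = [variance_ratio(rng.multivariate_normal(mean, cov, len(data)))
        for _ in range(200)]
p_value = float(np.mean([n <= stat for n in null]))
print(f"variance ratio {stat:.2f}, Monte Carlo p-value {p_value:.3f}")
```

A low-resolution model whose climatology is close to Gaussian would fail this test even if its mean and variance were tuned to match observations, which is the step change the paper reports.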

JC comment:  This is a very good paper that uses an interesting technique for understanding the model circulations.  I have long argued that using low resolution global models to force regional climate models is a pointless exercise. I disagree somewhat with the authors that increasing the model resolution will solve these problems in coupled atmosphere/ocean models; I suspect that there are more fundamental issues at play in the coupling of two nonlinear, chaotic fluids.

Communicating the role of natural variability in future North American climate

Clara Deser, Reto Knutti, Susan Solomon, Adam Phillips

Abstract.   As climate models improve, decision-makers’ expectations for accurate climate predictions are growing. Natural climate variability, however, poses inherent limits to climate predictability and the related goal of adaptation guidance in many places, as illustrated here for North America. Other locations with low natural variability show a more predictable future in which anthropogenic forcing can be more readily identified, even on small scales. We call for a more focused dialogue between scientists, policymakers and the public to improve communication and avoid raising expectations for accurate regional predictions everywhere.

Citation:  Nature Climate Change, published online 26 October 2012  DOI: 10.1038/NCLIMATE1562  [link]

The abstract and title hide some important scientific results.  Excerpts:

Model projections are inherently uncertain. But the results shown here suggest that often models may disagree because future changes are within the natural variability. Such natural fluctuations in climate should be expected to occur, and these will augment or reduce the magnitude of climate change due to anthropogenic forcing in many parts of the world. Such intrinsic climate fluctuations occur not only on interannual-to-decadal timescales but also over periods as long as 50 years.

Through an examination of a large ensemble of twenty-first century projections produced by the CCSM3 climate model, we have illustrated that even over the next 55 years, natural variability contributes substantial uncertainty to temperature and precipitation trends over North America on local, regional and continental scales, especially in winter at mid and high latitudes. Such uncertainty and regional variation in projected climate change is largely a consequence of the chaotic nature of large-scale atmospheric circulation patterns, and as such is unlikely to be reduced as models improve or as greenhouse-gas trajectories become more certain.

It is worth noting that downscaled information derived statistically or dynamically from global climate model output will add local detail, but remains dependent on the overlying larger-scale field, and cannot mitigate the uncertainty of projected climate trends due to natural climate variability.

JC comment:  The significance of this paper is the use of a large ensemble of simulations from a single model.  The large ensemble produces greater variability (not surprisingly).  While the simulations show that all this natural variability averages out globally (not surprising, since these simulations were not initialized) and produces a steady trend in global average temperatures, the regional and continental scale averages clearly show the dominance of natural internal variability.  Again, this paper argues that regional downscaling from these global model simulations is not useful, and that there is very large uncertainty on regional scales, even on scales as large as the continental U.S.  It is good to see these authors paying attention to natural internal variability, rather than just relegating it to ‘noise’.
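
The large-ensemble point can be illustrated with a toy experiment: give every ensemble member the same forced trend plus an independent realization of internal variability, and look at the spread of the fitted 55-year trends. All numbers below are illustrative choices, not CCSM3 values:

```python
# Toy large-ensemble experiment: every member shares the same forced trend
# but has its own realization of internal variability (AR(1) noise).
# Trend, noise amplitude, and ensemble size are illustrative, not CCSM3's.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(55)              # a 55-year projection window
forced = 0.02 * years              # hypothetical forced warming: 0.2 K/decade

def ar1(n, phi=0.6, sigma=0.5):
    """One realization of AR(1) internal variability (K)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

# 40 members: identical forcing, independent internal variability.
trends = np.array([10 * np.polyfit(years, forced + ar1(len(years)), 1)[0]
                   for _ in range(40)])          # K per decade
print(f"forced trend 0.20 K/decade; member trends span "
      f"{trends.min():.2f} to {trends.max():.2f} K/decade")
```

Averaging over members recovers the forced trend, but any single member (like the single realization the real world gives us) can sit well away from it, which is why the paper argues this uncertainty will not shrink as models improve.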

JC summary

Each of these papers makes an important contribution to understanding how we should use climate models and make inferences from climate model simulations. The manner in which climate models have been tuned makes them of dubious use in 20th century attribution studies.  For most regions, the climate models have little skill in projections of climate change on time scales of 50 years.

Nevertheless, we have the recent recommendations from the NRC National Strategy for Advancing Climate Models, which assume that climate models are useful for these applications, with a focus on supporting decision making.  These papers provide further evidence that climate models are not fit for these purposes, and that we are not currently on a path that is likely to improve this situation.

378 responses to “Climate model discussion thread”

  1. 1) The energy equation of the climate models lacks an important energy term: the potential energy of the atmosphere.
    2) The concept of backradiations inherently “creates” energy in the atmosphere, which reflects in the energy balance equation. The concept is incorrect because energy cannot be created.
    These are some of the causes of uncertainty in climate models. There is a lot of fixing that needs to be done.

    • I agree with the late Dr. Michael Crichton:

      “I am thoroughly sick of politicized so-called facts that simply aren’t true. It isn’t that these “facts” are exaggerations of an underlying truth. Nor is it that certain organizations are spinning their case to present it in the strongest way. Not at all–what more and more groups are doing is putting out is lies, pure and simple. Falsehoods that they know to be false.” ~Michael Crichton

      http://evilincandescentbulb.wordpress.com/2012/11/08/real-choice-in-the-disinformation-age/#comments

      Now our society is disintegrating, and our only hope is a most unpleasant truth for some: rigorous honesty, and nothing less:

      https://omanuel.wordpress.com/about/#comment-1743

    • @Nabil Swedan
      2) The concept of backradiations inherently “creates” energy in the atmosphere

      No it doesn’t. It simply moves energy from the air/co2 to the land & oceans.

      • David Springer

        No, it doesn’t move energy from air to surface unless the air is warmer than the surface, and that’s rarely true. What it does is restrict the rate of radiative cooling of the surface, like a layer of insulation. If there’s water to evaporate, the restriction does nothing, as it just forces more evaporation. Evaporation is the primary means of surface cooling and sunlight is the only means of surface warming. Write that down.

      • A cooler atmosphere can still radiate heat to a warmer earth.
        But of course it cannot conduct or convect it.

        And this doesn’t mean the net radiative effect must be from cooler atmosphere to warmer earth.

        And backradiation is anyway not the basic mechanism of AGW – which is that the general cooling of the earth to the atmosphere is slowed down by the atmosphere being less cool than before, due to CO2 trapping heat that previously radiated out.

      • “And back-radiation is anyway not the basic mechanism of AGW ”
        Handel,
        In substance, there is no difference between back-radiations and greenhouse gas effect. Both concepts “create” energy in the atmosphere. This is incorrect. Energy cannot be created.
        It does not matter whether these concepts are real or not. What counts at the end of the day is the climate energy model. So far, the models based on the concepts of back-radiation and greenhouse gas effect have not done us any good. A lot of model fixing is needed.

      • “No it doesn’t. It simply moves energy from the air/co2 to the land & oceans.”
        Handel,
        You need external force or energy to transfer energy from the colder atmosphere to the warmer surface. Where is this force? We inherently created it in the atmosphere by the concept of back-radiations.

      • Particular Physicist

        Nabil Swedan

        Clearly the atmosphere as a whole is presently cooling considerably.

        Is this really true?
        Do you have a reference on this?

      • Particular Physicist,
        Below are some references. The internet is full of similar articles. You just need to do some simple math, as explained. Yes, the atmosphere is cooling considerably; it is not warming as they say.

        1) David W. J. Thompson, Susan Solomon: Recent Stratospheric Climate Trends as Evidenced in Radiosonde Data: Global Structure and Tropospheric Linkage. American Meteorological Society, 2005.

        2) William Randel, Keith Shine, Dian Seidel, Dave Thompson, John Austin, Nathan Gillett, Jim Miller, Philippe Keckhut, Carl Mears: Stratospheric Temperature and Water Vapor Trends. The National Center for Atmospheric Research, NCAR, 2007.

        See also the figures and data posted on my website. Just click on my name, go to the references on page 33 and the figures on page 38 of Article-2, Conservation of the Earth’s Exchanged Energy.

      • Particular Physicist

        Nabil Swedan: Clearly the atmosphere as a whole is presently cooling considerably

        So what do you claim are the temperatures at various altitudes, compared to what they were before?

      • Particular Physicist,
        We do not have data that goes back to 1750. The only way is to calculate based on theory or extrapolate the data that we have. It is an approximation. At the lower troposphere we have two values of surface temperature rise. One is over land, which is contaminated by human activities, and another over water, which is really what matters. Over land the total surface temperature rise may be 0.6-0.8 degrees C. Over water, it is about 0.4 to 0.5 degrees C. At the center of gravity, roughly 5.5 km above surface, there is no change, and there can be no change. The temperature of the lower stratosphere has probably decreased by 19 degrees C. Only one study by a Greek researcher estimated the stratospheric cooling at 17 degrees C in 2005. At the mesopause, the reduction is about 0.05 degrees C, which is negligible.
        As you can see, there will come a time when the upper atmosphere is cold enough to deposit carbon dioxide as dry ice. When this happens, the effect of carbon dioxide on global warming ceases naturally.

      • Nabil Swedan | November 17, 2012 at 10:46 pm |

        So essentially you’re claiming that

        the temperature of the lower stratosphere has probably decreased by 19 degrees C ….. [and] there will come a time when the upper atmosphere is cold enough to deposit carbon dioxide as dry ice. When this happens, the effect of carbon dioxide on global warming ceases naturally.

        When did this alleged cooling start, and what is driving it?
        And where is the data for all this?

      • Erica,
        This is not an alleged cooling; it is a fact that scares the proponents of the mainstream science. The cooling would be impossible to explain with the greenhouse gas effect. I gave a couple of references above. You can also research on your own; there are plenty of studies that document the cooling. The cooling of the atmosphere is an undisputed fact, and the atmosphere is not warming as they say.
        Take a look at the record of the earth as documented in the Antarctic Ice core samples. Surface warming due to CO2 has had an upper limit of 16 to 18 degrees C. It always stopped, naturally, and this present warming will stop as well when carbon dioxide deposits as dry ice. The cooling as well as the upper limits can be calculated mathematically. If you are interested, click on my name and go to Article-2, conservation of the earth’s exchanged energy.

      • Nabil, If you are going to claim the atmosphere is cooling, you need to provide the evidence, not just tell people to go and find the evidence themselves. Otherwise it’s just a bald claim, merely alleged cooling.

      • Erica,
        I copied and pasted from the comment above. Here are the references:

        1) David W. J. Thompson, Susan Solomon: Recent Stratospheric Climate Trends as Evidenced in Radiosonde Data: Global Structure and Tropospheric Linkage. American Meteorological Society, 2005.

        2) William Randel, Keith Shine, Dian Seidel, Dave Thompson, John Austin, Nathan Gillett, Jim Miller, Philippe Keckhut, Carl Mears: Stratospheric Temperature and Water Vapor Trends. The National Center for Atmospheric Research, NCAR, 2007.

        There is also a large number of similar papers available on the internet.

      • Hi Nabil

        Re: Papers supposedly supporting your claim that the atmosphere is cooling.

        1. Can you provide some actual relevant quotes from those two papers you mention?

        2. And some of the links you say exist to online papers? Also with relevant quotes.

      • Stratospheric cooling is part of the standard picture of GHG. You can find the papers of Thompson and Solomon (2005) and Randel (2008) discussed here

        http://scienceofdoom.com/2010/04/18/stratospheric-cooling/

        The simple expectation is that additional CO2 leads to warming of the troposphere and surface but cooling of the stratosphere as noted already in 1967 by Manabe and Wetherald, whose results can be seen in the link I give above.

      • Pekka,
        Thank you for the link. I knew that one, most likely you, would reference that line of Manabe and Wetherald (1967). The paper does not quantify or predict the extent of the cooling. They just guessed that the lower stratosphere would be cooling, when in fact the cooling covers all of the upper half of the atmosphere, from the center of gravity up. When the atmosphere as a whole is considered, the atmosphere is going through an enormous amount of cooling, not warming as Manabe and Wetherald suggested based on the greenhouse gas theory.

      • Nabil,

        It’s clear that Manabe and Wetherald produced only the first crude estimate of the effects, but the papers of Manabe and his coauthors succeeded in creating a qualitatively correct description of the physics of the atmosphere.

        Whether the atmosphere as a whole is warming or cooling is not really essential; what’s essential is the energy balance of the Earth system including oceans and top soil. Based on the empirical data, about 80% of the mass of the atmosphere has been warming and 20% cooling. A few percent have been cooling so much that the total energy of the atmosphere has not changed very much. Looking visually at the data, a little warming seems more likely than a little cooling. What can clearly be excluded is massive cooling.

        That the atmosphere may be cooling while the Earth system including the oceans is warming is not at all contradictory, because the heat capacity of the oceans is much larger than that of the atmosphere and because the temperature of the troposphere cannot rise much more than that of the oceans, while there are no similar tight restrictions for the cooling of the stratosphere. The stratosphere influences only weakly the energy balance of the troposphere. Its temperature is influenced a little more by the troposphere and the surface, but the most important factor is the local balance of absorption of solar radiation (UV in particular) and emission of LWIR by GHGs.
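
The heat-capacity point in the comment above can be checked with round numbers. A back-of-envelope sketch using standard textbook values (not figures taken from this thread):

```python
# Back-of-envelope check: the ocean's heat capacity dwarfs the atmosphere's,
# so a cooling atmosphere says little about the energy balance of the whole
# Earth system. Round textbook numbers only.
M_ATM = 5.1e18      # mass of the atmosphere, kg
CP_AIR = 1004.0     # specific heat of air at constant pressure, J/(kg K)
M_OCEAN = 1.4e21    # mass of the ocean, kg
CP_WATER = 3990.0   # specific heat of seawater, J/(kg K)

c_atm = M_ATM * CP_AIR        # total heat capacity of the atmosphere, J/K
c_ocean = M_OCEAN * CP_WATER  # total heat capacity of the ocean, J/K
ratio = c_ocean / c_atm
print(f"ocean/atmosphere heat capacity ratio ~ {ratio:.0f}")
# So 1 K of atmospheric cooling is energetically offset by only ~1/ratio K
# of ocean warming.
```

The ratio comes out near a thousand, which is why atmospheric temperature alone cannot settle whether the Earth system is gaining or losing energy.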

      • “That the atmosphere may be cooling while the Earth system including the oceans is warming is not at all contradictory, ”

        Pekka,
        This statement of yours is correct and no one can dispute it based on observation. Mathematics is in agreement as well. If you take the time to analyze the data using a simple calculator, you will find that the amount of cooling in the atmosphere is exactly equal to the surface warming. The earth as a whole gains absolutely no energy with climate change. And I repeat the word absolutely, because it’s the law of nature to conserve the total exchanged energy.

      • Hopefully it is understood by everyone, and not just Dr. Pielke, that when the oceans are cooling there is no global warming. And, according to Dr. Spencer, there is no end to the cooling in sight.

      • Nabil :
        So the summary is this then :-

        1. The atmosphere as a whole is not cooling (only the stratosphere is cooling); its heat content is roughly stable.

        2. This does not disprove that the planet as a whole is warming; it is possible for the atmosphere to remain at the same temperature while the planet warms (or cools, I suppose).

      • Erica,
        I will plot the observed energy trend of the earth subsystems engaged in climate change. They will be displayed on my website next week; a picture is worth a thousand words. For now I will use the symbols. If -ΔH is the cooling of the atmosphere, Joules; -M g ΔZ is the reduction in the potential energy of the atmosphere as a result of this cooling, Joules, where Z is the height of the center of gravity of the atmosphere; ΔHs is the enthalpy change of surface, Joules, which includes surface temperature rise and the energy associated with ice melting, then -ΔH-M g ΔZ=ΔHs. The total energy of the earth ΔH+M g ΔZ +ΔHs=0. The earth having at its disposal the work M g ΔZ, transfers the heat ΔH from the atmosphere to the surface. The net total energy of the earth is equal to zero. Global warming only moves energy within the earth from the atmosphere to the surface and it has the work to do the transfer. No radiative energy is exchanged. Math is in agreement. The concept of greenhouse gas effect or back radiations cannot explain energy transfer from the colder atmosphere to the warmer surface because the concepts assume that the atmosphere is transparent to solar radiations and inactive in the energy balance of the earth. This is a big mistake and needs to be fixed in the climate models. The concepts create “radiative forcing” from nowhere to explain the warming. Energy cannot be created. This is unacceptable.

      • @Nabil
        The net total energy of the earth is equal to zero

        Only if there is no global warming. Which nobody really knows

        Global warming only moves energy within the earth from the atmosphere to the surface

        No it doesn’t. Global warming is when the atmosphere warms due to GHGs, and so slows the cooling of the earth into it.

        The concept of greenhouse gas effect or back radiations cannot explain energy transfer from the colder atmosphere to the warmer surface
        It does not attempt to. See comment above.

      • “The total energy of the earth ΔH+M g ΔZ +ΔHs=0″
        Memphis,
        I am quoting myself above and want to correct as follows
        The total change in the energy of the earth ΔH+M g ΔZ +ΔHs=0
        Or the total energy of the earth H+M g Z+Hs=Constant
        I know that you read my mind without correction.
        Yes, based on observations, the total energy of the earth remains constant. Global warming only moves energy within the earth and no change in the total energy of the earth. It is a simple math that anyone can do.

      • Nabil
        Yes, based on observations, the total energy of the earth remains constant

        Neither you nor anyone else knows this to be either true or false.

        Global warming only moves energy within the earth and no change in the total energy of the earth.
        Eh?? No, you are thinking of something like global-moving-around-of-energy. Whereas “global warming” refers to a specific process whereby the globe as a whole warms. And whether or not this is in fact happening is a completely separate question.

        It is a simple math that anyone can do.
        Math has nothing to do with any of the above.

      • Memphis,
        Math is the absolute truth; it is the only tool we have to address the climate issue realistically and cost effectively.

      • @Swedan
        You need external force or energy to transfer energy from the colder atmosphere to the warmer surface. Where is this force? We inherently created it in the atmosphere by the concept of back-radiations.

        To think of back-radiation as creating energy is a fundamental mistake. Radiation — back- or in any other direction — merely transfers energy. And – unlike with conduction – there is absolutely nothing preventing radiant energy that leaves a cooler body, from reaching a warmer body.

      • Handel,
        Based on observations, the upper half of the atmosphere is presently cooling at about 2 deg/decade. The lower half of the atmosphere is warming, and the lower troposphere rate of warming is about 0.1 deg/decade. The average warming in the lower half of the atmosphere is therefore 0.05 deg/decade. Or the ratio between cooling and warming is 40 to 1. Clearly the atmosphere as a whole is presently cooling considerably. Where does this cooling go and how? Can the concept of back-radiations or greenhouse gas effect explain what is happening or should we look for something else?

      • Swedan,

        Any net cooling of the earth system as a whole – assuming this to be the case – can only be via radiation out to space.

        And there is no reason this could not happen at the same time as back-radiation to the earth is happening.

        Perhaps the point to make here, though, is that AGW is not a matter of extra heat coming into the system, but rather one of less heat leaving it, owing to the absorption properties of (increasing) greenhouse gasses.

      • @Nabil Swedan
        In substance, there is no difference between back-radiations and greenhouse gas effect.

        Greenhouse gasses absorb and re-transmit IR radiation, a fraction of this re-transmitted radiation (emanating from atmosphere at the lowest altitude) coming “back” to earth.

        But this back-radiation is not the basic mechanism of AGW. Most of the re-radiation goes to heat adjacent greenhouse gas particles, owing to the fact that the deeper you go into greenhouse gas clouds, the less the likelihood of there being a free path for radiation to escape without first coming into contact with other greenhouse gas particles.

        A lot of model fixing is needed.

        Yes, but not because in this scenario is energy ever imagined to be “created”. It is only ever radiated from one place to another.

      • Dear Handel,
        Radiation, temperature, and energy are the same thing. You can calibrate an infrared thermometer to indicate any of these. When the concept of back-radiation or the greenhouse gas effect is considered, we inherently raise the temperature of the atmosphere above that of the surface without effectively applying external energy. This “created forcing” from the atmosphere has no thermodynamic justification, and it creates errors in the climate models. There must be a better way.

      • Nabil, you are making a fundamental error. Energy is not being created anywhere; it is merely moving from one location to another.

        If we start at the point where the earth radiates out IR, this is a cooling of the earth. Greenhouse gasses absorb some of this, and in turn radiate some of that back to earth (from the bottom of the atmosphere), and some out to space (from the top of the atmosphere). In the middle sections of the atmosphere, the radiation from greenhouse gasses is essentially trapped there, bouncing around from one molecule to another nearby, the mean free path to either space or the earth being reduced the further the distance from the top or bottom of the atmosphere.

        Each loss is matched by a gain somewhere else. There is no creating or destroying of energy suggested anywhere.
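
        The “mean free path” picture in this exchange can be illustrated with a toy Beer-Lambert calculation (a sketch only; the optical depths below are invented for illustration, not taken from any real atmospheric profile):

```python
import math

# Toy Beer-Lambert escape calculation: the chance that emitted IR reaches
# space without being absorbed falls off exponentially with the optical
# depth (tau) between the emitting layer and the top of the atmosphere.
# The tau values below are invented for illustration.

def escape_probability(tau):
    """Probability that a photon traverses optical depth tau unabsorbed."""
    return math.exp(-tau)

for tau in (0.1, 1.0, 5.0):
    print(f"tau = {tau}: direct escape probability = {escape_probability(tau):.4f}")
```

        Radiation emitted deep inside an absorbing layer (large tau) is far more likely to be re-absorbed nearby than to escape directly, which is the “trapping” described above.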

  2. MattStat/MatthewRMarler

    I don’t know how I get here first, if I still am.

    This was a good post on what looks to me like a good development. As always, I look forward to the next 20 years of modeling improvement.

  3. Judith Curry

    I seem to have read several of these papers before. Maybe Chief or Capt’nDallas gave the links, I don’t remember, although the links in context may prove informative to this discussion. Maybe they will comment.

    My walk-away thought:

    “the good match in the hindcast for the historic surface temperatures is no longer a very good metric for determining which models are good or bad.”

    is something I formulated several years ago, probably in a discussion elsewhere, and my response then is the same today: tear the models apart and start all over again.

    I can only think of a few reasons why the standard practice in science of tearing apart something that doesn’t work or was constructed wrong was not followed. The answer seems to be political: EPA’s endangerment finding would have no science foundation. Bypassing Congress and bypassing science leaves EPA kinda out on a limb, and they keep sawing.

    How does the mantra “all models are wrong…” fit within the context of the NRC’s recommendations, if it fits at all? Not fit for purpose?

    • David Springer

      Actually the models can’t do the first 13 years of the 21st century without ditching the anthropogenic CO2 forcing. See Dessler 2012. The models were created in the late 20th century, tweaked to hindcast it, got lucky in forecasting the last decade of the century, and fell apart in forecasting the 21st century. Warming from anthropogenic CO2 was added using a classic logical fallacy ‘argument from ignorance’. Nothing they tried tuning the models with other than CO2 could hindcast twencen warming. That yields two explanations: 1) CO2 was the correct missing factor or 2) the modelers don’t know the correct missing factor.

      Failure of the model with the CO2 tweaks to duplicate the 21st century pause in warming is strong evidence that the second explanation is correct and there’s something else at work that explains twencen warming.

  4. A fan of *MORE* discourse

    Lucia’s comment: (as quoted by Judith) “We are now in a situation where models can — mostly — match 20th century data. So, the good match in the hindcast for the historic surface temperatures is no longer a very good metric for determining which models are good or bad.”

    It is fascinating that Lucia’s comment is so constructed as to deny even the possibility that — via the normal scientific processes of evolutionary improvement — *all* the mainstream models are now pretty good.

    Three Conclusions

    • The Bayesian likelihood of the proposition “James Hansen’s 1981 climate-change worldview is essentially correct” has increased monotonically for the past thirty-one years.

    • The evolving capabilities of mainstream climate models broadly and increasingly affirm this likelihood.

    • It is reasonable to foresee that further evolutionary improvements in model accuracy, achieved by better theory and by tuning against larger, better, longer datasets, will continue to affirm the proposition “Hansen was right.”

    ———–

    Remark  The striking degree to which the cognition of climate-change denialism — as exemplified by Lucia’s comment — is incapable of assimilating the increasingly strong pro-Hansen scientific evidence resembles the extent to which the cognition of conservative pundits could not rationally assimilate the evidence of scientific polling.

    WUWT, eh?   ;)   :smile:   :grin:   :lol:   :!:

    • Quinn the Eskimo

      Given (1) the regional rather than global pattern of warming, (2) the absence of “global” warming since 1997, (3) the Bigfoot/Loch Ness Monster nature of the upper tropospheric hot spot – whose existence is refuted by multiple independent and consistent datasets, (4) the vast panoply of other failed model predictions, from increasing GAST to multiple other climate parameters, (5) the circular reasoning and other logical fallacies embedded in modeling, (6) the true, rather than purposely suppressed, extent of natural variability, and (7) the extensive scientific and political misconduct surrounding the promotion of the IPCC line on CAGW, perhaps the cognition that is not assimilating is your own.

      A non-fan of more blather,

      • A fan of *MORE* discourse

        Assessments founded upon cherry-picking have proved strikingly likely to be dead-wrong, eh Quinn?

        Why might that be, the world wonders?   ;)   :smile:   :grin:   :lol:   :!:

      • Quinn the Denialist Eskimo said:

        “the absence of “global” warming since 1997…”

        Really? September 2012 had the warmest global temperatures for September ever recorded (tied with 2005, which is also after 1997), and yet we have an absence of “global” warming?

        Is this a meme that is just stuck in your neurons, and you can’t shake free of the actual reality of the data before you? Reminds me of reactions by people a few nights ago…

      • He said global warming, not global warmth. It’s basic logic.

      • Brandon Shollenberger

        Small amounts of data mean nothing as far as climate goes… until it makes for a good sound bite for global warming concerns. It just goes to show, cherry picking is bad if done by those who disagree with you.

      • R. Gates

        “the absence of “global” warming since 1997…”

        Really? September 2012 had the warmest global temperatures for September ever recorded (tied with 2005, which is also after 1997), and yet we have an absence of “global” warming?

        Yeah. An’ it was hotter’n billy goat hell in McAllen, Texas on July 25, 2012…

        But what’s that got to do with the global temperature trend?

        Nada. Rien. Nothing.

        Max

      • RG,

        How about talking about fan’s usual blathering rather than picking on just one of the points Quinn makes?

    • Fanny

      There’s a fourth conclusion you left out

      - While it has warmed over the period 1975 to 2000 at a rate around one-half that predicted by the models of James Hansen, there has been an unexplained pause in the warming since then, despite continued unabated GHG emissions

      [This one is actually what happened.]

      Max

      • Sorry, manny, this is about reality, not denialist fantasy.

      • Robert

        You misspelled fanny (it starts with a “f”)

        Max

      • I know Robert, we can increase taxes to 67% if you tune that model too.

      • I’m glad you brought up the link between your fantasies about the climate and your fantasies about the economy. In both cases, you are hiding from reality rather than making a case for the policies you support.

        Yes, the climate is warming. No, magical supply-side fairies do not determine our economy’s health.

        Feel free to join the rest of us in the real world any time.

      • The reality, Robert, is that there is a right way and a progressive way. There are plenty of reasonable ways to increase tax revenues without having a huge economic impact, and plenty of ways to mitigate and adapt without breaking budgets. I just happen to think you are an idiot.

      • Capt, your inability to face simple facts leaves you in no position to judge the intelligence of others.

        Your cowardice in the face of reality makes you a moron by choice. If it soothes you to project your stupidity onto others, feel free.

        But the facts are the facts.

      • Robert, “cowardice in the face of reality?” Oh really?

      • Really. It’s time for you to gather your courage and face reality.

      • More brain-dead comments from Robert. His opinions are driven by his ideological and political beliefs. His comments contribute about as much as Fan’s. Perhaps Robert could add smileys to make it easy to see which comments to skip over.

      • John DeFayette

        No, Robert, this is about models, just like the thread’s title says. I know there are a lot of folks who think models, especially climate models, are more real than reality….

      • A fan of *MORE* discourse

        manacker asserts: “There has been an unexplained pause in the warming”

        Manacker, please note that Hansen and colleagues have analyzed the period in question, and have issued a no-quibble prediction: Ascendancy of ice melt leads us to anticipate acceleration of the rate of sea level rise this decade.

        A No-Quibble Question  What is the probability that Hansen and colleagues will be proved right?

        My No-Quibble Answer  My estimate is 75%.

        What is *your* “no-quibble” one-number answer, manacker?

        Do other Climate Etc regulars care to offer “no-quibble” one-number answers?

        Thanks and good wishes are extended to all who answer, regardless of the number you pick!

          ;)   :smile:   :grin:   :lol:   :!:

      • Fanny

        In the 1980s Hansen’s models “predicted” global warming at a rate that was exaggerated by a factor of 2.

        Since the end of the 20th century there has been a “pause” in the global warming, despite unabated GHG emissions.

        These are unassailable facts.

        But, hey, if you want to stick your head in the sand – feel free.

        Max

      • No, Fanny, I do not bet on the weather, the climate – or Arctic ice melt.

        I suspect late-summer Arctic sea ice will continue to decline over the next few years, but I’m not too worried about it – are you?

        Max

      • A fan of *MORE* discourse

        manacker says “I’m not too worried about [ice-melt] – are you?

        Over the next ten years, it’s “no worries, mate!”

        Over the next ten centuries, plenty worried.

        Yeah, ten centuries (and longer!) is how long I shipped-for (including my kids, and their kids, etc).

        How long a voyage did *you* ship for, manacker?

        Yah care to offer Climate Etc readers a no-quibble one-number length-of-voyage answer?

        Why not try it? (that is, no-quibble answering) Yah might like it!   ;)   :smile:   :grin:   :lol:   :!:

      • I don’t have a specific probability. Given how complex the system is, I can’t even begin to decide on a suitable prior distribution, much less boil it down to a vague “sea level rise will accelerate or not” statement.

        Oh, I know!

        http://magic8ballonline.com/8ball/index.cfm?go=yes

        “Reply hazy, try again.”

      • Fanny

        You are seriously worried about (late-summer Arctic sea ice) “ten centuries from now”?

        Wow!

        Sorry. I can’t help you.

        [Cuddle up to a polar bear.]

        Max

        PS To answer your question, I personally believe that late-summer Arctic sea ice will recover every winter, as it has for hundreds of years – but there may be late summer periods of very low sea ice extent if global warming resumes its past trend and winds plus sea currents help accelerate the trend. But I am not going to “worry” about it, because there is nothing I can do about it.

      • What is the probability that Hansen and colleagues will be proved right?
        Minus 100%

      • When the models are always more wrong, every year, for 15 years, the expectation is that they will continue to get more wrong, every year, for the next 15 years.

      • Particular Physicist

        Robert | November 9, 2012 at 2:28 pm |

        Yes, the climate is warming.

        Indeed. In the alarmist fantasy world, this can happen even when the thermometers disagree for, ooh, sixteen years.

    • Steven Mosher

      Fan,

      you misunderstand the point that Lucia makes which is essentially a point about testing models and NOT a point about climate.

      There are 20+ climate models. When you look at the results there is quite a spread. lately, there has been the realization that the ‘democracy’ of models may not be such a good thing. That is, rather than including all the models results ( which leads to a big spread in projections ) we ought to select the best models.

      One proposal, of course, is to use the models that hindcast the SAT the best. Lucia’s point is that tuning to that metric precludes using that metric as a downselection criterion. She’s right.

      That doesnt deny there is warming. That doesnt deny that models are improving. That doesnt deny that we are forced to use models to understand the future and plan. what she says is that if you want to down select models ( some people argue for this ) that you probably should not use hindcast skill of SAT as your downselect criteria.

      As a lover of models i have to say she is correct.
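
      Lucia’s point can be illustrated with a toy tuning exercise (all numbers invented, not from any real model): three “models” with different structural behaviour are each tuned to the same synthetic hindcast. After tuning, all three fit the hindcast well, so hindcast skill can no longer tell them apart, yet their long-range projections diverge.

```python
# Toy demonstration that tuning to a hindcast metric destroys that
# metric's power to discriminate between models. All numbers invented.

hindcast = [0.01 * t for t in range(100)]  # synthetic "observed" anomaly series

# Three "models", each with one tunable parameter a but different
# structural behaviour in the quadratic term.
models = {
    "low":  lambda t, a: a * t,
    "mid":  lambda t, a: a * t + 1e-5 * t ** 2,
    "high": lambda t, a: a * t + 5e-5 * t ** 2,
}

def rmse(model, a):
    """Root-mean-square hindcast error for parameter value a."""
    return (sum((model(t, a) - hindcast[t]) ** 2 for t in range(100)) / 100) ** 0.5

results = {}
for name, m in models.items():
    # Brute-force "tuning": pick the parameter that best fits the hindcast.
    err, best_a = min((rmse(m, a), a) for a in (i * 1e-4 for i in range(201)))
    results[name] = (err, m(300, best_a))  # hindcast skill, far-future "projection"
    print(f"{name}: hindcast RMSE {err:.3f}, projection at t=300: {results[name][1]:.2f}")
```

      All three tuned models hindcast to within a few hundredths, while their t=300 projections span several whole units; down-selecting among them by hindcast skill would therefore be uninformative.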

      • Steven,

        Fan of more emoticons knows everything and would never misunderstand something that someone else wrote. His job is to show us all how smart he thinks he is. And how un-skeptically he accepts any pablum others produce which is CSA (climate science approved).

      • A fan of *MORE* discourse

        Steven Mosher posts  “There are 20+ climate models. When you look at the results there is quite a spread. lately, there has been the realization that the ‘democracy’ of models may not be such a good thing.”

        Steven Mosher, first let me say, that your post is outstandingly well-reasoned and respectfully-expressed. It was both a pleasure to read, and a genuinely valuable contribution to Climate Etc. Thank you!   ;)   :smile:   :grin:   :lol:   :!:

        Your post expresses a Great Truth, whose opposite also is a great Truth.

        Namely, however much we may all long for greater accuracy, a ‘democracy’ of models may perhaps be no bad thing:

        Mark Twain/Pudd’nhead Wilson  “It were not best that we should all think alike; it is difference of opinion that makes horse races.”

        Winston Churchill  “Democracy is the worst form of government except all those other forms that have been tried.”

        Learned Hand  “Justice is the tolerable accommodation of the conflicting interests of society, and I don’t believe there is any royal road to attain such accommodation concretely.”

        Almost surely, it will be many decades before climate-models approach certainty and unanimity in their predictions (if they ever do!). Perhaps we ought not to be too impatient for that day to arrive!   ;)   :smile:   :grin:   :lol:   :!:

      • steven,

        I’m curious.

        Would you send your kids to fan for an education?

        I have a great deal of respect for most posters here. I am inclined to respect fan as a college professor. Yet he repeatedly proves that such an achievement means less than one might normally assume.

        Though to be fair, he has a long way to go before he matches Australian professor Stephan Lewandowsky’s achievements in showing how to debase a PhD.

      • @Mosher
        One proposal, of course, is to use the models that hindcast the SAT the best. Lucia’s point is that tuning to that metric precludes using that metric as a downselection criterion. She’s right.

        Racking my resources to understand this problem. You pick the models that best hindcast some empirical data X, then tune them to hindcast X even better. Then see what the physical implications of this tuning suggest. Why is this bad?

      • Mosh said:

        “There are 20+ climate models. When you look at the results there is quite a spread. lately, there has been the realization that the ‘democracy’ of models may not be such a good thing. That is, rather than including all the models results ( which leads to a big spread in projections ) we ought to select the best models.”

        Also consider the alternative models from all the skeptics. Just from commenters to the Climate Etc site, I have tabulated about 40+ alternative models that have been proposed.

        What happens if we take the aggregate of these 40+ Climate Etc-sanctioned models and see what comes out the other end?

        As suspected, what comes out the other end is a pile of crap.

        Democracy of models is not such a good thing, and it takes policing by people who actually care to separate the wheat from the chaff. The owners of this blog do not care, as they seem to actually desire adding more FUD to the mix. This seems contrary to the objectives of “taming the uncertainty monster”, but it makes perfect sense to those of us who understand the principles of framing. The general idea – borrowed from political campaigning – is that if you take control of the argument and speak as an authority, then you can condone the crap that comes out of your end while you hammer the opposition as evil wrong-doers (see Rove, Karl). In other words, the controllers of the meme never have to face up to eating their own dog food, or to defending their own failings.

        So the maintainers of this blog will never ever say anything bad about the crackpot models that come from its dearly beloved “denizens”. You see, the crackpots are the lifeblood that feeds the uncertainty monster.

    • Fan,
      It is fascinating that Lucia’s comment is so constructed as to deny even the possibility that — via the normal scientific processes of evolutionary improvement — *all* the mainstream models are now pretty good.

      There is no such process as “evolutionary improvement”. Evolution is not a matter of “improvement” but of adaptation. Evolution operates in the moment and has no eye to the future. The confounding of evolution and progress is an error that harks back to the 19th century and continues to be a straw man that creationists love to flog. Human purposes may seek to improve things – better axes, better knives, better climate models. Evolution has one sole purpose, and that is the production of the next generation. That new generation is slightly refined to better meet the environment of their parents. If that environment changes, then any “better” becomes moot.

      Current model quality has to be assessed now based upon whether they can reasonably reflect emergent climate. They have been parametrized to the point that matching 20th C data is no longer a measure of quality. They are all pretty good there. What we would like to see is those models converging with present emerging empirical climate data (without the need to significantly manipulate that data through adjustment first), where the sole parameters that are used are the dates and magnitudes of volcanic events and the annual contribution of CO2 from human sources. That would indicate that the models in use reflect a very profound understanding of how climate works in detail.

      • A fan of *MORE* discourse

        Duster, your comment (like Steve Mosher’s comment) is well-reasoned and a genuine pleasure to read.

        Your comment and Steve’s amount to the nearest thing that Climate Etc sees to a consensus view: the observational confirmation or disconfirmation of individual model predictions of Hansen-style “acceleration of the rate of sea-level rise this decade“ will be (arguably) the single most crucial model validation test of the coming decade.

        Assuming, for the sake of argument, that Hansen’s predictions are confirmed by both observation and models, we arrive at:

        An Interesting Question  If Hansen and his colleagues can make correct predictions based upon global thermodynamic arguments, does this mean that computational models are of secondary significance to a climate-change debate in which (let’s face it!) James Hansen’s views already have prevailed?

        The world wonders!   ;)   :smile:   :grin:   :lol:   :!:

      • I actually like Hansen’s work. On one view, the simpler models are good enough to drive policy, while more sophisticated models are how we advance the science.

        there is a good argument there that fair minded folks could have.

    • Fan: I am so glad to see you back in fine form. I thought that the cowardly teabagging deniers had managed to quiet your voice. I seem to be most violently opposed to the suppression of ideas I find the most objectionable. I am sorry to have underestimated your mettle. Cheers.

      You are quite correct about the repugs believing their own “happy-talk” data. That miscalculation could just as easily be applied to the catastrophic gloom-doom of Trenberth’s latest model muddle. Since his ’97 K-T diagram has recently been proven bogus, we can only chuckle at his latest sales pitch disguised as science. TTFN

      • A fan of *MORE* discourse

        Cheers to you in return, Howard! :)   :)   :)   :)   :)

        Yes, the finest quality public discourse results from strong skepticism assessing strong science.

        As for ideology-driven cherry-picking skepticism that attacks weak and/or boring science … well regrettably, there’s seldom any shortfall in *that* variety of discourse, eh? :(   :cry:   :(  

      • The actual problem, though, is ideology-driven cherry-picking credulousness that advances the weak science of the Consensus, eh, Fan.

    • Fan,

      From where come the “longer data sets” ? :) I suppose some could be derived from historical records or proxies, but these multiply the uncertainty, regardless of the quality of the model.

    • @ Fan
      The Bayesian likelihood of the proposition “James Hansen’s 1981 climate-change worldview is essentially correct” has increased monotonically for the past thirty-one years.

      Except for the last 16, of course.

      It is reasonable to foresee that further evolutionary improvements in model accuracy, achieved by better theory and by tuning against larger, better, longer datasets, will continue to affirm the proposition “Hansen was right.”

      Why is it “reasonable” to foresee that improvements will confirm rather than change present thinking? That’s just wishful alarmosphere hype.

    • Berényi Péter

      fan, the arXiv papers you’ve linked have nothing to do with “James Hansen’s 1981 climate-change worldview”, they are completely unrelated. This unfortunate slip shows nicely how meticulously you stick to facts.

      Anyway, let me help you out this time.

      SCIENCE 28 August 1981
      Volume 213, Number 4511
      Climate Impact of Increasing Atmospheric Carbon Dioxide
      J. Hansen, D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, G. Russell

      But the hopeless epistemological confusion you live in will not go away with a correct reference. Uninhibited talk about the “Bayesian likelihood of a proposition” will still stick to you.

      I don’t even try to convince you, the rest is only for more enlightened minds.

      1. A proposition is either true or false; there is no likelihood involved whatsoever, Bayesian or otherwise.
      2. The Bayesian thing only comes in as related to the perception of a proposition, not to its genuine truth value.
      3. The scientific enquiry is not interested in how propositions are perceived (outside psychology).
      4. We are not talking about psychology right now, are we?
      5. Folks worrying about perceived truth (as opposed to genuine one) are politicians, activists & salesmen.
      6. For a proposition to be a scientific one, it is not enough to have a well-defined truth value.
      7. It also has to be arrived at by a scientifically viable method.
      8. To claim 100030001 is a prime number, because it has a nice pattern, is utterly unscientific.
      9. A heat engine can’t be discussed without relying on concepts like entropy or entropy production.
      10. Hansen’s 1981 paper does not even mention entropy.
      11. The climate system is a heat engine.
      12. Hansen in 1981 relied on computational climate models instead, which have since become paradigmatic in that field.
      13. Ancient Greeks were capable of constructing working mechanical models of the solar system.
      14. Those models even had some utility value, could be used in astrology.
      15. Until Newton, no one understood how the solar system actually works; lunar movements were a nagging enigma.
      16. Newton’s discovery has not come from modelling, but from analysis of laws governing motion, based on experiments.
      17. Fitting multiple models of high Kolmogorov complexity to a single run of a unique physical instance is utterly unscientific.
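
      The distinction drawn in points 1 and 2 above (a Bayesian probability attaches to our credence in a proposition, not to its truth value) can be made concrete with a minimal update rule; the prior and likelihoods below are invented for illustration:

```python
# Minimal Bayesian update: the posterior measures our credence in a
# hypothesis H given evidence E; H itself remains simply true or false.
# The prior and likelihoods below are invented for illustration.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

credence = 0.5
for _ in range(5):  # five observations, each twice as likely under H
    credence = bayes_update(credence, 0.8, 0.4)
print(round(credence, 3))  # credence climbs toward 1; H's truth value is untouched
```

      Whether this kind of updating is scientifically meaningful for a proposition like “Hansen was right” is, of course, exactly what the comment above disputes.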

  5. Judith, you write “Nevertheless, we have the recent recommendations from the NRC National Strategy for Advancing Climate Models that assumes that climate models are useful for these applications and the focus is on supporting decision making. These papers provide further evidence that climate models are not fit for these purposes, and that we are not currently on a path that is likely to improve this situation.”

    This, surely, is a key issue. Most learned scientific societies have, unconditionally, bought into the hoax of CAGW. Unfortunately, as evidence becomes available that this position is really untenable, as these papers show, these learned societies carefully ignore this new information that tends to show that their support for CAGW is unwarranted.

    “Where have all the flowers gone……..When will they ever learn, when will they ever learn”.

  6. This opening sentence has a word missing between ‘suggesting’ and “that provide” –> “Here are some new papers by leading climate modelers suggesting that provide new evidence and arguments on the concerns that I have been raising.”

    Suggesting what? Or perhaps “suggesting” should come out?

    Beyond that, the fact that the models are all tuned to the mean temperature estimates of the surface statistical models means they are unrealistic to begin with. Their unsupportable assumption is that these estimates are accurate. So here we have one set of models trying to explain the output of another set of models.

    But the fact that they are finally taking natural variability seriously is indeed good news. Maybe someday they will even figure it out.

    • thx, wording fixed

    • How is the catchall excuse for inaccuracy “natural variability” not equivalent to “operation of factors we don’t understand”? This is where time, funds, and efforts should be focussed, rather than just using it as a convenient “chaos” bucket for disposal of error.

  7. Is this study relevant?

    http://www.sciencemag.org/content/338/6108/792

    It would seem to suggest that the models are probably hopelessly conservative.

    Perhaps that might help to explain why climate scientists, the science community and bodies like the IPCC have erred so much on the side of caution, and failed to adequately sound the alarm…

    http://www.sciencedirect.com/science/article/pii/S0959378012001215

    • Kevin ‘the missing heat’ Trenberth?

    • That second paper is hilarious.

      Abstract

      Over the past two decades, skeptics of the reality and significance of anthropogenic climate change have frequently accused climate scientists of “alarmism”: of over-interpreting or overreacting to evidence of human impacts on the climate system. However, the available evidence suggests that scientists have in fact been conservative in their projections of the impacts of climate change. In particular, we discuss recent studies showing that at least some of the key attributes of global warming from increased atmospheric greenhouse gases have been under-predicted, particularly in IPCC assessments of the physical science, by Working Group I. We also note the less frequent manifestation of over-prediction of key characteristics of climate in such assessments. We suggest, therefore, that scientists are biased not toward alarmism but rather the reverse: toward cautious estimates, where we define caution as erring on the side of less rather than more alarming predictions. We call this tendency “erring on the side of least drama (ESLD).” We explore some cases of ESLD at work, including predictions of Arctic ozone depletion and the possible disintegration of the West Antarctic ice sheet, and suggest some possible causes of this directional bias, including adherence to the scientific norms of restraint, objectivity, skepticism, rationality, dispassion, and moderation. We conclude with suggestions for further work to identify and explore ESLD.

      • Brandon Shollenberger

        Edim, I thought you were joking, perhaps making a parody, when I first read this. I didn’t think anyone could write a scientific paper with that first sentence.

    • “seem to suggest that the models are probably hopelessly conservative”

      Aside from the weaselly “seem”, “suggest” and “probably”…

      Are you implying that Our Best Knowledge of The Climate is not included in The Models?

      Andrew

    • idunno

      You ask:

      “Is this study relevant?”

      Probably not.

      Max

    • I hope you are being sarcastic but sadly you probably are serious. Not going to click on your links as I’m late for happy hour. Erred on the side of caution and failed to sound the alarm in the same way that Romney had principled positions and failed to change his mind enough.

  8. Judith Curry

    As someone who has been rationally skeptical that models are useful for projecting 21st century global or regional climate trends, your conclusion confirms for me that this is still the case:

    Nevertheless, we have the recent recommendations from the NRC National Strategy for Advancing Climate Models that assumes that climate models are useful for these applications and the focus is on supporting decision making. These papers provide further evidence that climate models are not fit for these purposes, and that we are not currently on a path that is likely to improve this situation.

    Max

  9. Trenberth says hot models are better. Here is the NCAR press release:

    BOULDER—Climate model projections showing a greater rise in global temperature are likely to prove more accurate than those showing a lesser rise, according to a new analysis by scientists at the National Center for Atmospheric Research (NCAR). The findings, published in this week’s issue of Science, could provide a breakthrough in the longstanding quest to narrow the range of global warming expected in coming decades and beyond.

    NCAR scientists John Fasullo and Kevin Trenberth, who co-authored the study, reached their conclusions by analyzing how well sophisticated climate models reproduce observed relative humidity in the tropics and subtropics.

    The climate models that most accurately captured these complex moisture processes and associated clouds, which have a major influence on global climate, were also the ones that showed the greatest amounts of warming as society emits more greenhouse gas into the atmosphere.

    “There is a striking relationship between how well climate models simulate relative humidity in key areas and how much warming they show in response to increasing carbon dioxide,” Fasullo says. “Given how fundamental these processes are to clouds and the overall global climate, our findings indicate that warming is likely to be on the high side of current projections.”

    The research was funded by NASA.

    —–Moisture, clouds, and heat—–

    The world’s major global climate models, numbering more than two dozen, are all based on long-established physical laws known to guide the atmosphere. However, because these relationships are challenging to translate into software, each model differs slightly in its portrayal of global climate. In particular, some processes, such as those associated with clouds, are too small to be represented properly.

    The most common benchmark for comparing model projections is equilibrium climate sensitivity (ECS), or the amount of warming that eventually occurs in a model when carbon dioxide is doubled over preindustrial values. At current rates of global emission, that doubling will occur well before 2100.

    For more than 30 years, ECS in the leading models has averaged around 5 degrees Fahrenheit (3 degrees Celsius). This provides the best estimate of global temperature increase expected by the late 21st century compared to late 19th century values, assuming that society continues to emit significant amounts of carbon dioxide. However, the ECS within individual models is as low as 3 degrees F and as high as 8 degrees F, leaving a wide range of uncertainty that has proven difficult to narrow over the past three decades.

    The difference is important to reconcile, as a higher temperature rise would produce greater impacts on society in terms of sea level rise, heat waves, droughts, and other threats.

    Clouds are one of the main sticking points, say the NCAR authors. Although satellites observe many types of clouds, satellite failure, observing errors, and other inconsistencies make it challenging to build a comprehensive global cloud census that is consistent over many years.

    However, satellites perform better in measuring water vapor, and estimates of the global distribution of relative humidity have become more reliable. Relative humidity is also incorporated in climate models to generate and dissipate clouds.

    Fasullo and Trenberth checked the distribution of relative humidity in 16 leading climate models to see how accurately they portray the present climate. In particular, they focused on the subtropics, where sinking air from the tropics produces very dry zones where most of the world’s major deserts are located.

    The seasonal drying in the subtropics and the associated decrease in clouds, especially during May through August, serve as a good analog for patterns projected by climate models.

    “The dry subtropics are a critical element in our future climate,” Fasullo says. “If we can better represent these regions in models, we can improve our predictions and provide society with a better sense of the impacts to expect in a warming world.”

    —–More accurate humidity yields higher future temperatures—–

    Estimates based on observations show that the relative humidity in the dry zones averages between about 15 and 25 percent, whereas many of the models depicted humidities of 30 percent or higher for the same period. The models that better capture the actual dryness were among those with the highest ECS, projecting a global temperature rise for doubled carbon dioxide of more than 7 degrees F. The three models with the lowest ECS were also the least accurate in depicting relative humidity in these zones.

    “Because we have more reliable observations for humidity than for clouds, we can use the humidity patterns that change seasonally to evaluate climate models,” says Trenberth. “When examining the impact of future increases in heat-trapping gases, we find that the simulations with the best fidelity come from models that produce more warming.”

    The authors focused on climate models used for the 2007-08 assessment by the Intergovernmental Panel on Climate Change. The next-generation models being used for the upcoming 2013-14 IPCC assessment were found to behave in a similar fashion, as described in a preliminary analysis by the authors in a supplement to their paper.

    “In addition to providing a path forward and focus for improving models, results strongly suggest that the more sensitive models perform better, and indeed the less sensitive models are not adequate in replicating vital aspects of today’s climate,” write the authors in the paper.

    The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
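The ECS arithmetic in the press release is easy to sketch. The release itself gives no formulas; the snippet below uses the common Myhre et al. (1998) logarithmic fit for CO2 forcing, and the two sensitivity parameters (0.30 and 0.81 K per W/m²) are illustrative assumptions, not values from the study:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 forcing fit (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

f2x = co2_forcing(560.0)       # forcing from doubled CO2: ~3.7 W/m^2

# Equilibrium warming = sensitivity parameter (K per W/m^2) times forcing.
no_feedback_ecs = 0.30 * f2x   # Planck-only response: ~1.1 C
model_like_ecs = 0.81 * f2x    # effective parameter that yields ~3.0 C
```

A 3.0 C equilibrium sensitivity is about 5.4 F, consistent with the release’s “around 5 degrees Fahrenheit (3 degrees Celsius)”; the model spread quoted (3 to 8 F) then corresponds to different effective sensitivity parameters.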

    • Trenberth says “hot models are better”

      To quote Mandy Rice-Davies of earlier fame:

      “Well, he would, wouldn’t he?”

      Max

  10. I always heard that climate models ‘are ten years away’ …. and that was more than 10 years ago.

    • To say that climate models are ten years away is a couple of orders of magnitude wrong. What they are forecasting did happen more than one hundred thousand years ago. There is no actual data that supports their extreme warnings of much higher temperatures and rising oceans in our future. Forget the models and look at the actual data. If the data ever gets outside the bounds of the past ten thousand years, then we might begin to worry. While the actual data stays well inside the bounds of the past ten thousand years, we should expect a climate that will stay well inside the bounds of the past ten thousand years. A manmade fraction of a trace gas can only make a fraction of a trace of a difference. Natural variability has ruled climate for billions of years. CO2 has been in the thousands during climate periods that did cool. Actual calculations of warming due to CO2 have been done by scientists on different sides and both sides say that the actual number for doubling of CO2 is only one degree C. The model forecasts have got hotter every year for 15 years while earth temperature has not exceeded the high in 1998. The king has no clothes on.

  11. ” The impacts of parameter tuning on climate sensitivity was less than anticipated.”

    It hardly sounds like the authors agree with you.

    Rather, the business of testing and improving models goes on, mostly ignoring your efforts to exert yourself in areas where you lack the relevant expertise.

    Do any of these papers cite you? Just curious.

    • Steven Mosher

      I’ll suggest you read the entire paper so that you can understand the sentence. Well, at least there is a hope you might.

      The business of testing the models gets harder as the paper points out because of the tuning process. And as the testing gets harder the improvements get harder to demonstrate.

      I’m going to go out on a limb and suggest that you have probably never worked on large physics models, either building them, running them, or analyzing data from them.

      That would put you down there with the worst of the skeptics when it comes to people who should be listened to about models.

      • Based on some of Robert’s comments, I’m not sure he has ever worked on anything.

      • While I am pretty certain this is not accurate, it is funny.

        Robert has a knack for sounding like he has a loose grip on understanding. But then he goes and says something totally reasonable.

      • David Springer

        You think testing climate models is bad. You ought to have a go at a new version of Windows. That’s where my experience in model testing comes in. Thousands of people spread over hundreds of companies participate in the first round of testing for about a year. The second round involves millions of customers. We try to get everything working in the first round but inevitably miss a lot.

        Count your blessings.

      • Gerald Browning

        I have training in partial differential equations and numerical analysis and have built general circulation models. They are a piece of junk. The hydrostatic equations are ill-posed, the parameterizations are not physical, and the cascade of enstrophy is kept under control by excessively large dissipation.

      • Gerry, glad to see you here. I attended a seminar of yours at NCAR 35 years ago. Big fan of your work. I know how frustrating it can be to get modelers to pay attention to the math.

  12. Where is Fred to explain how models are not tuned when ya need him?

    With AQUA indicating an effective SST (skin layer, which is what a photon would see) of ~21.1 C and BEST estimating a global land surface temperature average of 8.9 C, the benefit of anomalies is shot to heck if your model is tuned to a 288 to 289 K average surface temperature with a 17 C average SST. Pretty much all of the fans of the climate science status quo are still avoiding that little uncertainty monster.

  13. Models are nice
    Like sugar and spice
    They’re not for prediction
    Only for fiction

    Max

  14. No comment about this paper just released yesterday?:

    https://www2.ucar.edu/atmosnews/news/8264/future-warming-likely-be-high-side-climate-projections-analysis-finds

    Uncertainty runs both ways.

    • Yes it does, that is why experts get the big bucks or the boot.

      Kinda like if you model sensitivity without considering natural variability being amplified by land use and CO2, you come up with an outrageously high estimate just before you update your resume. Oh wait, that is the real world. How is serial overestimation handled in academia? “NCAR scientists John Fasullo and Kevin Trenberth..”

      • Poor little Cap. Pissing his pants in fear of a little empirical science.

        Time to put on your big boy pants and face the data.

      • You’re nuts. K,T and F papers have become a sad joke. Remember 0.9 +/- 0.18 Wm-2?

      • -.9 +/- .18 w/m2 at the TOA remains a pretty good estimate, probably just a tad high and needing a larger uncertainty band. Should probably be -.8 +/- .2 w/m2. But certainly Trenberth and Fasullo were not that far off. We can find somewhere around .5 w/m2 just in the ocean heat content increases.

      • K&T 0.9 +/- 0.18 Wm-2 would have been a fair TOA estimate. But they thought it was a good surface estimate and stuck it smack dab on the bottom of their cartoon, right by the missing 20 Wm-2.

      • The K, T + F figure of 0.9 Wm-2 was rounded up from a Hansen et al estimate, I believe, and plugged in. It is what is known as a “plug” number, since it could never be actually found between all those big numbers and error ranges.

        Max

      • Max, yep, it was based on Hansen’s model. That is not a problem. Now stating that it is accurate to +/- 0.18 Wm-2 is a problem. Even Hansen thought it was wrong, closer to 0.6 +/- .5 or thereabouts. Then to take that model estimate of TOA imbalance and paste it right on top of the ocean in their cartoon was For Effect, along with the oversized DWLR arrow. I have to admit, though, that cartoon did create a stir, along with the 20 Wm-2 that was missing.

        Then Trenberth has the balls to publicly bitch about a relatively small drift error in the UAH data set while he privately confesses that he can’t figure out where all the missing heat went. It is like some BBC farce.

      • R. Gates | November 9, 2012 at 2:58 pm

        “……. We can find somewhere around .5 w/m2 just in the ocean heat content increases….”

        Yep, that precisely measured 0.09 degrees C temperature rise for the upper 2000 meters of the entire world’s oceans …. over 55 years.

        I marvel at the precision of our instruments and the power of our statistics…..
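For what that 0.09 °C figure implies energetically, a back-of-the-envelope conversion (using assumed round-number seawater properties, not values from any of the papers under discussion) lands in the same ballpark as the ~0.5 W/m² cited upthread:

```python
rho = 1025.0        # seawater density, kg/m^3 (assumed round number)
cp = 3990.0         # seawater specific heat, J/(kg K) (assumed)
depth = 2000.0      # layer depth in meters
dT = 0.09           # warming over 55 years, from the comment above
seconds = 55 * 365.25 * 24 * 3600

energy_per_m2 = rho * cp * depth * dT   # J per m^2 of ocean surface
flux_ocean = energy_per_m2 / seconds    # ~0.4 W/m^2 averaged over the ocean
flux_global = flux_ocean * 0.71         # oceans cover ~71% of the Earth
```

So a “tiny” 0.09 °C spread over 2 km of ocean corresponds to roughly 0.3-0.4 W/m² sustained for half a century; whether the instruments can actually resolve 0.09 °C is, of course, the commenter’s point.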

      • Particular Physicist

        Then Trenberth has the balls to publicly bitch about a relative small drift error in the UAH data set while he privately confesses that he can’t figure out where all the missing heat went to. It is like some BBC farce.

        It’s a new genre, doncha know – the BBC Climate Farce.

      • David Springer

        Gates, why should subsequent estimates be deemed more reliable than earlier estimates? If the error margin was wrong before why is the new estimate any better?

        Here’s what’s going on with most people who can’t pretend to be expert enough to make the estimate but must rely on the estimates for policy decisions. By policy decision I’m including average people in the voting booth.

        “Fool me once, shame on you. Fool me twice, shame on me.”

        I was never fooled but a great number of people were. Now they’re less inclined to believe the people who fooled them the first time. You really only get one chance to build a reputation. Alarmists blew that one chance by either pretending to be or actually believing to be far more confident about their conclusions than was warranted. The average voter doesn’t know when some expert is right or wrong about what might happen in the future but they sure as f*ck know when someone makes a prediction and is proven wrong by the passage of time.

      • I would suggest that it is a bit too early to say how the models might fare over a full range of natural variability. With the majority of natural variability factors set to the cool side of things over the past decade, and even those that aren’t running quite weak (i.e. solar), I think we need a bit more time before judging how right the models might actually be getting the dynamics. It has been all too easy for the denialists to dismiss Foster & Rahmstorf and others who have made a very good attempt to isolate the natural variability and show the underlying warming from CO2 that is still present; we will need the natural variability to align once more to the warm side of things to see if the underlying .14 to .17 C of warming per decade from greenhouse gas increases is a valid finding from FR 2011.

        The models will never be right about the actual temperatures and warming, as they can never predict what natural variability will do, but if FR 2011 and other attribution studies are found to be valid going forward as they make specific findings, then those attributions can be matched up to the models and the underlying dynamics (which is really what the models are all about) can be validated.

      • The models will never be able to predict natural variability. They can be consistent, though, if the physics are sound. That is the object: consistency, and learning from the anomalies. There is a consistent 0.4 C per century trend in Tmin from at least 1900. That is more likely a long-term natural variability and/or land use trend. The most consistent CO2 trend is over land masses. That is an indication of amplification. With less tuning and more learning, the models can reduce what uncertainty is reducible.

      • Captn. said:

        “The models will never be able to predict natural variability.”
        __
        Exactly. Hence they will always be wrong, and should not be compared to short-term weather forecasting. The models are about dynamics. Dynamics, dynamics, dynamics. FR 2011 and other papers are valuable because they show the basic dynamics are essentially correct (yep, CO2 continues to be a positive forcing on the climate, duh), and they even show the percentage of unknown forcing in the climate that we have yet to understand.

      • But Gates, the models are not consistent. The more sensitive models happen to almost match the unmodelable natural variability, and they assume that means those models are more correct. Some might call that cherry picking.

        Now, where are ALL the models consistently wrong?

      • Gates, Here I will give you a couple of hints, ACC and SSW, Wall energy transfer.

      • You are right, Captn., the models are not consistent, as each makes different underlying assumptions about the dynamics. Each assumption has some justification, and so none of them can necessarily be seen as more valid than another. Therefore an ensemble mean is the best way to get a good snapshot of the underlying dynamics across a wide selection of assumptions. Using an ensemble mean we can then get a very good feel for where there is broad agreement on how much forcing any given element is causing relative to the others. In doing so we see that CO2 (and increasingly methane and N2O) are causing an underlying positive forcing of somewhere around .17 C +/- .04 C of warming per decade. The biggest unknowns remain aerosols and clouds, though this past year a lot has been learned, and future climate models will begin to get the aerosol and cloud forcing even better.
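A figure of the “.17 C +/- .04 C” form is just the mean and spread of trends across an ensemble. A toy illustration with invented per-decade rates (not actual CMIP output):

```python
# Hypothetical warming rates (C per decade) from eight imagined models.
rates = [0.13, 0.21, 0.17, 0.15, 0.19, 0.22, 0.14, 0.18]

mean = sum(rates) / len(rates)                                # ensemble mean
var = sum((r - mean) ** 2 for r in rates) / (len(rates) - 1)  # sample variance
spread = var ** 0.5                                           # sample std dev
```

With these invented numbers the mean is ~0.17 C/decade and the spread ~0.03 C/decade; the substantive debate in the thread is whether averaging across mutually inconsistent models is meaningful, not the arithmetic.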

      • Gates, then use an ensemble and don’t cherry pick. You can’t do both.

      • Captn,

        I’m very familiar with each and have actually been in personal discussions with Trenberth on the SSW issue specifically. Turns out he did a paper about SSWs for his thesis. Still trying to get an estimate of how much energy (in Joules) was lost by the Earth’s energy system to space during the large 2009 SSW event. Anyone having any credible estimate of this, or a method we could use to determine a good estimate, please chime in. The EP flux for this episode in January 2009 shows a huge energy burst moving from the troposphere into the stratosphere and then into the mesosphere, but quantifying it seems to be elusive. By the way, there is a close association between SSWs and ENSO, the QBO, and the MJO, which few people realize. Quite fascinating really.

      • Gates, ” Anyone having any credible estimate of this or method we could use to determine a good estimate please chime in. ”

        I don’t think so. The 20 Wm-2 that K&T initially missed was mainly Arctic mixed-phase clouds. So there is a rough ballpark. Palivoni ? and Solomon did a tropical stratospheric ozone depletion paper recently. That seems to be a part of the same phenomenon. There was another paper on the ocean energy shift from below the equator to above, which appears to be part of the same regime change. It looks like the start of the AMO in my opinion, but we didn’t have satellites in the mid 40s.

      • R. Gates

        They’re not there yet, but some day climate models may be able to “predict” a piece of the piece.

        But they will always be limited due to their myopic fixation on anthropogenic climate forcings to the exclusion of everything else.

        A perfect example of this limitation is the projection in both TAR and AR4 of global warming of around 0.2C per decade – when, in actual fact, the past decade showed slight cooling despite unabated GHG emissions and concentrations reaching record levels.

        Oops!

        Max

      • Reposted from below:

        It seems to be a difficult estimate to make. I’m also in correspondence with Dr. Karen Smith who specializes in SSW events, and she’s working on it as well.

        For those who are not familiar with the issue, during many NH (and some SH) winters, there are large sudden stratospheric warming events that essentially, through a variety of means, transfer a large amount of energy from the tropics, northward and upward, into the stratosphere and then even up into the mesosphere and out into space. The polar vortex is disrupted by these events, either being greatly offset or actually split, often bringing very cold outbreaks to lower latitudes (this happened in both 2009 and 2012).

        What we are trying to figure out is some approximation of how much energy is actually released from the Earth’s climate system during one of these events. There is some data on temperature increases and EP flux. Could these be enough to at least get a ballpark figure on the energy released?

      • F&R had a nice example; it may be true, but on the other hand, what they really did was show that if it had not cooled, it would have warmed. Natural variability and the role of clouds and aerosols are here in the real world, not to be modeled away.

      • Gerald Browning

        As soon as you perform a convective adjustment, you are no longer solving the dynamics of the atmosphere. And adding an artificial boundary layer parameterization invalidates the dynamics near the surface. Using unrealistically large dissipation destroys the correct physical cascade of enstrophy. Can you cite the 3 conditions needed for a numerical method to converge to the solution of a partial differential equation (Lax equivalence theorem)?
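For readers without the reference to hand, the theorem Browning alludes to can be stated (standard textbook paraphrase) as:

```latex
\textbf{Lax equivalence theorem.}\quad
For a \emph{well-posed} linear initial value problem $u_t = A\,u$,
a \emph{consistent} finite-difference scheme converges to the true solution
as the grid is refined \emph{if and only if} it is \emph{stable}.
```

The three conditions are thus well-posedness of the continuous problem, consistency of the discretization, and stability of the scheme; Browning’s complaint upthread is that the hydrostatic equations already fail the first.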

      • Cap, you’re scientifically illiterate, and your attacks on the professionals are merely pathetic.

        Learn some basic — and I’m talking high-school level — science, and you might start to be in a position to attack the science that scares you without completely embarrassing yourself, as is the case here.

    • Steven Mosher

      what’s required is specifying the tests ahead of the runs. pretty basic.
      Those tests should be driven by metrics we need to know to calculate impacts.. sea level etc.

      Finding matches after the fact is a disastrous test methodology.
      been there. fooled myself.

      • Agree, Steven. Confirmation bias is the devil we all face. But FR2011 has now specified exactly what tests it used and can be run for any year, and other such attribution studies have done the same. They’ve laid down exactly what metrics are to be used.

      • Brandon Shollenberger

        I like how R. Gates says we know “exactly what… metrics to” use. The FR2011 paper he references is relatively simple curve-fitting with opportunistic choice of parameters. Part of the process requires looking for the “best” lags for each forcing with no regard for any physical theory. Not only that, the lags for each forcing can be different for each temperature series. In fact, they can be different if you just use different subsets of the data!

        Apparently we know “exactly” what to do because we know to “take the answer that looks best without any concern for physical basis.”

      • Actually F&R 2011 is pretty close to complete dreck.

        You can remove variability from a “squiggly line” just by fitting to enough random series.

        Just because you’ve recovered a straight line doesn’t mean you’ve removed the bias in trend from the straight line.

        Had F&R not completely blown the way they modeled ENSO and aerosol forcings, there might have been hope that this study produced meaningful results.

        The fundamental flaw in their method is the assumption you can remove ENSO and aerosol effects by simple weighted subtraction of lagged indexes. You can see this easily by correlating e.g ENSO 3.4 against zonally (latitudinally) averaged temperature bands.

        This is what you find.

        This is just the tip of the iceberg. There are so many problems with the paper that a comment that fully addressed all of them would be longer than the original paper. It is very likely that the particular slope was achieved by tuning their parametric model (an exhibition of confirmation bias), which means the “key result” of this paper is nothing but a silly curve fitting exercise, devoid of any meaning.

      • Brandon Shollenberger

        Carrick:

        It is very likely that the particular slope was achieved by tuning their parametric model (exhibition of confirmation bias), which means the “key result” of this paper is nothing but a silly curve fitting exercise, devoid of any meaning.

        What makes you say this? I agree about the paper being rubbish, but I don’t know of anything to suggest their calculated slope came about due to tuning. Their slope is basically the same as you’d get if you did a simple linear regression. The extra parameters have a negligible effect on it.

        In effect, all F&R did was apply a linear regression with a bunch of extra trendless parameters. By using trendless parameters, they ensure their slope cannot be different than without those parameters. That means their decisions regarding the model (which I think are absurd) can’t have any real effect on the slope.

        If you want to have some fun, you should try rerunning F&R’s program with tweaks. For example, try removing parameters from their model. There are eight parameters, plus however you count their opportunistic lag fitting, so there’s a lot you can play around with. But what you’ll find is nothing you do will change their calculated slopes. All you can change is the squiggles.

      • Brandon:

        But what you’ll find is nothing you do will change their calculated slopes. All you can change is the squiggles.

        Um, no. If you change the lag or which parameters you include in the regression, you change the calculated value of the slope (yes, I checked; and if you look at the math, the only case where it won’t change is when the series in question is orthogonal to the temperature series, a case where it’s pointless to regress against anyway).

        If you mean something else besides what you wrote–namely that the amount of change of the calculated slope will fall inside of the uncertainty bounds of the original linear regression, that’s true. But that doesn’t tell you very much, because it doesn’t address the question of how much the uncertainty in the fitted trend is affected by the F&R procedure.

      • Brandon Shollenberger

        Carrick, when I removed the lag fitting entirely, there was almost no change to the slopes. When I removed the seasonal parameters, there was almost no change to the slopes. The same is true for when I removed the ENSO, volcanic and solar parameters. It was “basically the same” in all cases.

        I think the largest change I saw was 15%. That’s hardly notable. I could be misremembering, or I may have messed up somewhere, but if not, I think it’s fair to say the extra parameters have no real effect on the calculated trends.

        And that’s to be expected. The F&R model is just a linear regression. You can estimate the calculated trend just by examining the parameters. Any that don’t have a trend are going to have a negligible impact on the parameter that has a trend.

      • Brandon, I compared my numbers to what they report in Table 3 and get pretty similar results. The effects of the individual components (MEI, AOD, TSI) are about 10% of the total trend. Changing the lag results in comparable changes; see this.

        You might argue that this change isn’t important (with these three variables it can of course change the results by as much as ±30%, and 30% effects are usually considered a “big issue”; even 10% effects in most fields are considered whoppingly large), but that isn’t the same thing as what you stated earlier, which was “nothing you do will change their calculated slopes.”

        As I said:

        If you mean something else besides what you wrote–namely that the amount of change of the calculated slope will fall inside of the uncertainty bounds of the original linear regression, that’s true. But that doesn’t tell you very much, because it doesn’t address the question of how much the uncertainty in the fitted trend is affected by the F&R procedure.

        To calibrate: a 10% effect is roughly the estimated effect of UHI on land surface temperature trends. There are plenty of papers written on UHI, so people in the field apparently think this is important.

      • Brandon Shollenberger

        Carrick, while I did say “nothing you do will change their calculated slopes,” I also said the slopes remain “basically the same,” the “parameters have a negligible effect,” and changes to them “can’t have any real effect.” I obviously wasn’t denying some change would happen.

        As for lags, if you do F&R’s model for GISS, varying just ENSO’s lag, you get a change of less than 5% if you limit the lag to no greater than one year. That’s not much, and I think it’s unreasonable to suggest lags of greater than a year. Aside from the difficulty of justifying it on physical grounds, it’s basically data-mining for bad lags (whereas F&R did so for good lags).

        Rather than hunt for bad lags, you can just remove the lags. Or you can just remove ENSO altogether. You get changes of less than 10%. In fact, the greatest trend you can get from GISS by removing parameters is only ~10% higher than the smallest. That means there’s only a ~10% total change (for GISS).

        This doesn’t mean F&R’s results are good. It just means their method has no real effect on the calculated trend. It does no better than a linear regression.

      • Brandon Shollenberger

        I should point out that a change of 10% in a linear regression is largely irrelevant, since the period examined is a parameter, and it has at least as much of an effect (in a series as noisy as this one). If F&R’s method is basically just a linear regression, there is already at least that much variance in it.

        In fact, I just did a quick check, and if one keeps the lags and number of months constant, but changes the start and end points, you get a change of 15%+ with GISS. And obviously, you get much larger changes if you keep the endpoint constant but change the starting point (partially because shorter periods will have greater variability).

        F&R’s approach is basically just a linear regression. That means the slopes they calculated were pretty much a given. They could change a bit, but there was a certain level of precision guaranteed by the approach. What wasn’t guaranteed was the effect of the “squiggle removal.” Unfortunately, F&R did nothing to show their squiggle removal was actually right. Since their narrower certainty ranges were based upon that unvalidated squiggle removal, the key results of their paper are unvalidated.
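The claim that trendless regressors barely move a fitted trend is easy to check on synthetic data. The sketch below is not F&R’s code or data; it builds an invented monthly series with a known trend plus a trendless “ENSO-like” covariate, then compares the fitted slope with and without that covariate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 360                                  # 30 years of monthly data (synthetic)
t = np.arange(n) / 12.0                  # time in years
enso = np.sin(2 * np.pi * t / 3.7)       # invented trendless index
temp = 0.017 * t + 0.1 * enso + rng.normal(0, 0.1, n)  # synthetic anomalies

# Trend-only fit: design matrix [1, t]
X1 = np.column_stack([np.ones(n), t])
slope_plain = np.linalg.lstsq(X1, temp, rcond=None)[0][1]

# Same fit with the trendless covariate added: [1, t, enso]
X2 = np.column_stack([np.ones(n), t, enso])
slope_both = np.linalg.lstsq(X2, temp, rcond=None)[0][1]
```

Because the covariate is nearly orthogonal to the trend, the two slopes differ by only a few percent, which is Brandon’s point; Carrick’s counterpoint is that with real, imperfectly orthogonal indexes and fitted lags, the differences reach the ~10-30% level.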

      • Steve

        Could you expand on the 4 or 5 metrics that you believe are most important?

      • Steven Mosher

        I’d have to look through WGII but off the top
        of my head I’d say.

        1. SAT
        2. Sea level
        3. Drought/Flood stats.
        4. Ocean pH.

        Basically you want to improve the fidelity of the metrics that drive the cost impacts

        You’d have a second set of metrics used for downselect (say humidity, sea surface salinity, OHC)

        and a third set of metrics used for tuning.

        Since they are all interrelated, it’s rather nightmarish.

        It’s reached the stage, however, at which folks need to think seriously about test designs, model selection, model scoring, and a more systematic approach. The AR5 approach was a big improvement from what I see in the SOD.

      • Steve
        I agree with your top three, but might not agree that #4 is as important to model; although I do agree it would be interesting science.
        Another interesting question is what margin of error the models should be expected to perform within over what timescales to be considered “of value”.

      • The Skeptical Warmist (aka R. Gates)

        Hmm Steve, I’m surprised you would not include ocean heat content given the large energy reservoir there.

      • Steven Mosher

        The accuracy of the metric is driven by the cost curves.
        In my mind they started this whole thing backwards.
        You want to start with the cost curves (say the cost of sea level rise).
        If that cost curve is fairly linear with a low slope, it’s a tractable problem to work out how accurate a prediction you need. If the cost curve is wickedly exponential, then you need to know the answer to a finer accuracy OR you might just need to know threshold crossing.

        So you work the problem backwards to grind out your accuracy specs.

        You could even obviate the need for any detailed modelling based on the cost/damage function. For example, if your damage assessment was such that you could show a 50 cm sea level rise was a catastrophe, there isn’t much point in noodling around to get 1 cm accuracy. There is a scientific desire to noodle around, but pragmatically, you’d be wise to take notice if any model crossed the threshold.

        Basically, by working from the cost/damage side first you can actually assign a dollar value to the uncertainty/information and say

        “Knowledge about sea level is worth 1 billion dollars per mm.”
        This allows you to focus your research on reducing the uncertainties that carry the highest cost.

        The way we do it now is bass-ackwards. It’s not a science path optimized to provide answers that policy makers need.

        It’s driving in that direction, but there is still a huge disconnect between the end user and the designer.
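A minimal sketch of the "work backwards from the cost curve" idea Mosher describes, with made-up damage curves (the $2B/cm slope and the exponential coefficient are purely illustrative, not estimates):

```python
import math

def damage_linear(rise_cm):
    """Hypothetical linear damage curve: $2B per cm of sea level rise."""
    return 2e9 * rise_cm

def damage_exponential(rise_cm):
    """Hypothetical 'wickedly exponential' damage curve."""
    return 1e9 * math.exp(0.15 * rise_cm)

def damage_spread(damage, best_estimate_cm, error_cm):
    """Spread in projected damage caused by a +/- error in the prediction."""
    return damage(best_estimate_cm + error_cm) - damage(best_estimate_cm - error_cm)

# Same +/-10 cm prediction error around a 50 cm best estimate:
spread_lin = damage_spread(damage_linear, 50, 10)       # grows linearly in the error
spread_exp = damage_spread(damage_exponential, 50, 10)  # orders of magnitude larger
```

On the linear curve the dollar value of tightening the prediction is constant per cm, so modest accuracy may suffice; on the exponential curve the same error is worth orders of magnitude more, which is the argument for letting the damage function set the accuracy spec, or for settling for threshold detection.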

      • Steven Mosher gets a big plus 1

      • Bravo Mosher!!! At other blogs/bars, you’d get eighty-sixed for those comments, but here, Judith and many others welcome those lines of thought!

      • R. Gates, I left out OHC as I don’t see that as an input to an
        impact assessment. Now, since it drives the steric component of sea level rise it’s important. Maybe one you tune against.

        Without a list of actual inputs to impact models I’m guessing;
        it’s more the approach I’m talking about.

      • I agree that a much better “process” would have been to 1st define what is important to measure or monitor based on the potential impact to humanity and then to determine what accuracy is required for each measure over what time scale in order to have relevant data for policy making.

        Imo, that would make for an interesting post for Judith to run.

  15. It is good to see these authors paying attention to natural internal variability, and not just relegating it to ‘noise’. … – JA Curry

    The forward calculations raise the possibility that total forcing from preindustrial times to the present (right axis in the figure) has been small or even negative. If this is correct, it would imply that climate sensitivity and/or natural variability (that is, variability not forced by anthropogenic emissions) is much larger than climate models currently indicate. … – Knutti, 2003

    The climate system, however, is characterized by natural variability that produces states that are only statistically steady. … – Knutti, 2002

    Optimal fingerprint methods used for detection and attribution of climate change have shown that the observed atmospheric warming can neither be explained by natural variability, nor by an anthropogenically forced increase in greenhouse gases alone, but is best matched when taking into account different anthropogenic forcings (greenhouse gases and sulfate aerosols) together with natural forcings (variations in the solar irradiance and stratospheric volcanic aerosols) (Santer et al. 1996; Tett et al. 1999; Stott et al. 2000, 2001; Hegerl et al. 2000). … – Knutti, 2003

    At the numerator, eT, a measure of natural variability, ensures that models whose bias and deviation are not large relative to natural fluctuations would not be unjustly discounted. … – Knutti, 2007

    This type of model simulates the circulation of the atmosphere and ocean as a whole. It produces natural variability in weather and climate, on timescales from hours to centuries, similar to that observed; and it incorporates the main feedback mechanisms that are thought to have determined the climatic response to natural influences (such as variations in solar output and the occurrence of explosive volcanic activity), as well as to anthropogenic greenhouse-gas and aerosol emissions during the twentieth century. These models are expensive to run, and so only comparatively few simulations of past and future climate can be performed. … – Knutti, 2002

    Now I will go take a look for “noise”.

  16. Keith Jackson

    The authors of the Mauritsen, et al paper state that:

    “The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen. Yet, we were in the fortunate situation that the MPI-ESM-LR performed acceptably in this respect….”

    I question the validity of this statement. If we compare the mean 11-year period around 2005 with a similar period around 1878 (before the Krakatoa volcano) we find from the Climate Explorer Website that the observed HadCRUT4 global temperature increased by about 0.67 degrees C, while the MPI-ESM-LR Model predicted an increase over the same period of about 1.08 degrees C – approximately 60% more than observed. In addition, the net TOA energy imbalance for the period around 2005 was predicted by the MPI-ESM-LR Model to be more than 1 W per square meter while current ocean-heat-content measurements imply a value half that or less. Simple energy-balance estimates (making use of nominal CMIP5 aerosol forcing values) then imply an MPI-ESM-LR climate sensitivity that is 70% or more too high. That’s a big difference when attempting to project the future effects of global warming. Many of the CMIP5 GCMs appear to have similar or worse difficulties. Kind-of-sort-of getting temperature trends over the 20th century that vaguely resemble the observations is, IMHO, inadequate to justify the enormously expensive changes to the global economy that are being advocated by many. I’m concerned about the all-too-typical mindset that allowed the authors to consider that the model, “performed acceptably in this respect …”
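For readers who want to check this kind of claim, the arithmetic behind such simple energy-balance estimates is S ≈ F2x·ΔT/(ΔF − ΔQ), where ΔF is the net forcing change and ΔQ the TOA imbalance. A sketch using the numbers quoted in the comment; note the net-forcing value is my illustrative assumption, not the commenter's:

```python
# Simple energy-balance sensitivity: S ~ F2x * dT / (dF - dQ).
# dT and dQ values are those quoted in the comment above; the net
# forcing dF since ~1878 is an assumed, illustrative value.
F2X = 3.7   # W/m^2 forcing per doubling of CO2 (standard value)
DF  = 2.0   # assumed net forcing change since ~1878, W/m^2

def sensitivity(dT, dQ, dF=DF):
    """Equilibrium sensitivity (deg C) implied by warming dT (deg C)
    and top-of-atmosphere imbalance dQ (W/m^2)."""
    return F2X * dT / (dF - dQ)

s_obs   = sensitivity(dT=0.67, dQ=0.5)   # observation-based inputs
s_model = sensitivity(dT=1.08, dQ=1.0)   # MPI-ESM-LR-like inputs
```

The larger modeled warming and larger modeled imbalance compound: with these inputs the implied model sensitivity is more than double the observation-based one — the direction (if not the exact magnitude) of the discrepancy the comment describes.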

  17. http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/#comment-265753

    http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/#comment-265825

    Their models are null and void because their energy budget is falsified – because they have successfully hidden the real beam heat energy from the Sun, thermal infrared aka longwave infrared aka radiant heat, which does reach us and actually can and does heat land and oceans – in order for them to pretend the Sun has nothing to do with rising temps by attributing all increase in measurements to “rising carbon dioxide levels and backradiation/blanketing from greenhouse gases”.

    • Hopefully the next four years will see the strengthening of our woefully inadequate mental health system.

      • Steven Mosher

        hmm.

        I would not be too sure of that.

        As I explained to you, pain shifting is the preferred approach taken by businesses and institutions.

        So, you recall the options I said companies had when faced with a tax.

        option 2.
        Cost cutting through offshoring staff or RIF. See, the neat thing is when you tax something like a medical device, you get the revenue, but you lose control over how the people who give it to you (companies) go about shifting their pain to others.

        Now, you can deny facts. But you don’t actually need to. You just need to figure out a better way of taxing people to fund the things you want to fund.

        http://www.washingtontimes.com/blog/watercooler/2012/nov/8/picket-companies-plan-massive-layoffs-obamacare-be/

      • If so, beware of guys in white coats carrying nets. They’re after you Robert.

    • Uh…we have a precise measurement of exactly how much solar irradiance reaches the Earth across all wavelengths as well as how much reaches the ground at various elevations across the surface of the Earth. But this notion of “real beam heat” energy is…is…nuts.

    • The models claim that shortwave from the Sun heats the Earth, that thermal infrared direct from the Sun either doesn’t exist or can’t get through some unexplained invisible greenhouse glass like ceiling and so plays no part in heating the land and oceans.

      What they are saying, in real world terms, is that no direct heat energy reaches us from the Sun, which is thermal infrared, aka radiant heat, aka longwave infrared.

      The models claim that “the Earth is heated by shortwave from the Sun, mostly visible”, and “the only thermal infrared is in the upwelling from the heated Earth and in this downwelling from the atmosphere” – the standard AGWScienceFiction meme “shortwave in longwave out”.

      There is no heat at all from the Sun in the models – shortwave from the Sun can’t physically heat land and water and they’ve taken out the direct thermal energy of the Sun on the move to us, thermal infrared, the Sun’s real heat.

      There is no weather in the models – it takes intense heating of land and water at the equator to get us our huge equator to poles winds and weather systems. And they don’t have any heat from the Sun to effect these.

      This AGWScienceFiction Greenhouse Effect is based on a science fraud, by modelling an imagined world with its own creation of fake fisics basics, which are impossible in the real world.

      There is no Water Cycle in the models and there is no rain in their Carbon Cycle.

      There are no real gases in the models, substituted by the imaginary ideal gas without volume, attraction, weight, and not subject to gravity.

      Which means, these models have also disappeared the whole real gas atmosphere, the heavy, voluminous, fluid gas ocean of mainly nitrogen and oxygen and a variable 1-5% water, and substituted empty space.

      So they have no convection, and no sound.

      This science fraud Greenhouse Effect has no real physics basics in it at all, it’s modelled on an entirely fictional world of impossible fisics through the looking glass with Al.

      These models are gobbledegook nonsense.

    • Particular Physicist

      Hopefully the next four years will see the strengthening of our woefully inadequate mental health system.

      With a special ward where Myrrh and Robert can chat.

    • Myrrh, so IR from the sun has been “hidden”, and this means rising temperatures can then be blamed on CO2. Your implication then is that rising temperatures are instead caused by rising IR from the sun. Please point us to the data for this.

      • I gave an example of how the climate science clowns use it here:

        http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/#comment-265825


        Here’s an example of how this science fraud is used, from Skeptical Science:

        “An enhanced greenhouse effect from CO2 has been confirmed by multiple lines of empirical evidence. Satellite measurements of infrared spectra over the past 40 years observe less energy escaping to space at the wavelengths associated with CO2. Surface measurements find more downward infrared radiation warming the planet’s surface. This provides a direct, empirical causal link between CO2 and global warming.”

        The AGWSF claim, because they say “no longwave infrared reaches the Earth from the Sun/the Sun doesn’t produce any”, is that all measurements taken are therefore “downwelling from the atmosphere as a result of greenhouse gases”, when they should properly be attributed to increased direct thermal infrared from the Sun.

        Also In that post is a link to some cloudless sky measurements from Switzerland showing decadal increases in longwave infrared, as

        “Key Points
        •Significant increase of cloud-free downwelling long-wave radiation”

        What is critical to get one’s head around here, is that they have corrupted traditional physics by this simple sleight of hand of substituting the real thermal energy from the Sun to us, heat, for the Sun’s light which cannot heat up matter.

        When the simplicity of this swap around is understood it is quite mind blowing that they’ve managed it, they did this by introducing it into the general education system and by those like NASA changing from giving traditional teaching to giving this fake fisics AGWScienceFiction Greenhouse Effect meme of “shortwave in longwave out”.

        There’s another post from me in that discussion in which I’ve given an example of how NASA changed its teaching to conform to this fake fisics meme.

        Myrrh | November 9, 2012 at 8:13 am
        http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/#comment-265743

        Traditional physics of course still teaches real world, but the general public has been dumbed down in order to promote this scam.

  18. These abstracts have got to be some of the most tediously written English I have ever read. ‘Climate scientists’ live in a world of their own, where they only communicate with each other in their own special form of gobbledegook.

    I fall into the camp of: If you have something to hide, then make it as complicated as possible to understand, refuse to release the raw data, have it pal reviewed and then smear and deride anyone who dares to question your findings. Such is the current state of climate modelling.

    As for the models, I fail to understand how you can accurately model something which can best be called long term weather chaos, combined with the uncertainty of natural climate cycles. Add to that, no one yet has a clue how to accurately model the effects of clouds and their response to any rise in temperature and/or humidity.

    The perpetrators of climate models have pointlessly cost the world’s economy hundreds of billions of dollars, by peddling the scary, but totally unfounded, hypothesis of CAGW.

    I belong to the most sceptic group of all scientific groups, namely private sector geologists. Hell, what do we understand about what influenced climates of the past and what they mean in today’s world?

    OK trolls, let’s hear your whining rants and tantrums.

    • Shorter Peter Miller: “…I fail to understand…”

    • The Skeptical Warmist

      Peter,

      You said: “As for the models, I fail to understand how you can accurately model something which can best be called long term weather chaos, combined with the uncertainty of natural climate cycles.”
      _____
      You can accurately create climate models that show dynamics as we understand the physical nature of the forcings involved. Long-term “weather chaos” (whatever that term means) has nothing to do with climate.

      • Chief Hydrologist

        ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial. The large-scale climate, for instance, determines the environment for microscale (1 km or less) and mesoscale (from several kilometers to several hundred kilometers) processes that govern weather and local climate, and these small-scale processes likely have significant impacts on the evolution of the large-scale circulation (Fig. 1; derived from Meehl et al. 2001)…

        The accurate representation of this continuum of variability in numerical models is, consequently, a challenging but essential goal. Fundamental barriers to advancing weather and climate prediction on time scales from days to years, as well as longstanding systematic errors in weather and climate models, are partly attributable to our limited understanding of and capability for simulating the complex, multiscale interactions intrinsic to atmospheric, oceanic, and cryospheric fluid motions.’ Hurrell et al 2009

        ‘Atmospheric and oceanic forcings are strongest at global equilibrium scales of 10^7m and seasons to millennia. Fluid mixing and dissipation occur at microscales of 10^−3m and 10^−3s, and cloud particulate transformations happen at 10^−6m or smaller. Observed intrinsic variability is spectrally broad band across all intermediate scales. A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology. No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.’

        ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. Plausibility criteria are qualitative and loosely quantitative, because there are many relevant measures of plausibility that cannot all be specified or fit precisely.’ McWilliams 2007

        A posteriori solution behaviour? They pull it out of their arses?

      • Yes, I agree you can accurately recreate some forcings in climate models, but that leaves the forcings which are imperfectly understood, the feedback mechanisms, which are even less well understood, natural climate cycles, which are usually ignored and, of course, the impact of AGW.

        Yes, like most sceptics I believe AGW exists and that it is a mildly interesting, largely beneficial, phenomenon and should never be confused with CAGW, which is a totally fallacious theory.

        Anyhow, let’s take a quick peep at AGW. I believe there are six factors here – alarmists tend to concentrate on just one of them, they are:

        1. Land usage – possibly the biggest AGW effect, rarely discussed, but obviously has a big effect and is never included in climate models. For example, where a peasant has chopped down some rain forest and planted some crops, there will be a significant impact on the amount of energy reflected back into space – and then there is the seasonality factor to consider.

        2. Irrigation – the huge increase in irrigation over the past century has very definitely had an effect, especially locally, on humidity levels and therefore – according to classic alarmist theory – will have caused a rise in average land surface temperature. Another factor never included in models.

        3. UHI – a factor always underestimated and usually derided by alarmists, but very useful when the regular manipulation downward of bygone temperature statistics are undertaken.

        4. Soot – another factor not included in climate models – the absorption of heat by these particles, when on ice, snow and glaciers, accelerates their melting and thereby reduces the amount of radiation reflected back into space.

        5. Aerosols, such as suspended sulphate particles. Only ever used by alarmists to explain when their models’ predicted warming refuses to appear.

        6. Increasing levels of carbon dioxide – usually the only AGW factor ever considered and increasingly it looks like the impact of this is routinely exaggerated in climate models.

        Next, we have the feedback effect of clouds – and that is another story altogether. Then there is the impact of our variable star, with its variations in energy emissions – not a huge factor in the satellite era, but we don’t really know with any degree of accuracy what was happening here 50, 100 or even more years ago.

        Throw in the effects of the ever-changing ocean currents, the eccentricities of the Earth’s orbit around the sun and the fact that climate models are designed and operated by people who have a vested interest in maintaining their comfortable lifestyles (by producing alarming results which require ever more research), and you can only reach the obvious conclusion that climate models are a complete crock.

        Examples of this include Mann’s notorious Hockey Stick and even more recently Trenberth’s assertion that rising temperatures would cause less clouds – so increased evaporation and higher humidity levels would cause less clouds?!?

        Of course, if you are diligent enough you can fiddle with a multitude of factors and data to reproduce the temperature records of the recent past. However, if you choose to ignore important influences, because you don’t understand them or they are inconvenient, then those models will have no predictive power. Such is the state of climate modelling today, the result of the notorious pal review process of ‘climate science’.

        PS Perhaps, I am unduly influenced by the fact that I have a friend who writes climate model programs for ‘climate scientists’ and has tears of laughter whenever he describes what he has to do to ‘fix’ the programs to produce the results that are wanted.

      • What you don’t understand is that the CO2 forcing is so large that it dominates over all those other factors you mention.

        All I see here is a typical climate denier trying to downplay the significance of human CO2 emissions by pointing at anything and everything else and pretending it is being ignored when it isn’t.

        As for being a geologist. Well many of those turn out to be blowhards on climate who make stupid statements about climate because they wrongly think they have some insight. They aren’t climatologists. They don’t.

        The fact is the models are just fine and what they are showing is unwavering dominance by CO2 forcing, which is entirely consistent with other lines of evidence. Indeed the massive 3.7wm-2 forcing from a doubling of CO2 itself routinely dwarfs any other estimation of forcing from any other source.
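For reference, the 3.7 W/m² figure quoted here comes from the standard simplified forcing expression ΔF = 5.35·ln(C/C₀) of Myhre et al. (1998); a quick check:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing (W/m^2), Myhre et al. (1998):
    dF = 5.35 * ln(C / C0), with C0 taken as a preindustrial 278 ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling = co2_forcing(2 * 278.0)          # ~3.7 W/m^2, the figure quoted above
since_preindustrial = co2_forcing(392.0)   # at roughly 2012 concentrations
```

Note the logarithm: the quoted 3.7 W/m² is the forcing for a full doubling of CO2, while the realized CO2 forcing to date (~392 ppm) is roughly half that.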

      • lolwot | November 10, 2012 at 5:47 am |
        What you don’t understand is that the CO2 forcing is so large that it dominates over all those other factors you mention.

        The problem being that there is no way yet of measuring this, it is just an assumption plugged in to give a desired conclusion.

        What we see here is a typical alarmist trying to play up the significance of human CO2 emissions by presenting opinion as fact.

      • “The problem being that there is no way yet of measuring this, it is just an assumption plugged in to give a desired conclusion.”

        No, it’s a well founded calculation based on a host of atmospheric and lab measurements. The fact is the evidence for the domination of the CO2 forcing is immensely strong. What other forcing can you suggest that could conceivably approach a sustained 3.7wm-2 this century?

      • The fact remains that there is no known way to objectively measure the global CO2 effect. It is just assumed and implied (largely for the purposes of a given outcome, under conditions of political funding).

        As for other factors, ever heard of natural variations? If and when these are quantified, we can then get a better handle on the big picture.

      • “What other forcing can you suggest that could conceivably approach a sustained 3.7wm-2 this century?”

        lolwot, nobody’s buying that wild speculation anymore.
        http://i1159.photobucket.com/albums/p623/Oefinell/30yrLR.jpg

        Even the 30-year linear trend started dropping in ~2005 and it looks like it will be 0.0 in 2020.

      • Calculations might not be measurements, but if the calculations are based upon quantities that have been measured then they are far more substantial than some plucked-out-of-the-air assumption.

        Climate skeptics want to pretend that anything that isn’t measured is pure assumption, i.e. made up out of thin air. There’s a great story on RealClimate about people dismissing calculations in this way.

        What natural factors could conceivably produce the equivalent of a sustained 3.7wm-2 forcing this century? None are known or feasible.

      • “lolwot, nobody’s buying that wild speculation anymore.”

        3.7wm-2 forcing for a doubling of CO2 isn’t speculation. It’s based on well-founded calculation upon well-founded measurements. Of course climate deniers will deny the evidence any way they can, including pretending that solid science is “wild speculation”.

        “Even the 30-year linear trend started dropping in ~2005 and it looks like it will be 0.0 in 2020.”

        The real wild speculation appears to be your own.

      • lolwot, you write “Climate skeptics want to pretend that anything that isn’t measured is pure assumption, eg made-up out of thin air. ”

        Maybe, but that is not my position. I claim that what is required by the scientific method, is that the estimations that have been made by the proponents of CAGW need to be confirmed by actual measurements before they can be accepted as valid. That is, we need an actual measurement of total climate sensitivity from modern temperature/time data, before we can agree that the hypothesis is valid.

        This measurement is missing. There is no CO2 signal in any modern temperature/time graph. How long do we have to wait for a CO2 signal to appear, and when one does not appear, do we conclude that no CO2 signal is ever going to appear?

      • what would a CO2 signal look like?

      • lolwot

        You ask

        What would a CO2 signal look like?

        It depends on the assumed 2xCO2 climate sensitivity, of course.

        At the mean CS of 3.2C, as assumed by the IPCC GCMs, it would have shown an observed decadal warming rate of around 0.2C (as IPCC projected).

        There was no observed warming since 1998 (or 2001), so there was no actual “CO2 signal” (or possibly a “CO2 signal” corrected for a low climate sensitivity).

        Max

      • lolwot, you write “what would a CO2 signal look like?”

        I have tried to explain this before. Let me try again. From http://bit.ly/V19Im8 we can see that global temperatures have been rising at about 0.06 C per decade since around 1850. From http://www.vukcevic.talktalk.net/MidSummer-MidWinter.htm we can see that something similar has been happening since the middle of the 17th century. This is a background rise in temperature that is caused by unknown natural variations, and has nothing to do with additional CO2 in the atmosphere. There is no reason to suppose that this rise in temperature has ceased….. yet.

        If global temperatures are going to rise to unacceptable levels by the end of this century, because of CAGW, then at some point the rate of rise of global temperatures must be considerably in excess of 0.06 C per decade for a long period of time. At the end of the 20th century, there was a period of a couple of decades when the rate of rise of temperature was in excess of 0.06 C per decade. This was wrongly interpreted as being due to CAGW. We have now had a period of some 17 years of virtually no rise in temperature. This has now restored the rate of rise back to its steady value of 0.06 C per decade. So temperatures seem to be still rising at the same rate that they have done for many decades.

        What a CO2 signal would look like is a long period of time when the rate of rise of temperature was significantly greater than 0.06 C per decade; which is a sine qua non of there being a substantial rise in global temperatures by the end of the century. We are supposed to have had several decades of CAGW, and no CO2 signal has appeared.

        So I repeat my question. How long do we have to wait for a CO2 signal to appear, and when one does not appear, do we conclude that no CO2 signal is ever going to appear?
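The 0.06 C/decade background rate being argued over above is just an ordinary least-squares slope. A minimal sketch of the computation on a synthetic series (a pure 0.06 C/decade ramp, not real data):

```python
def decadal_trend(years, temps):
    """Ordinary least-squares slope of temps vs. years, in deg C per decade."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    slope = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps)) \
            / sum((y - mean_y) ** 2 for y in years)
    return slope * 10.0

# Synthetic series rising at exactly 0.06 C/decade since 1850
years = list(range(1850, 2013))
temps = [0.006 * (y - 1850) for y in years]
trend = decadal_trend(years, temps)   # recovers 0.06
```

On real data the contested step is the one in this exchange: whether a late-century slope significantly above this long-run value, sustained over a long enough window, would count as the "CO2 signal".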

      • Lolwot : What natural factors could conceivably produce the equivalent of a sustained 3.7wm-2 forcing this century? None are known or feasible.

        Restating the question so that you are no longer blithely assuming what you are required to prove :
        What natural factors could conceivably produce warming in the latter half of the last century?

        Answer: the same ones that produced an equivalent warming in the first half of the last century, whatever they were, acknowledged even by warmist advocacy groups such as the IPCC to be down to factors other than man.

        “None are known or feasible.” How can you say a factor we are ignorant of, cannot feasibly do X ??

      • lolwot | November 10, 2012 at 7:13 am |
        what would a CO2 signal look like?

        How about a measured ongoing positive radiative flux imbalance at the top of the atmosphere, that was clearly predictable from CO2 increases ?

        (The former being something we don’t yet have the technology for, as I understand it).

  19. Steven Mosher

    “JC comment: I first became aware of the problem two years ago, when Tim Palmer presented (at Fall AGU) a plot of absolute global temperatures simulated by the CMIP3 climate models for the 20th century. The impact of this on the model thermodynamics (notably the sea ice) is profound; model tuning then works around this problem, so that n wrongs make a ‘right’, almost certainly torquing feedbacks in unfortunate ways.”

    That chart was pretty devastating the first time you see it.

  20. It seems to be a difficult estimate to make. I’m also in correspondence with Dr. Karen Smith who specializes in SSW events, and she’s working on it as well.

    For those who are not familiar with the issue, during many NH (and some SH) winters, there are large sudden stratospheric warming events that essentially, through a variety of means, transfer a large amount of energy from the tropics, northward and upward, into the stratosphere and then even up into the mesosphere and out into space. The polar vortex is disrupted by these events, either being greatly offset or actually split, often bringing very cold outbreaks to lower latitudes (this happened in both 2009 and 2012).

    What we are trying to figure out is some approximation as to how much energy is actually released from the Earth’s climate system during one of these events. There is some data on temperature increases and EP Flux. Could these be enough to at least get a ballpark figure on energy released?

    • There can also be some longer-term cooling associated with the SSW events. That was part of the Solomon paper I mentioned, since the 1998 El Nino was the mother of all SSW events and strat water vapor was impacted.

      • The Skeptical Warmist (aka R. Gates)

        It would make sense that there might be some short-term cooling associated with SSW events, but given that they occur every few years, I doubt if there can be long-term cooling. If anything, considering that warmer tropical and subtropical events precondition planetary waves and the MJO for the SSW event, it would seem more likely that SSW are reactions to warmer conditions, releasing energy from the system much like a bubble rising up and popping in a boiling pot. These do cool the system in the short-term of course, perhaps even creating the conditions for a La Nina to begin, but with a constant source of forcing, they won’t create any long-term cooling.

      • I think it would depend where they happen. The high latitude events are easy to spot, but the same conditions in the mid and low latitudes, wall energy imbalances, would cause more entrainment of water vapor.

        http://www.agu.org/pubs/crossref/2012/2012JD017719.shtml

        I caught some grief over this before, but I am pretty sure the events are related.

  23. R Gates wrote: “Quinn the Denialist Eskimo said:

    “the absence of “global” warming since 1997…”

    Really? September 2012 had the warmest global temperatures for any September ever recorded (tied with 2005, which is also after 1997), and yet we have an absence of “global” warming?

    Is this a meme that is just stuck in your neurons so that you can’t shake free of the actual reality of the data before you? Reminds me of reactions by people a few nights ago…”

    R Gates: It seems you’re the “denialist.” Look within and ask yourself why you found it necessary to obviously misinterpret what was said. “Warmth.” “Warming.” Two different words with distinctly different meanings. Unless English is not your first language, you might profitably look at your own lack of honesty.

    • The Skeptical Warmist (aka R. Gates)

      pokerguy,

      There was no misinterpretation. He said there was an absence of “global” warming since 1997, and I gave him two very specific examples (2012 and 2005) in which we saw that there was in fact warming (not just warmth) over 1997.

      Given the anemic solar cycle and the cool phase of the PDO and the increase in aerosols since 1997, how do skeptics of AGW account for the warmth this September, or more importantly, the fact that 2011 was the warmest La Nina year (typically cooler years) on record? Oh yeah, “natural variability”…nevermind.

      • Chief Hydrologist

        Ocean warmth – which buffers atmospheric heat content over what may be seasons to decades – being at the peak of the solar cycle, a weak El Nino over most of the year, minor increases in sulphur which is local and mixed with black carbon – http://www.nature.com/ngeo/journal/v3/n8/abs/ngeo918.html

        Oh yeah – satellite data that shows cooling in the LW and warming in the SW.

        Now you are getting really boring.

      • R. Gates

        There was an absence of a warming trend since the start of the new century.

        There may have been days or even months that were warmer than before, but the trend has been one of slight cooling.

        Them’s the facts.

        Max

      • The Skeptical Warmist

        So you decided to add the word “trend” and then blame me for not reacting to the word “trend” that was not in the original post.

        And then I’m being called a “denialist”.

        Interesting logic.

      • That’s right Gates, just you go on and ignore pesky natural variability whenever your required conclusion needs you to. Just put it in scare quotes, that’ll define it out of existence, for sure.

      • In a noisy series you can almost always find a series of negative trends regardless of the long term rise in temperatures.
        Likewise, in a noisy series you can almost always find a series of positive trends regardless of the lack of continued rise in temperatures.

      • lolwot | November 9, 2012 at 7:45 pm | Reply
        There has been warming since 1997

        Even though the temperature hasn’t increased. Life in the big rock candy alarmosphere mountain is great, isn’t it?

      • The point is that a trend since 1997 doesn’t tell you whether temperature has increased or not. It’s the wrong method. It is susceptible to telling you temperature hasn’t increased even when it has. This is easy to show using some dummy data that we know is continuously rising.
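That dummy-data demonstration can be sketched in a few lines (the rise rate, noise level and window length below are assumptions chosen for illustration, not any real climate series):

```python
import numpy as np

rng = np.random.default_rng(0)

def negative_short_trend_fraction(n_years=100, window=15, n_runs=2000):
    """Fraction of runs in which the OLS trend over the last `window` years
    is negative, even though every series rises continuously on average."""
    t = np.arange(n_years)
    negatives = 0
    for _ in range(n_runs):
        # continuous rise of 0.015 deg/yr plus white noise (sd 0.25 deg)
        series = 0.015 * t + rng.normal(0.0, 0.25, n_years)
        slope = np.polyfit(np.arange(window), series[-window:], 1)[0]
        if slope < 0:
            negatives += 1
    return negatives / n_runs

frac = negative_short_trend_fraction()
print(f"short windows showing 'cooling' despite a continuous rise: {frac:.0%}")
```

With these assumed numbers a noticeable fraction of runs shows a negative 15-year trend, which is the sense in which a short trend “is susceptible to telling you temperature hasn’t increased even when it has.”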

      • OK, I get it now, lolwot … looking at the temperature trend is a bad way of assessing the temperature trend.

      • Would you compare last years temperatures to this years and conclude that because it was warmer last year therefore warming has stopped?

        I hope not.

        The same mistake can still be made with longer periods.

  22. The validity of parameterization is not shown just because it is useful—i.e., because we have no other choice. Our pitiful shortcomings are not proof of our ability to accurately simulate real-world climate conditions. We know there would not be enough computing power on Earth even if we actually knew all of the variables and how they are related.

    “Independent Summary for Policymakers – IPCC Fourth Assessment Report” says, “[i]t is not formally known if today’s climate models are a suitable basis for projecting climate.” Why? For starters, what assurance do we have that the “tunable parameters” do not exceed “the number of degrees of freedom in the observational testing scheme for the GCMs”? (See pg. 40)

    The UN-IPCC should simply admit that the grid blocks that are used in the construction of GCMs are too large to accurately simulate conditions such as thunderstorms, hurricanes and other natural processes that transfer huge amounts of energy from the surface of the Earth to the stratosphere. Without that simple admission of ignorance, their incompetence must be presumed.
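The degrees-of-freedom worry above can be illustrated with a toy curve fit (nothing here comes from any GCM; the numbers are invented): give the “model” as many free parameters as there are observations and it matches them perfectly in-sample while extrapolating badly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Eight "observations" of a process that is really just a noisy linear rise
x = np.linspace(0.0, 1.0, 8)
y = x + rng.normal(0.0, 0.05, x.size)

loose = np.polyfit(x, y, 7)   # 8 free parameters for 8 points: perfect fit
tight = np.polyfit(x, y, 1)   # 2 free parameters

x_future = 1.5                # "project" beyond the tuning period
print("degree-7 extrapolation:", np.polyval(loose, x_future))
print("degree-1 extrapolation:", np.polyval(tight, x_future))
```

The degree-7 fit reproduces the tuning data essentially exactly, yet its out-of-sample value is arbitrary; in-sample skill says nothing about projection skill once the tunable-parameter count catches up with the observational degrees of freedom.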

    • Wagathon

      You mentioned grid block size and poor parameterization in the current GCMs.

      A 2006 study by Wyant et al. uses a cloud-resolving model with much finer resolution. This model finds weaker climate sensitivity than the conventional models, with overall cloud feedbacks that are negative with warming, rather than positive.
      ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf

      “[Abstract]
      [1] The climate sensitivity of an atmospheric GCM that uses a cloud-resolving model as a convective superparameterization is analyzed by comparing simulations with specified climatological sea surface temperature (SST) and with the SST increased by 2 K. The model has weaker climate sensitivity than most GCMs, but comparable climate sensitivity to recent aqua-planet simulations of a global cloud-resolving model. The weak sensitivity is primarily due to an increase in low cloud fraction and liquid water in tropical regions of moderate subsidence as well as substantial increases in high-latitude cloud fraction.”

      “[11] The global annual mean changes in shortwave cloud forcing (SWCF) and longwave cloud forcing (LWCF) and net cloud forcing for SP-CAM are -1.94 W m-2, 0.17 W m-2, and -1.77 W m-2, respectively.”

      And

      “Shortwave cloud forcing becomes more negative at all latitudes, except for narrow bands near 40N and 40S, indicating more cloud cover and/or thicker clouds at most latitudes. The change in zonal-mean longwave cloud forcing is relatively small and negative in the tropics and stronger and positive poleward of 40N and 40S, where it partly offsets the shortwave cloud forcing change. Thus the net cloud forcing change is negative at most latitudes, and it is of comparable size in the tropics and the extra-tropics.”

      “[Discussion and Conclusions]
      [20] We have presented the first climate sensitivity tests using superparameterization embedded within a conventional GCM. The overall climate sensitivity of SP-CAM for the Cess-type perturbation is relatively weak compared to other GCMs, but fairly similar to the climate sensitivity derived from limited duration aqua-planet simulations of the NICAM global CRM.”

      “[21] This weak sensitivity of SP-CAM is associated with negative net cloud forcing changes in both the tropics and the extra-tropics. In the tropics these are primarily due to increases in low cloud fraction and condensate in regions of significant mean mid-tropospheric subsidence. In the extratropics these are caused by a general increase in cloud fraction in a broad range of heights, and a strong increase of cloud liquid water path in the lower troposphere.”

      “[22] SP-CAM’s major advantage over conventional GCMs is the ability to resolve cloud motions at a much finer scale, allowing deep convective processes and cloud fraction to be represented more naturally than standard GCM parameterizations allow. In addition to the advantages over CAM documented in K05, comparisons with other US GCMs have shown that the SP-CAM has more realistic cloud properties. Comparisons using an ISCCP-simulator show more realistic fractional cloudiness at most heights and optical thicknesses than the GCMs compared by Wyant et al. [2006]. SP-CAM reduces some of the common shortcomings of GCM clouds such as under-prediction of middle-level cloud fraction and thin cloud fraction, though they are still present. These weaknesses are probably related to the coarseness of vertical resolution common to all these models. Also the SP-CAM generally under-predicts cloud fraction in subtropical stratocumulus regions (K05).”

      “[23] Bony and DuFresne [2005] point out that the high degree of dependence of simulated climate feedback on low cloud response is a common feature of climate models, and conventional models diverge greatly in their low cloud responses. Clearly the representation of ubiquitous low clouds and small scale convection is still a weak point with current vertical resolution of SP-CAM, and low clouds produce a dominant part of the net global cloud forcing change predicted by the model. Thus the overall climate sensitivities produced by the model must be regarded with caution. A next step with SP-CAM is to couple it to a slab ocean model so that cloud responses in more realistic climate change scenarios can be evaluated.”


      And a related report by Bretherton:
      http://www.usclivar.org/Newsletter/VariationsV4N1/BrethertonCPT.pdf

      “The world’s first superparameterization climate sensitivity results show strong negative cloud feedbacks driven by enhancement of boundary layer clouds in a warmer climate.”

      And:

      “The CAM-SP shows strongly negative net cloud feedback in both the tropics and in the extratropics, resulting in a global climate sensitivity of only 0.41 K/(W m-2), at the low end of traditional AGCMs (e.g. Cess et al. 1996), but in accord with an analysis of 30-day SST/SST+2K climatologies from a global aquaplanet CRM run on the Earth Simulator (Miura et al. 2005). The conventional AGCMs differ greatly from each other but all have less negative net cloud forcings and correspondingly larger climate sensitivities than the superparameterization.”

      It appears that superparameterization gives a better picture of how clouds really behave than the conventional GCMs do.

      Max

  23. Chief Hydrologist

    ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t), where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)

    Any model must be evaluated for ‘irreducible imprecision’ (http://www.pnas.org/content/104/21/8709.full) over a systematically designed model family – and the output expressed as a probability density function.

    Until that happens the use of models is based on ignorance and sham – probably in equal measure.
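A toy version of what Palmer is asking for (nothing here resembles a real GCM; the forcing value and the spread of the feedback parameter are assumptions for illustration) is to run a perturbed-parameter family and report the output as a density rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(1)

# Perturbed-parameter toy: equilibrium warming dT = F / lambda for a CO2
# doubling, with the net feedback parameter lambda treated as uncertain.
F_2x = 3.7                                  # W/m^2, canonical 2xCO2 forcing
lam = rng.normal(1.25, 0.35, 10_000)        # W/m^2/K, assumed spread
lam = lam[lam > 0.3]                        # drop unphysical members
dT = F_2x / lam                             # K, one estimate per member

# Report the family as a probability density, per Palmer's rho(X, t)
hist, edges = np.histogram(dT, bins=40, density=True)
print(f"median {np.median(dT):.1f} K, "
      f"5-95% range {np.percentile(dT, 5):.1f}-{np.percentile(dT, 95):.1f} K")
```

Because dT depends nonlinearly on lambda, the resulting density is skewed toward high warming even though lambda itself is symmetric, which is exactly why a single “best” run understates the structure of the uncertainty.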

  24. “Climate change ‘likely to be more severe than some models predict’

    Scientists analysing climate models warn we should expect high temperature rises – meaning more extreme weather, sooner.
    Climate change is likely to be more severe than some models have implied, according to a new study which ratchets up the possible temperature rises and subsequent climatic impacts.

    The analysis by the US National Centre for Atmospheric Research (NCAR) found that climate model projections showing a greater rise in global temperature were likely to be more accurate than those showing a smaller rise. This means not only a higher level of warming, but also that the resulting problems – including floods, droughts, sea level rise and fiercer storms and other extreme weather – would be correspondingly more severe and would come sooner than expected.

    Scientists at the NCAR published their study on Thursday in the leading peer-reviewed journal Science. It is based on an analysis of how well computer models estimating the future climate reproduce the humidity in the tropics and subtropics that has been observed in recent years. They found that the most accurate models were most likely to best reproduce cloud cover, which is a major influence on warming. These models were also those that showed the highest global temperature rises in future, if emissions of greenhouse gases continue to increase.

    John Fasullo, one of the researchers, said: “There is a striking relationship between how well climate models simulate relative humidity in key areas and how much warming they show in response to increasing carbon dioxide. Given how fundamental these processes are to clouds and the overall global climate, our findings indicate that warming is likely to be on the high side of current projections.”

    Extreme weather has been much in evidence around the globe this year, with superstorm Sandy’s devastating impact on New York the most recent example. There has also been drought across much of the US’s grain-growing area, and problems with the Indian monsoon. In the UK, one of the worst droughts on record gave way to the wettest spring recorded, damaging crop yields and pushing up food prices.”

    • Chief Hydrologist

      Like I said – ignorance and sham – probably in equal measure.

    • Bob

      This is just another version of “it’s worse than we thought…”

      Yawn!

      Max

    • Model output has had no agreement with actual data since 1998. Model output for the future should be just as good. NOT good at all.

    • Bob,

      So models that best match up with one parameter must be accurate and therefore all of the other parameters they predict must be accurate as well?

      That does not exactly sound like a sound foundation.

  25. Judith,

    This part is (nearly) certainly true –> ” I suspect that there are more fundamental issues at play in coupling of two nonlinear, chaotic fluids.”

  26. I can’t get past the circular logic of producing sensitivity values from models, and then having the output of the models depend on the value of the sensitivity. If feedback to CO2 forcing were the only unknown, then estimating it from a model would make sense. With multiple sources of uncertainty, ‘tuning’ comes across as cheating.

  27. Mark B

    With multiple sources of uncertainty, ‘tuning’ comes across as cheating.

    Egad!

    How could that be?

    Max

    PS “Circular logic” is not new here. Check out the logic in the famous (Hansen et al., 2005) hidden “in the pipeline” paper.

    Ring around the rosies
    A pocket full of posies…

  28. “Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper ”

    It is unfortunately true that you can fit almost any climate model to any period by fiddling with parameters. There is only one way to validate a model of high complexity like climate: validate each process separately, and only put the complete model together after you have done that. You must have confidence in all the parts before you can have confidence in the whole.

    One part of the climate models that needs much more attention, in the spectral components of the models, is the vibration modes of the CO2 molecule. This should reveal the extraordinary power and limits of that molecule to absorb and retransmit energy. That the IPCC refused to look at what happened in 1940 is now history, but today’s modellers need to look very closely at that time and make sure that their models faithfully portray it. They should know at exactly what temperatures a photon of long-wave radiation is emitted and absorbed by CO2. See my website above.

  29. “Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased”

    Judith, a question: this has bothered me ever since I read on Lucia’s site some time ago that many of the models run hot or cold by whole degrees K yet exhibit the same anomalies as the actual temperature record. How could that be thermodynamically possible? Does it comply with the second law of thermodynamics for a “model atmosphere” to exhibit the same temperature change in response to a given forcing, such as a given increase in atmospheric CO2, while having an absolute temperature 2 K above or below the actual atmospheric temperature?
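One way to see why a 2 K absolute bias is not thermodynamically neutral is the Stefan–Boltzmann law (the 255 K effective emission temperature below is a textbook round number, not model output):

```python
# Emitted flux goes as sigma*T^4, so a biased base state radiates a
# noticeably different flux even if its anomalies track observations.
sigma = 5.67e-8              # W/m^2/K^4, Stefan-Boltzmann constant
T_eff = 255.0                # K, rough effective emission temperature
for bias in (-2.0, 0.0, 2.0):
    flux = sigma * (T_eff + bias) ** 4
    print(f"T = {T_eff + bias:.0f} K -> OLR = {flux:.1f} W/m^2")
```

A 2 K cold or warm bias shifts the emitted flux by roughly 7–8 W/m², several times the forcings under study, so a biased model must compensate elsewhere to balance its budget, which is the tuning concern raised in the post above.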

  30. Berényi Péter

    The fundamental issue with computational climate models is an epistemological one. Fitting multiple models, and computational ones of high Kolmogorov complexity at that, to a single run of a unique instance is not science, never was and never will be. The very paradigm of climate modelling, as it has been practiced over the last several decades, is flawed and should be abandoned fast.

    The proper approach is to seek a simple theory, that fits many experimental runs of multiple physical instances, but GCMs are as far away from this requirement, as anything ever can get.

    Therefore it should be realized, there is no such thing as “climate science” as an autonomous field, independent of the general and as yet unresolved problems of physics.

    Non-equilibrium thermodynamics of complex systems (with a vast number of non-linearly coupled degrees of freedom) in the presence of significant radiant heat is one of the few areas of semi-classical physics, where little progress is seen, basically because of difficulties around proper lab models of such systems. That is, we still do not understand on a general level what’s going on.

    But terrestrial climate is just one example of such systems. Why would anyone in her right mind expect to understand it better than the general case?

    Go back to the lab, guys, and do actual experiments on real physical objects. Not on a simulacrum of Earth of course, because that’s impossible. Study other objects, filled with semi-transparent fluids of complex chemical composition, on a rotating table to induce as much turbulence as possible. Send a vigorous flow of free energy through it with a high rate of entropy production; isolate it from its environment in all respects except radiative coupling. Put it into a vacuum chamber whose walls are kept at a low temperature, by liquid nitrogen perhaps. Have its effective temperature as high as permitted by construction materials (several hundred degrees centigrade at least). It’s even better if some component of the fluid has a phase transition close to the operating temperature of the device.

    As soon as such a system is understood adequately, that is, once you are in a position to construct a computational model of it, based on full theoretical insight, that can predict (not project!) its behavior reliably in multiple experimental runs, even if it is perturbed in any number of ways, notably by changing the optical depth of the fluid filling it in various spectral bands, then, and only then, you can return to climate.

    That’s the way science is done, not the other way around.

    Please note that this requirement is not applicable to collecting adequate climate data. That’s a must, because later on, even with more insight, measurements missed in the past would still be missing, forever.

    • You have no idea how much I agree with you.
      As I have asked before; what would the ‘average’ temperature of an Earth be that was rotating at either twice or half present speed?

    • Rutherford famously declared that ‘if your experiment depends on statistics, you should have thought of a better experiment’. I’ve come to the conclusion that the only proper purpose of climate models is to aid in the designing of such an experiment – not, as at present, as experiments unto themselves.

    • Therefore it should be realized, there is no such thing as “climate science” as an autonomous field, independent of the general and as yet unresolved problems of physics.

      Yes there is, and “climate science” is entirely independent of the rest of science. It is a science fraud in its entirety.

      It has mangled real physics basics to create a completely imaginary world.

      When will people here wake up to that fact and stop arguing piecemeal about it? These arguments need to be put into the correct context. The whole package of the Greenhouse Effect is built up of fictional fisics basics.

      Their models are of a completely fictional world.

      These models are based on a world which has no atmosphere. They don’t know what fluid dynamics is because they don’t have the heavy fluid subject to gravity voluminous real gas atmosphere of the real world around us – they have empty space with the imaginary ideal gas which has no properties, no volume, no weight, no attraction. That’s why all their arguments are about “radiation from the surface direct to empty space”. They have no convection because they have nothing to convect.

      They have “lapse” rates instead of heat transfer by convection, impossible anyway in their atmosphere because they “have empty space with no gravity acting on their hard dots of nothing molecules zipping through it at great speeds under their own molecular momentum and so thoroughly mixing” – that’s why they claim their “idealgascarbondioxide supermolecule can accumulate for hundreds and even thousands of years” – it has no volume, no weight, no attraction, they have nothing capable of being subject to gravity.. Their atmosphere doesn’t have gases buoyant in air – because they have no air. Put in however many exclamation marks it takes for you, generic, to concentrate on what this means.

      They don’t understand they have no sound in their world.

      They don’t know what real molecules of gas are. How then can they do any experiments on this?

      • Myrrh
        Oh do wake up, your self-assured hand-waving cuts no ice here. Don’t you have any actual arguments (clear and brief) ? You are as blinkered as Lolwot and TobaccoReason and other credulous truebelievers here.

  31. ALL climate models are BS. ‘Nuff said.

    • The Skeptical Warmist

      That pretty much sums up many people’s understanding of climate models. If you try to use a hammer as a screwdriver, and keep using it as a screwdriver, you’ll probably eventually give up and say the hammer is BS.

      • So, what are you saying? Climatists are screwed up?

      • The Skeptical Warmist

        What is a “climatist”?

        Climate models are not so good at saying what will happen over the short-term, but are getting better at saying why things happen over the long-term.

      • Chief Hydrologist

        It is nonsense to claim either that the spread between models is getting any less or that discrete solutions from any particular model mean much at all.

  32. You know, I predicted that the Dallas Cowboys would lose their last game because their running back, DeMarco Murray, was injured. Turns out the Cowboys did lose their last game and DeMarco didn’t play. My prediction was right!

    Can I get my PhD in Climatology now?

  33. …Catch a falling star and put it in your pocket
    Never let it fade away
    Catch a falling star and put it in your pocket
    Save it for a rainy day…

    The only reason the selected ‘parameters’ are used is simply to make GCMs agree with empirical data. The resulting GCMs we get using this method cannot then be used as evidence of reality, as if we have captured nature in a bottle. Does anyone believe we can use such GCMs to foretell the future? If so, there is an oracle of Delphi that knows your future.

    • The only reason the `parameters’ that are selected are used is simply to make GCMs agree with empirical data. The resulting GCMs we get using this method cannot then be used as evidence of reality as if we have captured nature in a bottle

      Why not? Surely a GCM is just a theory like any other, and, like any other theory, if it fits current facts (in this case, by correct hindcasting), then we do indeed have grounds for thinking we have captured nature.

  34. On last night’s Horizon TV program ‘Global Weirding’ in the UK, Kerry Emanuel told us that the following are true for certain:
    (1) hurricanes are up
    (2) the oceans are warming.

    He did not explain how, if the mechanism of ocean warming is atmospheric warming (due to increased CO2) resulting in slower cooling of the oceans to the atmosphere, the oceans have been warming even though for the last 16 years the atmosphere has not warmed at all. Which would mean that CO2 is not to blame.

    Perhaps off-camera this was discussed, but being the BBC this sort of heretic objectivity would obviously not be permitted.

    • Whereas the reality is the opposite. Hurricanes are down and the oceans are not warming. Typical dishonesty from the consensus mob.

      After hurricane Katrina, a cat-3 hurricane when it landed, all the pseudo-experts predicted worse to come. But what followed was incredible calm. Now another lowball storm (not even hurricane strength) causes major damage purely because of where it lands rather than due to its power, and the shamans repeat the same mistake. I’ll stick with facts rather than false prophets.

    • I did wonder about that. Just another BBC Climate Farce I guess.

      On the subject of which, the hopelessly alarmist BBC – a legally privileged recipient of tax funds – has resisted revealing the identities of the “advisors” it selected to give it alarmist arguments, on the grounds that it is a “private” entity. And the mostly unashamedly alarmist judges upheld the decision. A Judicial Climate Farce.

  35.  
    This new paper is far more on track about what we should be looking at regarding climate models and predictions …

    http://www.actuaries.org/HongKong2012/Papers/WBR9_Walker.pdf

    Simply excellent! All should read.

     

  36. Over at Bishop Hill we learn that at a joint meeting of the Royal Meteorological Society and Royal Geographical Society on Communicating Climate Science, held at Imperial College, London, it was noted that public trust in climate scientists is down to 38%, lower than for any other discipline.

    What a surprise – lack of remorse or soul-searching and corrective action over Climategate, playing along with the IPCC’s blatant alarmist advocacy, ongoing resistance to FOI, pal-review and the refusal of journals to insist on access to datasets …

    I guess they think if they keep up the bluff long enough, the public will eventually be lulled into hoodwinkness (c).

  37. Mosher –

    “JC comment: I first became aware of the problem two years ago, when Tim Palmer presented (at Fall AGU) a plot of absolute global temperatures simulated by the CMIP3 climate models for the 20th century. The impact of this on the model thermodynamics (notably the sea ice) is profound; model tuning then works around this problem, so that n wrongs make a ‘right’, almost certainly torquing feedbacks in unfortunate ways.”

    Mosh – “That chart was pretty devastating the first time you see it.”
    Now anyone can see this type of graph – see “Observing and Modeling Earth’s Energy Flows” by Stevens and Schwartz, Surveys in Geophysics, vol. 33, pages 779–816, available open access at http://link.springer.com/article/10.1007/s10712-012-9184-0. The graph is on page 806.

  38. Models.

    It seems to me that if the underlying basic assumption of any of these models is that an extra 0.8 W/m^2 is entering the system beyond what is leaving;

    Then, no matter how many hoops the program jumps through before it spits out an answer, surely it is inevitable that the answer will show that the system must warm?!
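That intuition can be made concrete with a zero-dimensional energy balance toy (every number below is an assumption for illustration, not taken from any model): an imposed positive imbalance warms the system until the restoring term cancels it, so warming is indeed built into the premise.

```python
# Zero-dimensional energy balance: C dT/dt = F - lambda*T
F = 0.8            # W/m^2, the assumed imbalance in question
lam = 1.2          # W/m^2/K, assumed net restoring feedback
C = 3.0e8          # J/m^2/K, roughly a 70 m ocean mixed layer
dt = 86400.0 * 30  # one-month time step, in seconds

T = 0.0            # temperature anomaly, K
for _ in range(12 * 500):            # integrate 500 years, explicit Euler
    T += dt * (F - lam * T) / C
print(f"equilibrium anomaly ~ {T:.2f} K (analytic F/lambda = {F/lam:.2f} K)")
```

Given a positive F, the only question such a setup can answer is how fast and how much it warms (set by C and lambda), never whether it warms, which is the commenter’s point.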

    • From the Mauritsen one “…known state of the Earth’s climate system…observables, such as the radiation balance at the top of the atmosphere”

      But is it an established, measured fact, that the radiation (im)balance at TOA is positive, ie more heat going into the earth system as a whole, than is coming out ?

      And if so, is this imbalance changing in line with changing CO2 levels, as the AGW theory must necessarily predict ?

      • Montalbano | November 10, 2012 at 2:11 am

        “…..But is it an established, measured fact, that the radiation (im)balance at TOA is positive, ie more heat going into the earth system as a whole, than is coming out ? …”

        Nope: it is an assumption.

        With CERES we have a “known” flux imbalance (from modelling) of 0.85 W m−2; however, the instruments measure a 6.5 W m−2 radiation flux imbalance.

        Some “adjustment” is required.

        Loeb et al. 2009, ‘Toward Optimal Closure of the Earth’s Top-of-Atmosphere Radiation Budget’, J. Climate, 22, 748–766

        So my case is that the models are all simply “self confirming” … circular logic.
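For readers curious what such an “adjustment” looks like mechanically, here is a simplified sketch of a Loeb-style constrained closure (the component fluxes and uncertainties are invented to reproduce the 6.5 vs 0.85 W m−2 figures above; the real paper adjusts many more quantities):

```python
import numpy as np

# Component fluxes whose raw sum gives an implausible +6.5 W/m^2 imbalance
fluxes = np.array([341.3, -99.0, -235.8])  # solar in, SW out, LW out (W/m^2)
sigma = np.array([1.3, 2.0, 3.7])          # assumed 1-sigma uncertainties
target = 0.85                              # imposed net imbalance, W/m^2

# Minimise sum((adj_i / sigma_i)^2) subject to sum(adj_i) closing the gap;
# the Lagrange-multiplier solution spreads the correction by variance.
adj = (target - fluxes.sum()) * sigma**2 / (sigma**2).sum()
print("adjustments:", adj.round(2), "-> adjusted net:", (fluxes + adj).sum())
```

Whether one reads this as sensible error allocation within instrument uncertainty or as the circular logic markx describes is precisely the dispute in this sub-thread.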

      • Trenberth mentions it here:
        http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/EnergyDiagnostics09final2.pdf

        “The Clouds and the Earth’s Radiant Energy System (CERES) measurements from March 2000 to 2005 were used at top of atmosphere (TOA) but adjusted to an estimated imbalance from the enhanced greenhouse effect of 0.9 ± 0.5 W m−2 (with 90% confidence limits)”

        Well apart from the idiocy of applying 90% confidence limits to guesswork the document should be a reference text in how to fool yourself with circular logic:

        First estimate what you should see, adjust measurements to fit your assumption, and write in an IPCC report that “warming is unequivocal”, based on the period before the real warming stopped. Next, reference your own assumption and proclaim it as fact. Then, since you know the warming is really happening despite the instruments telling you otherwise, but you can’t explain where it is, it must be due to an entirely unphysical process of heating the ocean without first heating its surface.

        Of course Trenberth doesn’t stop there. While he correctly complains that you cannot just say “natural variation is the cause” because no physical mechanism is described, he holds no such qualms about connecting random weather events to a global warming (that he cannot even find) without even a smidgeon of physical theory to explain the connection – just an apparently magical extra 4% water vapour from the same instruments he proclaims not to trust.

        How does an apparently intelligent man become so obtuse?

      • lolwot | November 10, 2012 at 6:04 am
        “……Yet Trenberth has since turned out right….. http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content2000m.png…”

        Not necessarily – the diagram is from Levitus et al. 2012.

        You do realize that “terrifying heat content” curve translates to:

        The upper 2000 meter depths of the ocean increasing in temperature by 0.09 degrees centigrade? Over a period of 55 years?
        With initial measurements taken by mercury reversing thermometers, graduated in tenths of a degree, on a rope?

        I hope you are satisfied with all the statistics applied to tell us that is significant; I am not quite so confident.

        “…..Another case of trying to shoot the messenger?…”
        Now, what does that mean? If someone comes up with a statement I disagree with, am I not supposed to mention their name? No one is aiming at any messengers, just disputing their “modeled and adjusted” ….um…”facts”.

        Or, is this just part of the “grab-bag” of counter arguments you dig into when you are bereft of a logical reply?

      • Chief Hydrologist

        The answer to the question is no – both LW and SW change with clouds especially.

        http://s1114.beta.photobucket.com/user/Chief_Hydrologist/media/CERES-BAMS-2008-with-trend-lines1.gif.html?sort=3&o=60

        The graphs are outgoing IR, reflected SW and net – by convention, warming is up.

        There is not much imbalance at all. The atmosphere is warmer and molecules emit more IR (see Stefan–Boltzmann) to restore outgoing IR.
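        The “warmer air emits more IR” point is just the Stefan–Boltzmann law; a minimal sketch of the no-feedback (Planck) response, assuming the conventional effective emission temperature of about 255 K (an assumption, not a figure from this thread):

```python
# No-feedback (Planck) response from F = sigma * T^4, i.e. dF/dT = 4*sigma*T^3.
# T = 255 K is the conventional effective emission temperature assumption,
# not a number taken from this thread.
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
T_EFF = 255.0      # K, assumed effective emission temperature

planck_response = 4.0 * SIGMA * T_EFF ** 3   # W m^-2 per K of warming
print(f"Extra outgoing longwave: {planck_response:.2f} W/m^2 per K")
```

        This gives roughly 3.8 W m−2 of extra outgoing longwave per kelvin of warming, which is why a warmed atmosphere tends to restore the outgoing IR.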

      • Chief Hydrologist

        Do I need to say that the graphs are anomalies – which are much more accurate than absolute values?

    • markx | November 10, 2012 at 2:19 am

      Sorry, a bit of novice confusion here.

      You say it is only a modelled assumption that there is more heat going in at TOA than going out, yet the CERES instruments show there is indeed an imbalance … ?

      • The CERES instruments are adjusted to reflect an estimated imbalance. See my reply above. This is a common “trick” in climate science. The art of wrapping your assumptions up within data adjustments or mathematical frameworks like models to hide the fact that they are just mere assumptions. Even then they still can’t get the adjusted data and biased models to agree without further fudging. More art really than science!

      • Handel | November 10, 2012 at 3:42 am |

        Sorry, a bit of novice confusion here…..You say it is only a modelled assumption that there is more heat going in at TOA than going out, yet the CERES instruments show there is indeed an imbalance … ?

        Correct:
        Modeled is 0.85 W m−2,
        Measured is 6.50 W m−2

        That degree of absolute precision may impress and reassure some.
        But certainly does nothing for me.

      • markx,
        So you’re saying the measured radiative imbalance at TOA, is ~8 times bigger than the modelled one? Meaning the models radically understate the heat gain to the earth system?

      • Handel | November 10, 2012 at 6:14 pm |
        “…..So you’re saying the measured radiative imbalance at TOA, is ~8 times bigger than the modelled one? Meaning the models radically understate the heat gain to the earth system?…”

        A very fine question indeed. Although it has never occurred to me, or indeed to “97% of climate scientists” (this figure may also need adjustment) to think of the problem that way.

        I think it means that no matter how they calculate and look and theorize, there is absolutely no way they can account for a total of 6.50 W m−2 of energy coming into the system.

        The very best they can extort from the modelling is 0.8 W m−2.

        Even that has been difficult to find somewhere in the system using both modelling and available records… (remember Trenberth’s “missing heat, and it is a travesty we can’t find it…”?)

        Therefore the conclusion they draw at this stage is that the CERES instrument is in error in its measurement of TOA radiative flux by a factor of about 8 and thus the measure requires some adjustment.

      • I guess they’ll need to keep refining CERES until it agrees with the models.

      • Particular Physicist

        So CERES suggests there is eight times as much missing heat for Trenberth to agonize over.
        Or maybe just eight times more cooling than previously thought – models out by 800%.

    • So TOA flux imbalance – a prerequisite for confirming and measuring AGW – is only estimated/assumed, the reason presumably being that we do not yet have the means to measure it.

      So that means, we have no empirical grounds for assuming any heat has gone “missing”.
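      For scale, the modeled 0.85 W m−2 imbalance quoted upthread can be converted into an absolute heat-gain rate; the Earth’s surface area and the length of a year below are standard round numbers, and the unadjusted 6.50 W m−2 figure is included only to show what that discrepancy would mean in energy terms:

```python
# Convert a sustained global-mean TOA imbalance (W/m^2) into joules per
# year. The 0.85 and 6.50 W/m^2 figures are the ones quoted in this
# thread; Earth's surface area and the length of a year are standard
# round numbers.
EARTH_AREA = 5.1e14        # m^2, total surface area of the Earth
SECONDS_PER_YEAR = 3.156e7

def annual_heat_gain(imbalance_w_m2):
    """Joules accumulated per year for a given global-mean imbalance."""
    return imbalance_w_m2 * EARTH_AREA * SECONDS_PER_YEAR

print(f"0.85 W/m^2 -> {annual_heat_gain(0.85):.2e} J/yr")
print(f"6.50 W/m^2 -> {annual_heat_gain(6.50):.2e} J/yr")
```

      At 0.85 W m−2 the system would gain on the order of 10²² J per year; the unadjusted figure would imply nearly an order of magnitude more, which is the factor-of-8 discrepancy argued over above.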

  39. From Deser et al

    often models may disagree because future changes are within the natural variability

    In other words, the models are pretty clueless about natural variability, with most of the modellers’ efforts going into hyping up the anthro effect.

  40. Chief Hydrologist

    Climate modelling has been undergoing a quiet revolution – and it is not one that should be allowed to go unnoticed by the long suffering public. Weather has been known to be chaotic since Edward Lorenz discovered the ‘butterfly effect’ in the 1960’s. Abrupt climate change on the other hand was thought to have happened only in the distant past and so climate was expected to evolve steadily over this century in response to ordered climate forcing.

    More recent work is identifying abrupt climate changes working through the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation, the Southern Annular Mode, the Arctic Oscillation, the Indian Ocean Dipole and other measures of ocean and atmospheric states. These are measurements of sea surface temperature and atmospheric pressure over more than 100 years which show evidence for abrupt change to new climate conditions that persist for up to a few decades before shifting again. Global rainfall and flood records likewise show evidence for abrupt shifts and regimes that persist for decades. In Australia, there was less frequent flooding from early last century to the mid 1940’s, more frequent flooding to the late 1970’s and again a low rainfall regime to recent times.

    Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

    It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

    Four multi-decadal climate shifts were identified in the last century coinciding with changes in the surface temperature trajectory. Warming from 1909 to the mid 1940’s, cooling to the late 1970’s, warming to 1998 and declining since. The shifts are punctuated by extreme El Niño Southern Oscillation events. Fluctuations between La Niña and El Niño peak at these times and climate then settles into a damped oscillation. Until the next critical climate threshold – due perhaps in a decade or two if the recent past is any indication.

    James Hurrell and colleagues in a recent article in the Bulletin of the American Meteorological Society stated that the ‘global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ Somewhat artificial is somewhat of an understatement for a paradigm shift in climate science.

    The weight of evidence is such that modellers are frantically revising their strategies. They are asking for an international climate computing centre and $5 billion (for 2000 times more computing power) to solve this new problem in climate forecasting. The monumental size of the task they have set themselves cannot be exaggerated.

    James C. McWilliams of the Department of Atmospheric and Oceanic Sciences at the University of California discussed chaos and climate in a 2007 paper titled ‘Irreducible imprecision in atmospheric and oceanic simulations’. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable’. Sensitive dependence refers to qualitative shifts in climate and models that occur as a result of small changes in initial states. Structural instabilities are qualitative shifts in modelled outcomes as a result of plausible (within the limits of accuracy of measurements) changes in boundary parameters.

    The bottom line of all this is that the current generation of climate forecasting models cannot be relied on as accurate representations of future climate. It will be quite some time before the new models are good enough to model ‘sensitive dependence’ in climate. I doubt their chances at all; weather models, with chaos in operation, are accurate over about 7 days at best.

    • The models are all that give us a scary exponential extrapolation from 0.7 degrees last century to thermageddon in the next century.

      The models were also the only method used to separate out man’s contribution to warming from the natural variation which the IPCC then converted into man being adjudged guilty for current warming.

      But……the models were never fit for either purpose.

      Ergo besides all the phoney assumptions all that is known for a fact is that temperature was rising but now it has stopped. It may rise again, or it may cool. If it does rise we cannot say it is not natural and we cannot say it will be anything other than benign or beneficial.

      • It hasn’t stopped. The choice isn’t between it “rising again” or “cooling”. The choice is between it continuing to rise and stopping rising. It hasn’t yet been shown to have stopped.

        “If it does rise we cannot say it is not natural”

        If it does continue rising you’ll have a hard time selling that one given skeptics predict cooling from natural factors, not warming.

      • lolwot | November 10, 2012 at 6:51 am

        “….no (the earth) hasn’t (stopped warming), you are in denial…. (link to RealScience)…”

        Hi lolwot,

        I noticed you linked to a HadCRUT4 chart. You may notice it differs from HadCRUT3.

        Q1. Was this done to improve accuracy?

        It also differs from the satellite record.

        Q2. Which is more accurate? HadCRUT4, HadCRUT3, or the satellite record?

        Q3. Do you think all these record changes are reassuring to the average citizen?

      • lolwot | November 10, 2012 at 6:29 am

        [Warming] hasn’t stopped.

        The earth has stopped warming for 16 years now.

        The choice isn’t between it “rising again” or “cooling”. The choice is between it continuing to rise and stopping rising.

        Wishing it doesn’t make it so. If you had an argument to support this, you would doubtless have supplied it.

        If it does continue rising you’ll have a hard time selling that one given skeptics predict cooling from natural factors, not warming.

        Skeptics do, do they?

        If you keep this up, we might need to put you in the special ward along with Myrrh and Robert.

      • Lolwot

        It’s very simple. According to the thermometers, the earth’s most reliable measured temperatures have not risen for 16 years. Even the most blinkered alarmists know this (saying it’s due to natural factors swamping CO2). You are an ostrich in massive denial.

        So you’ve found a handful of people who think an ice-age is just around the corner, have you? Now take a minute to consider what the other 99.9999999999999999999999999999999999999999999999999% of skeptics think.

      • Chief Hydrologist

        The cool water anomaly in the center of the image shows the lingering effect of the year-old La Niña. However, the much broader area of cooler-than-average water off the coast of North America from Alaska (top center) to the equator is a classic feature of the cool phase of the Pacific Decadal Oscillation (PDO). The cool waters wrap in a horseshoe shape around a core of warmer-than-average water. (In the warm phase, the pattern is reversed).

        Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

        Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” http://earthobservatory.nasa.gov/IOTD/view.php?id=8703

        They can also vary over millennia.

        Thinking is centered around slow changes to our climate and how they will affect humans and the habitability of our planet. Yet this thinking is flawed: It ignores the well-established fact that Earth’s climate has changed rapidly in the past and could change rapidly in the future. The issue centers around the paradox that global warming could instigate a new Little Ice Age in the northern hemisphere.

        Evidence for abrupt climate change is readily apparent in ice cores taken from Greenland and Antarctica. One sees clear indications of long-term changes discussed above, with CO2 and proxy temperature changes associated with the last ice age and its transition into our present interglacial period of warmth. But, in addition, there is a strong chaotic variation of properties with a quasi-period of around 1500 years. We say chaotic because these millennial shifts look like anything but regular oscillations. Rather, they look like rapid, decade-long transitions between cold and warm climates followed by long interludes in one of the two states. http://www.whoi.edu/page.do?pid=83339&tid=3622&cid=10046

        Getting your ‘science’ from desmogblog just goes to show what a dunce – if any more proof be needed – you are.

  41. Greybeard says, 10/11 @1.26 am,
    that at a joint meeting of the Royal Meteorological
    and Royal Geographical Societies in London, it was
    noted that trust in climate scientists is down to 38%.

    Say, the apocalyptians are jes’ gonna hafta lift their game.
    We know it ain’t about the science or uncertainty or model
    adjustment … it’s about commune – icay – shun.

    ( That’s what it’s all about, tra la! )

  42. Totally unrelated to climate change, but tonight I helped celebrate a 10-year-old girl’s birthday party. Other than her dad and younger brother, I was the only male in a house full of 9- to 11-year-old girls and 6 or so moms.

    the women (mom category) brought up Robert Downey Jr and how they thought he was sooo hot. I made the mistake of mentioning how some of the women I work with saw a photo of Jeb Bush’s kid and couldn’t help talk about how good looking he was.

    The response? Basically along the lines that anyone with the name of, or a relation to, George Bush was obviously unintelligent and that it was an insult to suggest he could compete with RDJr.

    Now I do not want to put too much emphasis on this, other than to illustrate how easy it is to fool oneself. That any of these educated, intelligent women would for a moment consider Robert Downey as a potential partner for their young daughters ahead of a Bush grandson is a mind blower. Amazing what happens when you let politics be your guiding light.

    Fooling ourselves about the climate change debate is similar. Is it so important to prove we are right about particular points? Can’t we treat climate models like Robert Downey – appreciate them for what they can do but recognize that you would not want your daughter marrying him?

    • Particular Physicist

      Would you want your daughter to become a climate model when she grows up ?

    • “the women (mom category) brought up Robert Downey Jr and how they thought he was sooo hot. I made the mistake of mentioning how some of the women I work with saw a photo of Jeb Bush’s kid and couldn’t help talk about how good looking he was.”

      Did you think the women thought Robert Downey Jr hot because he was a conservative?? :
      Wiki: “Downey has indicated that his time in prison changed his political point of view somewhat, saying: “I have a really interesting political point of view, and it’s not always something I say too loud at dinner tables here, but you can’t go from a $2,000-a-night suite at La Mirage to a penitentiary and really understand it and come out a liberal. You can’t. I wouldn’t wish that experience on anyone else, but it was very, very, very educational for me and has informed my proclivities and politics ever since.”
      http://en.wikipedia.org/wiki/Robert_Downey,_Jr.

      Otherwise, why did you bring politics into it?

      • No I doubt they had any idea of his political opinions and I did not bring politics into the discussion. I introduced the factual datum point of the young Bush’s good looks. They were the parties to automatically judge and dismiss the information based on politics and bias.

  43. Hindcasting was NEVER a good metric. It was only ever a necessary but insufficient condition. Every modeler knew this but they pretended otherwise to fool the ignorant and unfortunately they then came to believe their own hype. The reason all models now auto-reproduce a 0.7 degree rise in temperature is only because that particular tuning knob is now inside the code rather than applied by external parameters. It does not reflect an improvement in model physics, just a fudged reduction of dependent variables.

    In fact Hadley had easily managed to get a much better hindcast just by increasing the amplification effect of the sun. However, since turkeys don’t vote for Christmas they don’t much talk about that.

  44. Modellers are trying to do the best they can (well, some of them are); the problem is they are reliant on deeply flawed data from the past and limited but improving data in the present. This limits the veracity of both hindcasts and forecasts. Trouble is there are too many so-called scientists who would have us ignore this basic viewpoint and try to have us believe ‘their’ models can somehow transcend the basic logic of rubbish in, rubbish out!
    That is not to say that we should not use models, rather that modellers need to be more open and honest about their limitations, something they seem to find hard to do, hence the growing distrust and animosity towards them…

    • “rather that modellers need to be more open and honest about their limitations, something they seem to find hard to do”

      Really? No.

      The problem is that many climate deniers need to be more open and honest about the usefulness of models.

      • I use numerical models every day. I also create them and I sell those creations. They are not just useful for me but essential and highly accurate. However they are all properly validated and try to solve problems that are relatively easy to solve. Many problems are not so easy to solve and modelling the climate is the most difficult thing you could ever attempt. Climate models have never been remotely correct either spatially or temporally and thus are being used for a purpose to which they are of no use whatsoever. Understand?

        By the way, as you are on record as a believer in what might be mathematically described as a ‘fictional temperature’ ie a temperature we should be seeing but actually don’t get unless we extend a linear trend over a highly nonlinear system, who exactly is the real climate denier?

      • “Climate models have never been remotely correct either spatially or temporally”

        BS. You don’t have a clue what you are talking about.

        “who exactly is the real climate denier?”

        You are if you think global warming stopped in 1997

      • Well, in a spirit of conciliation, here is the full blurb for that graph with all the caveats:
        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-3-1-2.html

        Very nuanced, but it does appear that the annual mean precipitation of the multi-model mean has improved greatly, despite individual runs of models being still poor. Of extremely limited usefulness, but still better than I suggested. Apparently the errors cancel out when combined. But:

        “Despite the apparent skill suggested by the multi-model mean (Figure 8.5), many models individually display substantial precipitation biases, especially in the tropics, which often approach the magnitude of the mean observed climatology (e.g., Johns et al., 2006; see also the Supplementary Material, Figures S8.9 and S8.10). Although some of these biases can be attributed to errors in the SST field of the coupled model, even atmosphere-only versions of the models show similarly large errors (e.g., Slingo et al., 2003). This may be one factor leading to a lack of consensus among models even as to the sign of future regional precipitation changes predicted in parts of the tropics (see Chapter 10).”

        So the reality is that they are still of no practical use even spatially:
        http://www.tandfonline.com/doi/abs/10.1080/02626667.2010.513518

        And of course temporally they are much worse.

      • +Lolwot

        And the blog post you point to is the same old tired argument that only the long term trend can be trusted; which is the expected ‘fictitious temperature’ I described. The trouble is this trend gets longer all the time, as the pause fails to end. Currently sitting at 15 years. We have sat through many failed predictions of its demise. You may be right that it will continue up sometime but to insist that down is actually up is being the real denier. When you can admit to yourself, like most of the warmists have, that there is indeed a long pause in the expected warming which was indeed unexpected then you can reclaim some grasp of reality.

        But none of that tells us that the warming is continuing; only the pause is continuing despite several ENSO cycles. So yes I say the warming has paused because that is what reality says. You choose to deny it, Realclimate chooses merely to explain it away. However this pause was not part of the script we were told to expect.

      • People of limited cognitive abilities normally turn to name calling when their fallacies are exposed. One day when you have extended your education beyond the point it is at now, and I truly hope you do, you will look back and regret some of these posts…

      • Unlikely. lolwot is political and precommitted to believing in CAGW, and hence feels that any nonsense at all that advances belief in it, is justified. One of his favorite deceptions is doggedly pretending that skeptics of CAGW are deniers of AGW in principle.

  45. David Springer

    http://en.wikipedia.org/wiki/Climate_change_feedback#Lapse_rate

    “The atmosphere’s temperature decreases with height in the troposphere. Since emission of infrared radiation varies with temperature, longwave radiation escaping to space from the relatively cold upper atmosphere is less than that emitted toward the ground from the lower atmosphere. Thus, the strength of the greenhouse effect depends on the atmosphere’s rate of temperature decrease with height. Both theory and climate models indicate that global warming will reduce the rate of temperature decrease with height, producing a negative lapse rate feedback that weakens the greenhouse effect. Measurements of the rate of temperature change with height are very sensitive to small errors in observations, making it difficult to establish whether the models agree with observations.”

    Big problem here. Clouds are poorly understood, difficult to measure, and hugely influence our climate. Stephens 2012, the subject of a thread earlier this week, found grave errors in the models using new satellite instruments that the models could not have been pre-tuned to duplicate.

    Perhaps the models are salvageable but right now they are worse than useless because of unwarranted confidence in them.

    • PaulK over at Lucia’s has a couple of good posts on models and the number of ocean layers required to be closer to realistic.

      My take is: blow off the fine resolution of chaos and focus on coarser resolution with more layers to determine approximate heat capacities and time constants. Then you can compare the simpler energy models to the more unstable “climate/weather” models.

    • hardly worse than useless.

      If I am planning for the next 30 years, let’s say planning development in an area like New York, already prone to disaster, I might want to consider the expected rise in sea level I would see.

      What tools do I have at my disposal?

      1. I have the past data. One approach would be to extrapolate from that.
      It would suggest a 10 cm rise in sea level.
      2. I have a climate model, flawed of course, and it might suggest a 30 cm
      rise (say, for example only)

      Now I am planning to add 1 million people to the population of NYC (that’s the plan)

      Question: will the prediction of 30 cm from the model be useful?
      Well, it depends. But it certainly would be on the table in any intelligent planning discussion. And I would think that fact ought to be disclosed to anyone who is moving into an area that is “safe” given a 10 cm rise and in danger given a 30 cm rise. You’d be quite pissed if they didn’t disclose it and you ended up homeless.

      Usefulness can only be assessed in context. Usefulness depends on purpose. And purpose depends on the question of who is involved.
      Consequently statements like yours about usefulness are useless to anyone who really understands the issues.
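      The first of those two planning numbers is just a linear extrapolation; a toy sketch, in which the 3.2 mm/yr rate is my assumed satellite-era sea level trend (not a figure from the comment) and the 30 cm model number is the comment’s own “for example only” figure:

```python
# Toy version of option (1) above: extrapolate an observed linear trend
# 30 years ahead and compare with a hypothetical model projection.
# The 3.2 mm/yr rate is an assumed satellite-era sea level trend, and the
# 30 cm model number is the comment's own "for example only" figure.
YEARS_AHEAD = 30
TREND_MM_PER_YR = 3.2                      # assumed observed linear trend

extrapolated_cm = TREND_MM_PER_YR * YEARS_AHEAD / 10.0   # mm -> cm
model_cm = 30.0                            # hypothetical model projection

print(f"Extrapolated trend: {extrapolated_cm:.1f} cm over {YEARS_AHEAD} yr")
print(f"Model projection:  {model_cm:.1f} cm over {YEARS_AHEAD} yr")
```

      At that assumed rate, 30 years of trend gives about 10 cm, matching the extrapolation figure used in the example; the gap between the two numbers is the planning question.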

    • Let me give you a couple more examples of flawed models that are useful.
      Currently the mayors of 40 large cities utilize a “heat wave” warning system.
      The warning system is basically a statistical model that predicts excess deaths based on weather conditions. If you were to look at predicted deaths versus actual deaths you would see that the model doesn’t do very well. For example, for the Chicago 1995 heat storm it predicted 30 or so deaths and there were over 700. Yet the model is useful. It’s useful because you don’t have to know the exact deaths; you just have to know that a threshold (user defined) is crossed. Set this threshold too low and you get false alarms. Set it too high and you fail to act. So local officials use the system as they see fit. And it saves lives. So, who am I going to believe about the usefulness of models? You, or the people who actually use and rely on them?
      Customer is always right.
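      The threshold trade-off described here can be made concrete with a toy warning rule; the linear excess-deaths predictor and every number below are invented for illustration, not taken from any real warning system:

```python
# Toy threshold-based heat-warning rule. The linear excess-deaths
# predictor and every number here are invented for illustration; the
# point is only the trade-off: a low threshold warns often (more false
# alarms), a high threshold warns rarely (risks failing to act).

def predicted_excess_deaths(heat_index):
    """Hypothetical statistical model: a crude linear predictor."""
    return max(0.0, 2.5 * (heat_index - 95.0))

def issue_warning(heat_index, threshold):
    """Warn when predicted excess deaths cross the user-chosen threshold."""
    return predicted_excess_deaths(heat_index) >= threshold

forecast = [92, 97, 104, 110]              # invented daily heat-index values
for hi in forecast:
    cautious = issue_warning(hi, threshold=5)    # more false alarms
    strict = issue_warning(hi, threshold=25)     # may fail to act
    print(f"heat index {hi}: predicted {predicted_excess_deaths(hi):.1f}, "
          f"cautious warn={cautious}, strict warn={strict}")
```

      With the cautious threshold the rule warns on three of the four days; with the strict one it warns only on the most extreme day. The user, not the model, chooses where to sit on that trade-off.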

      • Steven Mosher, “Set this threshold too low and you get false alarms. Set it too high and you fail to act.” Yep. All models and most data are wrong, but useful if properly considered. It is more a matter of communication.

        I am not thrilled with using ensemble means then switching to selected models to communicate certainty that doesn’t exist or over-rating confidence in either.

      • Steven Mosher

        I like your “flawed model” examples.

        Here’s another:

        the mayors of 40 large cities [including Chicago] utilize a “heat wave” warning system.
        The warning system is basically a statistical model that predicts excess death based on weather conditions.

        Then Chicago has an unusually harsh winter and over 3,000 deaths are attributed to it.

        Oops!

        Wrong model.

        Max

  46. You can believe GCMs are getting better at predicting weather in the long term only if you believe you can foretell the future. But, that would be insane.

    • The Skeptical Warmist (aka R. Gates)

      Waggy,

      Since the GCM’s were not designed to predict the weather, your comment, like so many hundreds of others you make, simply lies there like a dead red herring out of water. Maybe you could take a handful of psychotropic Tisdale cherries and cook up a little meal.

      • understanding that weather is a subset of climate is a prerequisite to understanding English, which is of course the native language of global warming superstition.

      • The Skeptical Warmist (aka R. Gates)

        Waggy,

        Seems you’ve already been nibbling a bit on those psychotropic Tisdale cherries, eh? Naughty naughty…

      • Where there is this sort of basic lack of cognitive ability on display it is not surprising to see ad hom comments from the ignorant. An example would be someone who does not understand that ‘climate’ is nothing more than averages of what the weather is imagined to have been over a given period that is determined to be significant. Anyone who cannot understand that is worse than drunk; they are forever incompetent.

      • Understanding that weather is averaged is science. Imagine ever trying to reason with someone about that science who cannot even understand what they could read on the NASA site – i.e., “When we talk about climate change, we talk about changes in long-term averages of daily weather.”

  47. 1. Model and parameterize a complex system.
    2. System is so complex that CIs for parameter estimates are wide
    3. Somewhere along those wide ranges, the system demonstrates scary behavior.

    A few others have noticed this: Crichton’s SETI analogy; Nate Silver discusses wild estimates for the infamous “R_naught” in epidemiology for public policy.

    Is there some kind of formal reasoning for dealing with this scenario, which is becoming more common: where the modeler’s ignorance “creates” tail outcomes too often? Personally, I’d like to be like the octopus, or the dog catching the frisbee.

  48. Here is IPCC’s climate model representation of the 20th century climate

    http://orssengo.com/GlobalWarming/IpccSmoothedGmst.PNG

    Questions
    1) Why does the multi-model mean NOT represent the warming of 1910-1940?
    2) Why does the multi-model mean NOT represent the slight cooling of 1940-1970?

    Note that before mid-20th century, the multi-model mean represents the secular trend where the multi-decadal variation oscillates relative to the secular trend.

    However, after mid-20th century, the multi-decadal oscillation has disappeared. Why? Has it ceased to exist?

    Is not 1970-2000 the warming phase of the multi-decadal oscillation?

    • Of course it wouldn’t take much more effort to show the max and min plots instead of the anomaly, but these would have an even worse fit.

    • “…to piss Web off.”

      They do let the truth slip out on occasion don’t they?

      Which points to the true rationale and agenda of the pseudo-scientific fake skeptics, which is to pile on with ever increasing layers of FUD (Cappy Dick) and relentless repetition of The Big Lie (Girma).

      Girma’s latest Big Lie is to suggest that the increase of atmospheric CO2 is due to a temperature increase. And Girma is just shameless enough to have once said this:

      “Girma | April 13, 2012 at 11:48 pm | Reply

      In the following video, the kids in a classroom are all brainwashed to believe they will all die as a result of global warming.

      http://bit.ly/YrBo6

      Gobbles would not have done a better job!”

      Gobbles? really, Gobbles?

      I will start calling him Girma Gobbles.

      How did this guy ever get a PhD?

      • Captain Richard to you Webster :)

      • Chief Hydrologist

        Dr Girma Gobbles to you crudball…

      • WebHubTelescope

        Girma’s latest Big Lie is to suggest that the increase of atmospheric CO2 is due to a temperature increase.

        Not all of it, but the warming contributes to the increase in CO2 concentration.

        If not, how are you going to explain the increase in CO2 concentration during El Nino and a decrease in CO2 concentration during La Nina, as shown =>
        http://www.woodfortrees.org/plot/rss-land/compress:12/normalise/plot/esrl-co2/from:1978/derivative/compress:12/normalise
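The woodfortrees link overlays the normalised temperature series on the normalised CO2 growth rate. A minimal sketch of that comparison, with synthetic stand-ins for the RSS and ESRL data (the trend and ENSO-like wiggle below are invented purely for illustration):

```python
# Sketch of the woodfortrees comparison: normalise the temperature
# anomaly and the time derivative of CO2, then correlate them.
# The series here are synthetic stand-ins, not real RSS/ESRL data.
import numpy as np

def normalise(x):
    return (x - x.mean()) / x.std()

t = np.arange(360) / 12.0                        # 30 years, monthly steps
temp = 0.015 * t + 0.2 * np.sin(0.8 * t)         # trend + ENSO-like wiggle
co2 = 340.0 + 1.8 * t - 2.5 * np.cos(0.8 * t)    # ppm; its derivative shares the wiggle

growth = np.gradient(co2, t)                     # d(CO2)/dt in ppm/yr
r = np.corrcoef(normalise(temp), normalise(growth))[0, 1]
print(f"correlation of T anomaly with CO2 growth rate: r = {r:.2f}")
```

The construction makes the point of the comment visible: even with a steady anthropogenic trend in CO2, interannual temperature wiggles show up in the growth rate of concentration.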

      • The Keeling curve illustrates the extent of the anthropogenic contribution to atmospheric CO2 since the mid-C20th. Neither the seasonal oscillation nor the effects of ENSO determine the overall shape of the curve.

      • Interannual variation in CO2ppmv in response to GAT change does not determine the long term effects of increasing RF from increasing levels of CO2. This is to confuse the ripples with the incoming tide.

      • Chief Hydrologist

        Warming increases CO2 in the atmosphere in a number of well-known ways. To the extent that warming since 1950 was natural, this resulted in a natural increase in CO2 in the atmosphere. Simple memes are incorrect.

      • Ripples and tides. But positive CO2 feedback to warming is of course part of the picture.

        It’s worse than we thought ;-)

      • Chief Hydrologist

        Dingbats and dweebs – it is about as bad as we thought. :cool:

        Let’s see. 96% of the flux is natural – 4% is not. The fluxes are known to greater than 20% limits – but let’s leave that aside. If natural warming increases the natural flux – what are the implications? How much of the warming was natural? These are not questions that have any moral significance except to the cult of AGW groupthink space cadets whose simple minded memes are not the answer to anything much at all. Misguided climate warriors telling themselves stories superficially in the objective idiom of science.

        So I would like to see some honesty and curiosity but I doubt that you’re capable of either.

      • The circular logic trap. But then you are an accomplished hamster.

        And:

        Misguided climate warriors telling themselves stories superficially in the objective idiom of science.

        Do you deny the physics of radiative transfer? If not, what of all that increase in RF? This is not circular logic, it is parsimonious reasoning.

        Do you disagree that TCR to 2 x CO2 is ~2C? You didn’t say.

      • Chief Hydrologist

        You’re a persistent little hamster. I am a climate catastrophist (in the sense of René Thom) – but you need a lot more than a space cadet meme to understand that.

      • You’re a persistent little hamster. I am a climate catastrophist (in the sense of René Thom) – but you need a lot more than a space cadet meme to understand that.

        The question was:

        Do you disagree that TCR to 2 x CO2 is ~2C? You didn’t say.

      • Chief Hydrologist

        ‘Researchers first became intrigued by abrupt climate change when they discovered striking evidence of large, abrupt, and widespread changes preserved in paleoclimatic archives. Interpretation of such proxy records of climate—for example, using tree rings to judge occurrence of droughts or gas bubbles in ice cores to study the atmosphere at the time the bubbles were trapped—is a well-established science that has grown much in recent years. This chapter summarizes techniques for studying paleoclimate and highlights research results. The chapter concludes with examples of modern climate change and techniques for observing it. Modern climate records include abrupt changes that are smaller and briefer than in paleoclimate records but show that abrupt climate change is not restricted to the distant past.’

        http://www.nap.edu/openbook.php?record_id=10136&page=19

        What is the connection between Rene Thom, Swanson and Tsonis and the NAS? You haven’t answered.

      • What is the connection between Rene Thom, Swanson and Tsonis and the NAS? You haven’t answered.

        The question was:

        Do you disagree that TCR to 2 x CO2 is ~2C? You didn’t say.

      • Chief Hydrologist

        The answer to your question is contained in mine. But you need to understand the underlying mechanisms in all three of the sources I cited. Going around in circles is for goldfish and hamsters.

      • Three refusals. Oh dear. You are out.

      • BBD

        The observed TCR to 2xCO2 since 1850 is between 0.8C and 1.4C, depending on how much of the observed warming was due to natural forcing: IPCC estimates 7% (with a low level of scientific understanding of natural forcing) while several solar studies estimated around 50%.

        Max
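For what it is worth, a back-of-envelope of the kind that produces such a range can be sketched as follows. The inputs below (warming since 1850, start and end CO2 concentrations) are assumptions for illustration only; the comment does not state which values were used:

```python
# Back-of-envelope "observed TCR" under a logarithmic CO2 response.
# All inputs are illustrative assumptions, not the commenter's values.
import math

dT_obs = 0.65          # assumed warming since 1850 (C)
C0, C1 = 285.0, 392.0  # assumed CO2 ppm, 1850 vs ~2012

scale = math.log(2) / math.log(C1 / C0)   # rescale observed rise to a doubling
tcr_all = dT_obs * scale                  # if all warming is anthropogenic
tcr_half = 0.5 * tcr_all                  # if ~50% of warming is natural
print(f"TCR range: {tcr_half:.2f} to {tcr_all:.2f} C")
```

With these assumed inputs the arithmetic lands broadly in the quoted neighbourhood; different choices of observed warming or natural fraction move both ends.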

      • Chief Hydrologist

        No answer to my question? I didn’t think so.

      • Chief,

        Seems clear BBD doesn’t have the foggiest what the relevance of Rene Thom, Swanson and Tsonis and the NAS is.

        He’s not the only one.

      • Web

        I assure you that those kids in the classroom who are scared of dying as a result of global warming will not die of it. 100% sure.

    • Girma

      1) Why did NOT the multi-model mean represent the warming 1910-1940?
      2) Why did NOT the multi-model mean represent the slight cooling 1940-1970?

      The early C20th warming was probably forced by a combination of solar variability and increased atmospheric transparency or ‘global brightening’ (Wild 2009). The slight mid-century cooling is attributed to increased aerosol negative forcing or ‘global dimming’ (Wild 2009). Models not specifically forced in accordance with these changes do not reproduce C20th climate behaviour well, which is hardly surprising.

      • BBD

        Don’t fall into Girma’s trap.

        The truth of the matter is that the models cited by IPCC cannot explain the early 20th century warming.

        The logic goes as follows:

        1. Our models cannot explain the early 20th century warming.
        2. We know that AGW is responsible for most of the warming of the statistically indistinguishable late 20th century warming.
        3. How do we know that?
        4. Because our models cannot explain it any other way.

        Bingo!

        Max

      • What trap?

        Our models cannot explain the early 20th century warming.

        But the models weren’t run to ‘explain’ the early C20th warming:

        Models not specifically forced in accordance with these changes do not reproduce C20th climate behaviour well, which is hardly surprising.

      • Chief Hydrologist

        The circular logic trap. But then you are an accomplished hamster.

      • @bbd

        Ummm

        If you have a decent climate model, shouldn’t it be able to work at all times? Like if it’s good enough to forecast things at 380 ppm CO2, shouldn’t it be competent at 280 ppm as well? So how come you say ‘it wasn’t run to explain early 20C warming’? It should be able to do that … and be rerun to give the last 1000 years as well.

        Or it’s not a decent model of the totality of the climate history and future.

      • @bbd

        Or if it can’t, please explain for which years you would consider it ‘reliable’ with a justification for why you have chosen those years and excluded others.

      • The circular logic trap. But then you are an accomplished hamster.

        Do you deny the physics of radiative transfer? Where did all that RF go? This is not circular logic, it is parsimonious reasoning.

      • Chief Hydrologist

        Hammy,

        The circular reasoning applies to models. This supports my circular reasoning argument in the uncertainty monster paper, whereby tuning (implicit or explicit) to 20th century time series of global average temperature anomalies makes these models not useful for 20th century attribution studies.

        That greenhouse gases result in a warmer atmosphere, therefore all warming is greenhouse gases, is merely the logical fallacy of the non sequitur.

      • CH

        That greenhouse gases result in a warmer atmosphere, therefore all warming is greenhouse gases, is merely the logical fallacy of the non sequitur.

        Strawman. Here I am, recently on this thread, going on about non-CO2 influences on climate:

        The early C20th warming was probably forced by a combination of solar variability and increased atmospheric transparency or ‘global brightening’ (Wild 2009). The slight mid-century cooling is attributed to increased aerosol negative forcing or ‘global dimming’ (Wild 2009).

        And next to circular reasoning:

        The circular reasoning applies to models.

        Forget the models. Side with Hansen. Go for an empirical estimate from paleoclimate data:

        This empirical climate sensitivity [~3C] incorporates all fast response feedbacks in the real-world climate system, including changes of water vapor, clouds, aerosols, aerosol effects on clouds, and sea ice. In contrast to climate models, which can only approximate the physical processes and may exclude important processes, the empirical result includes all processes that exist in the real world – and the physics is exact.

        Did you know that Hansen was sceptical about modelled studies? Really. Here he is, setting the record straight:

        TH: A lot of these metrics that we develop come from computer models. How should people treat the kind of info that comes from computer climate models?

        Hansen: I think you would have to treat it with a great deal of skepticism. Because if computer models were in fact the principal basis for our concern, then you have to admit that there are still substantial uncertainties as to whether we have all the physics in there, and how accurate we have it. But, in fact, that’s not the principal basis for our concern. It’s the Earth’s history – how the Earth responded in the past to changes in boundary conditions, such as atmospheric composition. Climate models are helpful in interpreting that data, but they’re not the primary source of our understanding.

        TH: Do you think that gets misinterpreted in the media?

        Hansen: Oh, yeah, that’s intentional. The contrarians, the deniers who prefer to continue business as usual, easily recognize that the computer models are our weak point. So they jump all over them and they try to make the people, the public, believe that that’s the source of our knowledge. But, in fact, it’s supplementary. It’s not the basic source of knowledge. We know, for example, from looking at the Earth’s history, that the last time the planet was two degrees Celsius warmer, sea level was 25 meters higher.

        And we have a lot of different examples in the Earth’s history of how climate has changed as the atmospheric composition has changed. So it’s misleading to claim that the climate models are the primary basis of understanding.

        Try parsimonious reasoning.

      • Chief Hydrologist

        So – Hammy – what are you asking me for?

        ‘Our results highlight that an initialization of the upper-ocean state using historical observations is effective for successful hindcasts of the PDO and has a great impact on future predictions. Ensemble hindcasts for the 20th century demonstrate a predictive skill in the upper-ocean temperature over almost a decade, particularly around the Kuroshio-Oyashio extension (KOE) and subtropical oceanic frontal regions where the PDO signals are observed strongest. A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’

        http://www.pnas.org/content/107/5/1833.full

        They certainly are slowing down. Really, you are such a space cadet it’s funny. :lol:

      • So – Hammy – what are you asking me for?

        Evidence that you are capable of parsimonious reasoning. Three strikes and out.

      • BBD | November 10, 2012 at 7:06 pm |

        “…Did you know that Hansen was sceptical about modelled studies?…”

        …and that he quotes ocean warming, particularly the 10 years of Argo data, as his critical evidence? (the same interview as you posted, next statement).

        And the extent of this warming is an increase of 0.09 degrees C over 55 years (based on some original deep sea shipboard measurements) for the upper 2000 metres of the entire worlds oceans. (this accounts for about 50% of the ocean’s water – we really don’t have any data on the lower depths.)

        Hansen: One of the other things that really tells us what’s going on is the Earth’s present energy imbalance. We know that the effect of adding CO2 and other greenhouse gases is to reduce the heat radiation to space – it acts like a blanket. And that means that there’s more energy coming in from the sun than there is heat being radiated to space. And now, in the last 10 years, we’ve been able to measure that imbalance, because most of the excess energy has to go into the ocean. The atmosphere is very thin, it doesn’t have much heat capacity. But the ocean is four kilometers deep, and it mixes, so it has a tremendous heat capacity.

        And beginning about 10 years ago, different nations of the world distributed these Argo floats, more than 3,000 of them, around the world’s oceans. They have an instrument package that yo-yos down to a depth of two kilometers, and then yo-yos back up.

        And among other things, it measures the temperature in the ocean. What it tells us is that the ocean is gaining heat. It shows us that the planet is out of balance by about 0.6 watts per meter squared, averaged over the Earth.

        http://www.treehugger.com/treehugger-radio/nasas-james-hansen-on-climate-change-and-intergenerational-justice-podcast.html
        (don’t worry about the podcast bit .. – there is a transcript)

        Scientific summary:
        1. The great thing about water is it has a helluva heat capacity.
        2. And there is a helluva lot of it on this planet.
        3. So you can account for a helluva lot of energy with very little fudging.

      • Chief Hydrologist

        What, don’t you like data and science? Rather, retail idiotic narratives that you learned to call parsimonious in some stupid space cadet meme? That’s where we started. You are a sham and a fraud.

        Try Newton’s 4th rule instead of Occam for a change.

      • BBD | November 10, 2012 at 7:06 pm

        says: “…Forget the models. Side with Hansen. Go for an empirical estimate from paleoclimate data:…”
        …and cites http://arxiv.org/ftp/arxiv/papers/1105/1105.0968.pdf Paleoclimate Implications for Human-Made Climate Change James E. Hansen and Makiko Sato

        BBD. Sure Hansen and Sato quote a lot of empirical data in that paper. They state that from the state of the earth’s deduced paleoclimate at various times you can estimate the earth’s energy balance. “…No climate model is needed to calculate the forcing. It requires only summing over the planet the change of heat radiation to space, which depends on known atmospheric and surface properties….”

        Fair enough, one can calculate a forcing required for a particular temperature change.

        Then he recites the theoretical forcings, including solar and GHGs, including water vapour (the effects of which are very uncertain (IPCC4)):

        “….’Fast feedbacks’ appear almost immediately in response to global temperature change. For example, as Earth becomes warmer the atmosphere holds more water vapor. Water vapor is an amplifying fast feedback, because water vapor is a powerful greenhouse gas. Other fast feedbacks include clouds, natural aerosols, snow cover and sea ice…”

        But then they immediately go on to use models (and what else could they possibly do?!!) to “prove” their statements about sensitivity.

        “…Global mean temperature change between the LGM and Holocene has been estimated from paleo temperature data and from climate models constrained by paleo data….”

        With modelling they can calculate the temperature changes:

        Shakun and Carlson (2010) obtain a data-based estimate of 4.9°C for the difference between the Altithermal (peak Holocene warmth, prior to the past century) and peak LGM conditions. They suggest that this estimate may be on the low side, mainly because they lack data in some regions where large temperature change is likely, but their record is affected by LGM cooling of 17°C on Greenland.

        A comprehensive multi-model study of Schneider von Deimling et al. (2006) finds a temperature difference of 5.8 ± 1.4°C between LGM and the Holocene, with this result including the effect of a prescribed LGM aerosol forcing of ‒1.2 W/m2.

        The appropriate temperature difference for our purposes is between average Holocene conditions and LGM conditions averaged over several millennia. We take 5 ± 1°C as our best estimate. Although the estimated uncertainty is necessarily partly subjective, we believe it is a generous (large) estimate for 1σ uncertainty.

        They go on to say:

        This empirical climate sensitivity incorporates all fast response feedbacks in the real-world climate system, including changes of water vapor, clouds, aerosols, aerosol effects on clouds, and sea ice. In contrast to climate models, which can only approximate the physical processes and may exclude important processes, the empirical result includes all processes that exist in the real world – and the physics is exact.

        If Earth were a blackbody without climate feedbacks the equilibrium response to 4 W/m2 forcing would be about 1.2°C (Hansen et al., 1981, 1984; Lacis et al., 2010), implying that the net effect of all fast feedbacks is to amplify the equilibrium climate response by a factor 2.5.

        GISS climate models suggest that water vapor and sea ice feedbacks together amplify the sensitivity from 1.2°C to 2-2.5°C. The further amplification to 3°C is the net effect of all other processes, with the most important ones probably being aerosols, clouds, and their interactions.

        In summary: they say that we can’t account for all of the effects without “climate feedbacks”, and that modelling suggests what those probably are.

        They go on to use the modelled results to calculate a precise feedback value based on all the “empirical data” (excluding clouds, and aerosols because they were already accounted for … (?)):

        Multiplying the sum of greenhouse gas and surface albedo forcings by climate sensitivity ¾°C per W/m2 yields the predicted global temperature change (blue curves in Fig. 2d and 2e).

        i.e., they use a calculated sensitivity figure deduced from paleoclimate data and models to … prove the modelled and/or the paleoclimate data is correct?
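The amplification arithmetic quoted above (~1.2°C no-feedback response to 4 W/m², amplified by a factor of 2.5 to the empirical ~3°C) can be checked in a few lines. This is a sketch only: the 255 K effective radiating temperature and the plain 4σT³ blackbody response are textbook assumptions, not values from the paper (Hansen's 1.2°C figure includes model-based adjustments this sketch omits):

```python
# Sketch of the quoted numbers: ~1.2 C no-feedback response to 4 W/m2,
# amplified by a factor ~2.5 to reach the empirical ~3 C sensitivity.
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant (W m-2 K-4)
T_EFF = 255.0                   # effective radiating temperature (K), assumed
F_2X = 4.0                      # forcing from doubled CO2 (W m-2), as quoted

planck = 4 * SIGMA * T_EFF**3   # blackbody restoring response (W m-2 K-1)
dT_nofb = F_2X / planck         # ~1.1 C here; the paper quotes ~1.2 C

S = 0.75                        # quoted sensitivity, "3/4 C per W/m2"
dT_full = S * F_2X              # 3 C per doubling
gain = dT_full / 1.2            # the quoted feedback amplification, 2.5

print(dT_nofb, dT_full, gain)
```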

      • markx

        The empirical estimate from HS12 can be used to *validate* modelled estimates. Goodness knows why you feel compelled to quote so much rather than just state the obvious. Although confusion begets confusion, which can sometimes be the desired effect. Is that the plan?

        FUD does not work with me so you are only wasting your own time.

        I get the sense you stopped short with HS12 which is a pity. Delve further into 3.2. Read from p. 8:

        The empirical sensitivity 3 ± 1°C for doubled CO2 is consistent with the Charney et al. (1979) estimates of 3 ± 1.5°C for doubled CO2 and with the range of model results, 2.1-4.4°C, in the most recent IPCC report (Randall and Wood, 2007). However, the empirical result is more precise, and we can be sure that it includes all real-world processes. Moreover, by examining observed climate change over several Milankovitch oscillations we can further improve the accuracy of the fast-feedback climate sensitivity.

        After reading and *thinking* about the relevant text, actually *look* at Fig. 2.

        Please don’t waste your time responding to this as any attempt you make to ‘debunk’ Hansen will be risible and long-winded and I’m not interested.

      • CH

        What, don’t you like data and science? Rather, retail idiotic narratives that you learned to call parsimonious in some stupid space cadet meme? That’s where we started. You are a sham and a fraud.

        Try Newton’s 4th rule instead of Occam for a change.

        You have demonstrated at length that you are an abusive, mendacious crank who can’t think. Now that is established there’s no need to respond to me further.

      • As usual with BBD, just hand-waving and self-important waffling. This time with a stern ‘directive’ to stick to blind faith in the Hansen gospel.

      • BBD | November 11, 2012 at 11:53 am |
        “…..The empirical estimate from HS12 can be used to *validate* modelled estimates. …… I get the sense you stopped short with HS12 which is a pity. Delve further into 3.2. Read from p. 8: ……The empirical sensitivity 3 ± 1°C for doubled CO2 is consistent with the Charney et al. (1979) …”

        BBD, I am genuinely interested in this paper. We all have our own biases, (yes, I include myself) so I am very interested in hearing the other viewpoint. (and no more quotes, I promise!)

        I have read this paper several times but I still don’t get his point.
        We have the ice-core and ocean core temperature records: empirical

        We can therefore calculate the forcing required to effect a temperature change to a new equilibrium state: empirical

        We can estimate the CO2 levels at the time: empirical (albeit some ‘timing’ debate, and as Hansen notes, CO2 climate forcing is approximately logarithmic, as its absorption bands saturate as CO2 amount increases, but we won’t pursue those here)

        We can calculate from known physics the effects of increasing CO2: empirical.

        However, CO2 only accounts for about 30% of the “forcing” required. The balance is related to water vapour, clouds, albedo (clouds, aerosols, ice cover). No empirical measure?

        The mechanism of a ‘sudden’ additive level of CO2 is put forward as being caused by an overriding tectonic plate causing a crustal melt and metamorphism of the subducted plate and sediments, with release of volatiles including CO2. No empirical measure?

        He then notes his 3 degrees C per doubling of CO2 matches the forcing required and says that the empirical climate sensitivity (derived?) incorporates all the fast response feedbacks in the real-world climate system, including changes of water vapor, clouds, aerosols, aerosol effects on clouds, and sea ice. But surely these are all lacking an empirical measure?

        Forgive my confusion; the Hansen paper is not an easy read, but I feel the word empirical is perhaps misused there.
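On the “approximately logarithmic” CO2 forcing mentioned in the comment above: the standard empirical fit (Myhre et al. 1998) makes the saturation explicit. A minimal sketch:

```python
# Standard logarithmic approximation for CO2 radiative forcing
# (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m2.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m2) relative to a reference concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling gives ~3.71 W/m2 regardless of the starting level,
# which is the "saturating absorption bands" behaviour in log form:
print(co2_forcing(560.0, 280.0))   # ~3.71
print(co2_forcing(760.0, 380.0))   # ~3.71
```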

      • Latimer

        If you have a decent climate model, shouldn’t it be able to work at all times?

        You have missed the point. Have another look at what was actually said.

  49. MattStat/MatthewRMarler

    Here is another presentation of one of my favorite topics, inaccuracy.

    The spatio-temporal average temperature is 288K. If the Earth were in equilibrium, as the usual “basic, simple science” assumes, the entire Earth would be 288K all the time. Today’s mean temp of the Arctic is 255K, which I’ll take as the approximate temp of the Antarctic, for illustrative purposes.

    The error in each case works out to 33K, which in percentage terms is 33/288 times 100%, or 11%. In lots of fields of study, an 11% error is pretty good, especially when it is near the maximum error. The squared error is 0.0121 (sq pct); the sum of the two squared errors is 0.0242, and the mean squared error, in this case, is 0.0121, and the square root of the mean squared error (RMSE) is 11%. Now, we could do this with every thermometer on earth, and every day, am and pm, and the errors range from 0% (in abs value) up to about 15%. Average the squared error over all the spatio-temporal specific measures, and the RMSE might be 5%. In many fields, that is a really good fit of the model (in this case equilibrium) to the system that is modeled.

    However, the modeled change induced by CO2 doubling is something like 1 – 3 K, in the equilibrium model. This is a change of about 0.3% to about 1%, which is much lower than the RMSE of the model. Therefore (though this is empirical, not exactly a logical deduction) it is extremely unlikely that the equilibrium model is sufficiently accurate to make an accurate calculation of the effect of CO2 doubling. Add in other things, like the possibly negative feedback of the dynamic effects of clouds, and it becomes even less reasonable to think that we have either the correct sign or the correct magnitude of the effect of doubling CO2.

    In my experience and in my reading, most people in most professions who use the results of model calculations are very uncomfortable with this kind of reasoning. In medical care, where deviations from modeled values occur all the time, the amount of random variation about the modeled value (e.g. plasma concentration of a drug following a standard dose) is consistently underappreciated.
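The percentage arithmetic in the comment above can be reproduced directly; a minimal sketch using only the figures given there (288 K mean, 255 K polar illustration, 1–3 K doubling signal):

```python
# Reproducing the percentage arithmetic in the comment, using only the
# figures given there (288 K mean, 255 K polar case, 1-3 K signal).
T_MEAN = 288.0
T_POLAR = 255.0

err = abs(T_POLAR - T_MEAN) / T_MEAN      # fractional error of the
pct = 100 * err                           # equilibrium "model": ~11%
mse = (err**2 + err**2) / 2               # two poles, same error each
rmse = 100 * mse**0.5                     # equals pct in this 2-point case

low, high = 100 * 1.0 / T_MEAN, 100 * 3.0 / T_MEAN  # doubling signal, %

print(pct, rmse, low, high)               # signal is well below the RMSE
```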

  50. Alarmist parrot that he is, lolwot doesn’t know what a CO2 signal would look like, yet maintains there is one.

  51. Old Chief: Many, many moons ago, your forefathers could communicate great distances to faraway tribes, long before paleface brought radio or telephone.

    Young brave, “Running Deer”: How did they do that, Grandfather?

    Old Chief: They built fire, took blanket, and sent “CO2 signal”

  52. Fergit the empiric data and historical record, Medieval
    Warming Period, the Little Ice Age, Twentieth Century
    warm/cool/warm, fergit PDO patterns of El Nino and
    La Nina, abrupt climate change shown in ice cores…
    Jest say:
    ‘We cannot explain the twentieth century ‘unprecedented’
    warming other than by CO2 – AGW.”

    Oh, and, P.S:
    “We DO have compelling empiric data, bristle cones
    and varve sediments, Hockey Stick and upside down
    Tiljander.”

  53. Love those grandfather stories, Max.

    Kids: Grandfather, how come yer lighting a bon fire
    and it isn’t even Guy Fawkes Night?

    Grandfather: I’m sending co2 smoke signals ter the
    guvuhment.

    Kids: Wow, what do they say?

    Grandfather: They say, ‘It’s cooling folks ……..’.

    Kids: Say, can we send some co2 signals too?

    Grandfather: Yes, me darlins, they might be beneficial.

    (Cough, cough!)

  54. Do global temperature trends over the last decade falsify climate predictions?—Knight et al.

    Observations indicate that global temperature rise has slowed in the last decade (Fig. 2.8a). The least squares trend for January 1999 to December 2008 calculated from the HadCRUT3 dataset (Brohan et al. 2006) is +0.07±0.07°C/decade, much less than the 0.18°C/decade recorded between 1979 and 2005 and the 0.2°C/decade expected in the next decade (IPCC; Solomon et al. 2007). This is despite a steady increase in radiative forcing as a result of human activities and has led some to question climate predictions of substantial twenty-first century warming (Lawson 2008; Carter 2008).

    El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008) (Fig. 2.8a). The trend in the ENSO-related component for 1999–2008 is +0.08±0.07°C/decade, fully accounting for the overall observed trend. The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00°±0.05°C/decade, implying much greater disagreement with anticipated global temperature rise.

    We can place this apparent lack of warming in the context of natural climate fluctuations other than ENSO using twenty-first century simulations with the HadCM3 climate model (Gordon et al. 2000), which is typical of those used in the recent IPCC report (AR4; Solomon et al. 2007). Ensembles with different modifications to the physical parameters of the model (within known uncertainties) (Collins et al. 2006) are performed for several of the IPCC SRES emissions scenarios (Solomon et al. 2007). Ten of these simulations have a steady long-term rate of warming between 0.15° and 0.25°C/decade, close to the expected rate of 0.2°C/decade. ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

    The 10 model simulations (a total of 700 years of simulation) possess 17 non-overlapping decades with trends in ENSO-adjusted global mean temperature within the uncertainty range of the observed 1999–2008 trend (−0.05° to 0.05°C/decade). Over most of the globe, local surface temperature trends for 1999–2008 are statistically consistent with those in the 17 simulated decades (Fig. 2.8c). Field significance (Livezey and Chen 1983) is assessed by comparing the total area of inconsistent grid boxes with the range of similar area values derived by testing the consistency of trends in each simulated decade with those in the remaining simulated decades. The 5.5% of the data area that is inconsistent in the observed case is close to the median of this range of area values, indicating the differences are not field significant. Inconsistent trends in the mid-latitude Southern Hemisphere strongly resemble the surface temperature pattern of the negative phase of the SAM (Ciasto and Thompson 2008), which did indeed show a negative trend in the last decade.

    http://www.metoffice.gov.uk/media/pdf/j/j/global_temperatures_09.pdf
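The quoted least-squares trend with its ±0.07-style uncertainty can be sketched as below. The data are synthetic stand-ins (the paper uses HadCRUT3 monthly anomalies), and a plain OLS standard error is used, which ignores the autocorrelation that a published trend uncertainty would normally account for:

```python
# Minimal sketch of a decadal least-squares trend with an uncertainty,
# in the style of the quoted +0.07 +/- 0.07 C/decade figure.
# Synthetic anomalies stand in for HadCRUT3; plain OLS errors only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120) / 12.0                     # Jan 1999 - Dec 2008, in years
y = 0.007 * t + rng.normal(0.0, 0.1, t.size)  # synthetic monthly anomalies (C)

# Ordinary least squares slope and its standard error
A = np.vstack([t, np.ones_like(t)]).T
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
slope = coef[0]
sigma2 = res[0] / (t.size - 2)                # residual variance
se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))

print(f"trend = {10 * slope:+.2f} +/- {10 * 1.96 * se:.2f} C/decade")
```

The 1.96× multiplier gives an approximate 95% interval; Knight et al.'s exact uncertainty method may differ.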

    • Observations indicate that global temperature rise has slowed in the last decade

      That is for lolwot who claims there has been no slowing

  55. Particular Physicist

    Latimer > If you have a decent climate model, shouldn’t it be able to work at all times?
    {followed up by asking for explanation of qualifications, if any}

    BBD > {ducks question}

  56. In the complete absence of a clear and direct method to measure the specific effect of AGW, climate science has understandably been forced into reliance on monstrously complex models built up from bits of physics here and there, the idea being to try and integrate them into an interdependent whole.

    So that Hansen is skeptical of models, and regards them as secondary rather than primary knowledge, is certainly very interesting. The problem seems to be that he – like everyone else – doesn’t have any primary explanation to underpin his views, including his predictions.

  57. Re the AGWSF modelling empty space and not the real gas atmosphere around us, and to put Pekka out of his misery wondering why he doesn’t have sound in his world – see my post:
    November 11, 2012 at 4:24 am
    http://judithcurry.com/2012/11/09/climate-model-discussion-thread/#comment-266473

    And, Pekka: Pekka Pirilä | November 7, 2012 at 12:52 pm
    http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/#comment-265000

    It’s essential to understand that the atmosphere is essentially empty space with molecules far apart. There’s approximately 1000 times more space per molecule than in liquid at atmospheric pressure.

    The evaporation of one molecule occurs when it happens to receive so much energy from collisions with neighboring molecules in the water that it breaks the attractive bonds. The gas around it has very little to do with that at any normal pressure. Only at very high pressures do the gas molecules get so close to each other that interaction with neighboring molecules has any significant effect.
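    As an aside, the "approximately 1000 times more space per molecule" figure quoted above is easy to sanity-check from molar volumes: roughly 22.4 litres per mole for an ideal gas at 0°C and 1 atm, versus about 18 mL per mole for liquid water.

```python
# Back-of-envelope check: space per molecule in a gas vs a liquid,
# using molar volumes (ideal gas at 0 degC and 1 atm vs liquid water).
molar_volume_gas_L = 22.4       # litres per mole, ideal gas at 0 C, 1 atm
molar_volume_liquid_L = 0.018   # litres per mole, liquid water (~18 mL)

ratio = molar_volume_gas_L / molar_volume_liquid_L
print(round(ratio))  # roughly 1200, i.e. "about 1000 times" as stated
```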

    Pekka is describing a fictional world created out of the imaginary ideal gas molecule, which has none of the properties of a real gas molecule. The terms ideal and real are technical terms in this subject. An ideal gas doesn’t exist. An ideal gas doesn’t have volume, attraction or weight – it is not subject to gravity. An ideal gas is an imaginary hard dot of nothing with no inelastic collisions, zipping around at great speeds bouncing off the others in all that empty space AGWSF says is your atmosphere.

    In other words, you don’t have any atmosphere at all in your world.

    Our real world atmosphere is the heavy, voluminous real gas Air, this is a fluid, all gases and liquids are fluids.

    At atmospheric pressure these real world real gas molecules have real volume; they are constrained by the other real gas molecules around them, so they do not “travel at great speeds through empty space under their own molecular momentum to diffuse and thoroughly mix”, because they don’t have empty space around them, but other molecules with volume which stop them doing this. AGWScienceFiction has no Van der Waals.

    You have no evaporation as per real world physics; you have created an imaginary scenario of ‘lots of collisions by ideal gas molecules’, without explaining how and why such an event could even happen.

    Real world evaporation of real gas water molecules is because they are lighter than the fluid real gas above them at 0°C, and when these molecules get heated they become even lighter because less dense and so rise in Air and then colder more dense real gas molecules sink beneath, this is convection in fluids.

    Convection in fluids is how we get our winds – when volumes of the fluid gas air are differentially heated and become less dense and lighter than air and rise, and volumes of colder, denser real gas molecules of fluid air sink displacing the hotter, flowing beneath. Hot Air Rises, Cold Air Sinks.

    This is how we get our WINDS. Wind is a convection current in the fluid real gas Air, of a ‘package of air’, a volume of air on the move. On the same principle as convection currents in the fluid liquid water ocean, where they are called Currents.

    This is how we get our great Equator to Poles Winds, when the land and water are heated intensely at the equator and heat the volumes of air above. This heated volume of fluid gas air becomes less dense and rises and flows towards the poles as drawn there when the heavier, denser colder volumes of the fluid gas air at the poles sinks beneath displacing the hotter and flows to the equator.

    It takes great heating of land and water at the equator to get us our great equator to poles wind system.

    The real world, where gases are real and have volume and weight, is how we get clouds: real gas fluid water vapour rises and with its great heat capacity takes heat away from the surface, and in the colder heights loses this heat and condenses to liquid water or ice, forming clouds.

    This does not happen in an atmosphere of empty space. It takes real gas which has volume to effect these phenomena we observe every day in our real atmosphere. Our real world weather systems.

    Can you work out now how we get sound?

    Think of how a wave travels in the ocean, like a Mexican wave. The water doesn’t move from one side of the ocean to the other, it is the disturbance to the fluid water which is moving, the energy disturbing the heavy fluid as it travels through.

    Sound travels similarly in the fluid gas air: as the sound travels it causes the molecules of air to vibrate and bump into their neighbours, setting them into vibration; they in turn pass it on, and as the sound passes the molecules settle back into their place. The molecules of air do not move along with it – that is wind.

    The Comic Cartoon Greenhouse Effect world does not have the real gas atmosphere; it has substituted an ideal gas in empty space.

    This empty space ideal gas fictional world is why the models can’t model the Water Cycle and why they have no rain in the Carbon Cycle – all pure clean rain is carbonic acid, from the attraction between water and carbon dioxide – because ideal gases have no weight or attraction or volume and are not subject to gravity. They don’t exist.

  58. It is amazing how people cannot see the ‘bleeding obvious’ even when it is right in front of them.

    The hind cast of the models, against the temperature record of the 20th century, is indeed very impressive.

    It is that very fact which proves they have to be wrong!

    They are not comparing like with like. An apple doesn’t equal an orange no matter which way you cut it.

    The models are set up to produce the climate signal. However, there are some weather signals, such as the PDO-influenced ENSO conditions, that introduce medium-term warming and cooling signals, and they can be quite large as we know. They are cyclical, so in the long term they average out and are not an additive effect, and the long-term climate signal will always emerge from this masking weather noise.

    Now the models average out these temporary weather forcings so they are only showing the true climate signal. Well, I can understand that it seems a reasonable thing to do; it makes them less complicated.

    How, then, can a model set to match the climate signal only match the climate-plus-weather signal – which is what the temperature record is – so well?

    We are told that the reason the models are not representing the 21st century record very well is that ‘weather’ conditions are temporarily masking the true climate signal, which will emerge when the weather conditions cycle.

    Well I could accept that as an explanation if it wasn’t that the hind casts of the model match the climate plus weather signal so well.

    We know that when James Hansen made his famous predictions to Congress in 1988, he didn’t know he was comparing a period in the warm end of a sixty-year PDO weather cycle with periods in the cool end. The PDO cycle was not identified until 1996.

    Surely the hind cast of the models should show lengthy periods of time where the climate signal is moving away from the climate-plus-weather signal?

    The fact that it doesn’t suggests to me that the modelers wanted to tune their models as close as possible to the temperature record so that people would have high confidence in them. However they overlooked, in their hubris, that if they were truly accurate they shouldn’t match so well.

    So it appears that the only way we could solve this conundrum is to say that during the period of the hind cast, weather was never anything other than a very minor force, while in the period of the forward cast, weather has transformed itself into a major masking force.

    Sorry I can’t buy it.

    Alan
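    Alan's tension – an ensemble that averages out 'weather' arguably should not track the weather-laden record too closely – can be illustrated with a toy signal. All numbers below are invented: a linear 'climate' trend, a PDO-like 60-year oscillation with a random phase per run, and interannual noise.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2001)

# Invented "climate signal": a steady 0.7 degC/century trend.
trend = 0.007 * (years - years[0])

def one_run():
    # Invented "weather": a PDO-like 60-year oscillation with a
    # random phase per run, plus interannual noise.
    phase = rng.uniform(0.0, 2.0 * np.pi)
    osc = 0.1 * np.sin(2.0 * np.pi * (years - years[0]) / 60.0 + phase)
    noise = rng.normal(0.0, 0.05, size=years.size)
    return trend + osc + noise

ensemble = np.array([one_run() for _ in range(20)])
ens_mean = ensemble.mean(axis=0)

# The ensemble mean tracks the bare trend far better than any single
# run, because oscillations with random phases largely cancel.
err_mean = np.abs(ens_mean - trend).mean()
err_runs = np.abs(ensemble - trend).mean()
print(err_mean, err_runs)
```

    The point of the sketch: an ensemble mean is naturally much closer to the pure climate signal than to any one weather-laden realisation, which is why a multi-model mean that hugs the observed record very tightly would indeed be worth a second look.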

    • I think Francis Crick once said words to the effect of
      “A model that reproduces all of the data is probably wrong, because some of the data is probably wrong.”

      That may not be much help in many circumstances, but it is cautionary.

  59. Stephen Pruett

    Given that there are 20+ models, the outputs of which vary substantially, how can models be useful to policy? The figures I have seen illustrating the models show climate change ranging from minimal and likely inconsequential (or even beneficial) to alarming. However, the level of uncertainty is huge. I think climate scientists convinced themselves that, because they couldn’t account for late 20th century warming without CO2 as a major component and because CO2 is physically a greenhouse gas, some of the models must be right. However, the uncertainty of the models, and the fact that we don’t know how to apply the physics of the greenhouse effect quantitatively to the whole atmosphere, should mean that confident statements of future warming (as in IPCC ARs) are not warranted. Of course, the models also don’t work without strongly positive feedbacks, which have not been established empirically either, and this should further limit confidence in the models.

    In any other field of research, this combination of uncertainties (multiple models with widely varying outputs because of reasonable but different initial assumptions, uncertainty with regard to CO2 sensitivity, and uncertainty with regard to feedbacks, not to mention unknown unknowns) would mean that experts who expressed confidence in the models would be challenged by the majority of people in the field. Yet when Dr. Curry challenges the certainty of AGW – not the overall concept, mind you, just the amount and rate – she is viewed by many climate scientists as an enemy. This is not normal science.

    • This is not normal science.

      Yes, that is about all we can say for sure about establishment climate science today. It’s science tainted with another agenda – to induce belief in a pre-committed outcome, rather than an attempt to get to the bottom of things. Politically financed, politically motivated.

  60. Tuned or not tuned? Am I missing something?

    From “Tuning the Climate”: http://www.agu.org/journals/ms/ms1208/2012MS000154/2012MS000154.pdf

    “The MPI-ESM was NOT [[my emphasis]] tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen.”

    To summarize, all the models can now reproduce the observed 20th C temperatures. But they also quickly diverge on predictions for future temperatures, mainly because of a wide range of CO2 climate sensitivities (1.5-4.5 K per doubling).

    On the face of it, this makes the agreement of models with very different climate sensitivities for the 20th C temperature record a bit surprising. The authors explain it thus:

    “Models that have a high climate sensitivity tend to have a weak total anthropogenic forcing, and vice-versa. A large part of the variability in inter-model spread in 20th century forcing was further found to originate in different aerosol forcings. It seems unlikely that the anti-correlation between forcing and sensitivity simply happened by chance. Rational explanations are that 1) either modelers somehow changed their climate sensitivities, 2) deliberately chose suitable forcings, or 3) that there exists an intrinsic compensation such that models with strong aerosol forcing also have a high [[surely LOW?]] climate sensitivity. Support for the latter is found in studies showing that parametric model tuning can influence the aerosol forcing [Lohmann and Ferrachat, 2010; Golaz et al., 2011].”

    So 20th C warming can be explained just as well by aerosols as by GHGs (anthropogenic in both cases), at least for climate sensitivities as low as 1.5 K/doubling. Effectively all the models are telling us is: “There are a wide range of assumptions about feedback processes that are consistent with both known physics and the 20th C temperature record, resulting in climate sensitivities between 1.5 and 4.5 K/doubling.”

    Though one wonders if this effect extends down to sensitivities of 1.1 (no feedback) or even 0 (no additional greenhouse effect)? The IPCC would say no. But if we factor in influences such as multi-decadal ocean cycles (that could explain both some of the late 20th C warming and the absence of early 21st C warming) then it does not look like the GCMs rule much out.
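    The forcing–sensitivity trade-off the paper describes is easy to illustrate with a zero-dimensional equilibrium energy balance. The forcing values below are invented for illustration; only the 3.7 W/m² per CO2 doubling figure is a standard number.

```python
F2X = 3.7  # W/m^2 per CO2 doubling (standard value)

def equilibrium_warming(sensitivity_K, ghg_forcing, aerosol_forcing):
    """Equilibrium warming of a zero-dimensional energy balance:
    dT = S * F_net / F2x, with aerosol forcing entered as negative."""
    return sensitivity_K * (ghg_forcing + aerosol_forcing) / F2X

GHG = 2.5  # illustrative 20th-century GHG forcing, W/m^2 (invented)

# High sensitivity paired with strong negative aerosol forcing...
high_s = equilibrium_warming(4.5, GHG, aerosol_forcing=-1.9)
# ...matches low sensitivity paired with weak aerosol forcing.
low_s = equilibrium_warming(1.5, GHG, aerosol_forcing=-0.7)
print(high_s, low_s)  # both about 0.73 K
```

    With these (made-up) aerosol forcings, a 4.5 K/doubling model and a 1.5 K/doubling model produce the same 20th-century warming – which is exactly why the hindcast fit cannot by itself pin down the sensitivity.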

  61. Tomas Milanovic

    If these lower resolution driving models do not have enough resolution to realistically simulate regimes, then the boundary conditions provided to the regional climate model could be systematically erroneous. It is therefore likely that the embedded regional model may represent an unrealistic realization of regional climate and variability.

    This is what I have been saying for many years, and this single point represents for me the very heart of the problems with numerical models.
    It is so important that I will translate this issue into everyday language for all those who have not had time to read the paper(s) dealing with this matter.
    The quote above is saying:

    1) Models can only see spatial events larger than 100 km (or more, depending on resolution).
    2) But what happens below this scale is vitally important (viscosity and storms happen there).
    3) So we will use equations (E) to compute what happens at the big (resolved) scales and suppose that this solution is near to the real solution of (E) that we would obtain with an infinitely small space step (we are talking fractions of a mm here) if we had a computer able to do this.
    4) THEN we will use this solution as a boundary condition for a square of 100×100 km and solve (E) again, but with a much higher resolution (e.g. a 1×1 km grid). Because the large-scale solution was supposed correct, for this particular 100×100 km square the “regional” solution on the 1×1 km grid is also necessarily correct.

    The quote above is saying that the 100 km solution may be far from the real solution of (E). In that case it follows that the boundary conditions for higher resolutions are wrong too. Ergo the “regional solution” is wrong.

    I consider that the low resolution solution is necessarily far from the real solution.
    Spatial structures in fluids are emergent features resulting from interactions between local, probably chaotic, oscillators. This is mathematically obvious because these dynamics are described by the Navier-Stokes PDEs, which impose that large scales are defined by small scales.

    But there is also a whole discipline dealing with spatio-temporal chaos: CML (coupled map lattices).
    For instance in this paper http://amath.colorado.edu/faculty/juanga/Papers/PhysicaD.pdf
    it is proven that incoherent dynamical systems will form coherent structures depending on the properties of the local oscillators and on the topology of the interactions.
    It is clear that these insights and results are only possible in a bottom (local) up (global) approach.
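    For readers who have not met coupled map lattices, a minimal example is a ring of chaotic logistic maps with diffusive nearest-neighbour coupling. The parameters below are illustrative, not taken from the cited paper.

```python
import numpy as np

def cml_step(x, eps=0.4, r=3.9):
    """One update of a ring of diffusively coupled logistic maps:
    x_i <- (1-eps)*f(x_i) + (eps/2)*(f(x_{i-1}) + f(x_{i+1}))."""
    f = r * x * (1.0 - x)  # each site alone is a chaotic logistic map
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, size=200)  # incoherent initial field

rough_initial = np.abs(np.diff(x)).mean()
for _ in range(500):
    x = cml_step(x)
rough_final = np.abs(np.diff(x)).mean()

# Comparing nearest-neighbour differences before and after gives a
# crude measure of whether coherent spatial structure has formed.
print(rough_initial, rough_final)
```

    This is the bottom-up picture in miniature: the large-scale pattern is whatever emerges from the local dynamics plus the coupling topology, and cannot be prescribed from the top down.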

    Recognizing that the system we deal with is a case of deterministic spatio-temporal chaos of course also immediately disqualifies all naive global approaches based on equilibriums or steady states.
    Any progress in the climate dynamics and most importantly any “regional” prediction can only be achieved when one takes the system’s properties seriously and uses the tools necessary to deal with dissipative non linear systems far from equilibrium.

    • Tomas Milanovic would have you believe that if the sun’s output radiation level were to double over time, the earth’s climate would still be at the mercy of chaotic dynamics and we could not detect the shift in warming caused by the increase in input forcing.

      Experienced physicists know how to construct very similar “thought experiments” to puncture and therefore deflate weak arguments.

      Milanovic has no comeback to this except to say the forcing is hypothetical. But if he says this, he cannot defend his chaos argument. He is in what we refer to as a logical trick box.

      It also explains why he only rarely comments here. It’s not good for his credibility to spout such easily refutable theories. Contrast this to Chief, who has the same argument, but has no credibility to lose, as he didn’t have any to start with.

      • WebHubTelescope | November 14, 2012 at 10:38 pm | Reply
        “Tomas Milanovic would have you believe that if the sun’s output radiation level were to double over time, the earth’s climate would still be at the mercy of chaotic dynamics and we could not detect the shift in warming caused by the increase in input forcing.

        Experienced physicists know how to construct very similar “thought experiments” to puncture and therefore deflate weak arguments. ”

        We might be able to detect the shift in warming; but that would still not enable us to build a GCM with long term predictive skill (because climate is chaotic, and GCMs operate with finite resolution. See the definition of chaos.).

        Show how a GCM with long term predictive skill can be built. Please.
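        The "finite resolution vs. chaos" point is the classic Lorenz-63 demonstration: two trajectories differing by 10⁻⁹ in one coordinate soon bear no resemblance to each other. A crude forward-Euler sketch (step size and run length chosen for illustration):

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # tiny initial-condition error

gap_start = np.linalg.norm(a - b)
for _ in range(3000):               # ~30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
gap_end = np.linalg.norm(a - b)

# The error grows by many orders of magnitude: individual trajectories
# are unpredictable, even though the attractor's statistics are stable.
print(gap_start, gap_end)
```

        Note this only shows that trajectory-level prediction is limited; it says nothing either way about whether time-averaged statistics respond detectably to a change in forcing, which is WebHubTelescope's point above.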

      • Chief Hydrologist

        You are the one with no knowledge and no credibility. You are a pathetic little twerp.

  62. Did anyone mention this paper?

    http://www.ldeo.columbia.edu/~jsmerdon/papers/2012_jclim_karnauskasetal.pdf

    which clearly shows that projections made by climate models are useless at the moment

  63. Schrodinger's Cat

    The warming last century correlated with CO2 concentrations and scientists concluded that there was cause and effect on the basis that such warming was unprecedented and they couldn’t think what else could have caused it. This amazing scientific proof created AGW. Models were then constructed based on these assumptions.

    Several factors were ignored. The cause and effect is the reverse, with warming oceans de-gassing CO2 due to reduced solubility. The warming was not unprecedented as demonstrated by the MWP which also suggests that such warming does not require rapidly increasing CO2. There must be another natural cause which could well be solar activity, cloudiness or a factor not yet understood.

    Now the warming has stopped, which means so has the correlation. Do we know why it has stopped? What if it starts cooling? Will there be furious tweaking? Perhaps CO2 also causes invisible volcanoes.

    Perhaps the warming was natural after all and the models are completely wrong. Real scientists would consider this possibility.

    • There’s already furious tweaking and much more coming. By the way, some scientists did consider the possibility:

      ‘What if climate change turns out to be a natural fluctuation? They’ll kill us all.’

  64. What fraction of the head of a pin will cover the soles of the Angel standing on it….? When the models start making significant predictions that come true, I will bow down and scrape before them.

    But they are not. Instead we get the increasing prominence given to the arbitrarily defined “climate sensitivity”: As time progresses the models are failing worse and worse, so we see reversion to the shorter time scale of the first derivative which has a greater chance of being temporarily “correct” by sheer chance.

    Perhaps the endgame is just over the horizon.

  65. Past next Monday, meteorologists avoid predicting the weather – every extra day added = less reliable. ”Predicting” climate 50 years in advance is no more than a tarot-card job – when the phony ”GLOBAL warming” is added to climate change, that’s the ”SMOKING GUN”.

    Climatic changes are constant – good / bad, small / big climatic changes – but they don’t need any PHONY GLOBAL WARMINGS!!! Localized warmings are NEVER global. Therefore: Warmists & Fakes don’t have a case!