Reply to Patrick Brown’s response to comments on his Nature article

by Nic Lewis

My reply to Patrick Brown’s response to my comments on his Nature article.

Introduction

I thank Patrick Brown for his detailed response (also here) to statistical issues that I raised in my critique “Brown and Caldeira: A closer look shows global warming will not be greater than we thought” of his and Ken Caldeira’s recent paper (BC17).[1] The provision of more detailed information than was given in BC17, and in particular the results of testing using synthetic data, is welcome. I would reply as follows.

Brown comments that I suggested that rather than focusing on the simultaneous use of all predictor fields, BC17 should have focused on the results associated with the single predictor field that showed the most skill: the magnitude of the seasonal cycle in outgoing longwave radiation (OLR). He goes on to say: “Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract, for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. That is because, as I stated, I disagree with BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models necessarily also applies in the real climate system. I will return to that point later. But first I will discuss the statistical issues.

Statistical issues

When there are many more predictor variables than observations, the dimensionality of the predictor information has to be reduced in some way to avoid over-fitting. There are a number of statistical approaches to achieving this using a linear model, of which the partial least squares (PLS) regression method used in BC17 is arguably one of the best, at least when its assumptions are satisfied. All methods estimate a statistical model fit that provides a set of coefficients, one for each predictor variable.[2] The general idea is to preserve as much of the explanatory power of the predictors as possible without over-fitting, thus maximizing the fit’s predictive power when applied to new observations.
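
To make the setup concrete, here is a minimal sketch in Python using scikit-learn’s PLSRegression; the dimensions and variable names are invented for illustration and are not BC17’s data or code:

```python
# Minimal PLS regression sketch: many predictor variables (grid cells),
# few observations (climate models). Purely illustrative -- the names and
# dimensions are invented, not BC17's data or code.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_models, n_gridcells = 36, 10000             # far more predictors than observations
X = rng.normal(size=(n_models, n_gridcells))  # predictor fields, one row per model
beta_true = np.zeros(n_gridcells)
beta_true[:50] = 1.0                          # only a few cells carry signal
y = X @ beta_true + rng.normal(scale=5.0, size=n_models)  # predictand, e.g. 2090 warming

pls = PLSRegression(n_components=3, scale=True)
pls.fit(X, y)

# The fit provides one coefficient per predictor variable plus an intercept,
# as described in footnote [2]; predictions are linear in the predictors.
coefs = np.ravel(pls.coef_)
y_hat = pls.predict(X).ravel()
```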

If the PLS method is functioning as intended, adding new predictors should not worsen the predictive skill of the resulting fitted statistical model. That is because, if those additional predictors contain useful information about the predictand(s), that information should be incorporated appropriately, while if the additional predictors do not contain any such information they should be given zero coefficients in the model fit. Therefore, the fact that, in the highest signal-to-noise-ratio RCP8.5 2090 case focussed on in both BC17 and my article, the prediction skill obtained using just the OLR seasonal cycle predictor field is very significantly reduced by adding the remaining eight predictor fields indicates that something is amiss.

Brown says that studies are often criticized for highlighting the single statistical relationship that appears to be the strongest while ignoring or downplaying weaker relationships that could have been discussed. However, the logic with PLS is to progressively include weaker relationships but to stop at the point where they are so weak that including them worsens predictive accuracy: some relationships are sufficiently weak that they add too much noise relative to the information useful for prediction. My proposal of just using the OLR seasonal cycle to predict RCP8.5 2090 temperature was accordingly in line with the logic underlying PLS – it was not a case of simply ignoring weaker relationships.

Indeed, the first reference that BC17 give for the PLS method (de Jong, 1993) justified PLS by referring to a paper[3] that specifically proposed carrying out the analysis in steps, selecting one variable/component at a time and not adding a further one if it worsened the statistical model fit’s predictive accuracy. At the predictor field level, that strongly suggests that, in the RCP8.5 2090 case, when starting with the OLR seasonal cycle field one would not go on to add any of the other predictor fields, as in all cases doing so worsens the fit’s predictive accuracy. And there would be no question of using all predictor fields simultaneously, since doing so also worsens predictive accuracy compared to using just the OLR seasonal cycle field.
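
The add-one-at-a-time principle is straightforward to express in code. The sketch below (the same illustrative Python setup; my own rendering of the principle, not BC17’s procedure) adds PLS components one at a time and stops as soon as leave-one-out RMS prediction error worsens:

```python
# Add-one-at-a-time component selection for PLS: keep adding components
# only while leave-one-out RMS prediction error improves. A sketch of the
# selection principle only, not BC17's procedure.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_rmse(X, y, n_components):
    """Leave-one-out RMS prediction error for a PLS fit with n_components."""
    pls = PLSRegression(n_components=n_components)
    pred = cross_val_predict(pls, X, y, cv=LeaveOneOut())
    return float(np.sqrt(np.mean((y - np.ravel(pred)) ** 2)))

def select_n_components(X, y, max_components=10):
    """Stop at the first component whose addition worsens LOO RMSE."""
    best_rmse, best_n = np.inf, 0
    for n in range(1, max_components + 1):
        rmse = loo_rmse(X, y, n)
        if rmse >= best_rmse:   # adding this component worsened prediction: stop
            break
        best_rmse, best_n = rmse, n
    return best_n, best_rmse
```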

In principle, even when given all the predictor fields simultaneously, PLS should have been able to weight the predictor variables optimally, building composite components in order of decreasing predictive power, to which the add-one-at-a-time principle could be applied. However, it evidently was unable to do so in the RCP8.5 2090 case or other cases. I can think of two reasons for this. One is that the measure of prediction accuracy used – RMS prediction error when applying leave-one-out cross-validation – is imperfect. But I think that the underlying problem is the non-satisfaction of a key assumption of the PLS method: that the predictor variables are free of uncertainty. Here, although the CMIP5-model-derived predictor variables are accurately measured, they are affected by the GCMs’ internal variability. This uncertainty-in-predictor-values problem was made worse by the decision in BC17 to take their values from a single simulation run by each CMIP5 model rather than averaging across all its available runs.
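
The consequences of noisy predictors are easy to exhibit with synthetic data. In the hedged sketch below (illustrative Python; the construction is mine, not Brown’s synthetic test), a single informative predictor field gives good leave-one-out skill, while adding predictor noise and eight uninformative fields typically degrades it instead of those fields receiving zero weight:

```python
# Synthetic demonstration that noise in the predictors (violating PLS's
# error-free-predictor assumption) can make extra fields degrade skill
# rather than receive zero weight. The construction is mine, not Brown's test.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n_models, cells_per_field = 36, 500
signal = rng.normal(size=(n_models, cells_per_field))   # one informative field
y = signal[:, :25].sum(axis=1) + rng.normal(size=n_models)

# Internal-variability-like noise on the informative field, plus eight
# entirely uninformative fields.
X_one = signal + rng.normal(scale=0.5, size=signal.shape)
X_all = np.hstack([X_one, rng.normal(size=(n_models, 8 * cells_per_field))])

def loo_rmse(X, y):
    pred = cross_val_predict(PLSRegression(n_components=2), X, y, cv=LeaveOneOut())
    return float(np.sqrt(np.mean((y - np.ravel(pred)) ** 2)))

print("one field :", loo_rmse(X_one, y))
print("all fields:", loo_rmse(X_all, y))   # typically worse, not equal
```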

Brown claims (a) that each model’s own value is included in the multi-model average, which gives the multi-model average an inherent advantage over the cross-validated PLS regression estimate, and (b) that this means PLS regression is able to provide meaningful Prediction ratios even when the Spread ratio is near or slightly above 1. Point (a) is true, but the effect is very minor. Based on the RCP8.5 2090 predictions, it would normally cause a 1.4% upwards bias in the Spread ratio. Since Brown did not adjust for the difference of one in the degrees of freedom involved, the bias is twice that level – still under 3%. Brown’s claim (b) – that PLS regression is able to provide meaningful Prediction ratios even when the Spread ratio is at, or virtually at, the level indicating a skill no higher than always predicting warming equal to the mean value for the models used to estimate the fit – is self-evidently without merit.
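
To spell out the arithmetic behind these percentages (a reconstruction on my part, assuming N = 36 models and independent, identically distributed model deviations with variance σ², which reproduces the quoted figures):

```latex
% Benchmark error when predicting model i's warming x_i by the mean of the
% other N-1 models, versus its deviation from the full in-sample mean,
% for iid model deviations with variance \sigma^2:
\mathbb{E}\bigl[(x_i - \bar{x}_{-i})^2\bigr] = \sigma^2 \,\frac{N}{N-1},
\qquad
\mathbb{E}\bigl[(x_i - \bar{x})^2\bigr] = \sigma^2 \,\frac{N-1}{N}.
% The in-sample mean thus understates a fair leave-one-out benchmark RMS error
% by the factor N/(N-1) \approx 1.029 for N = 36 (under 3%); with the
% one-degree-of-freedom (Bessel) adjustment the remaining factor is
% \sqrt{N/(N-1)} \approx 1.014 (about 1.4%).
```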

As Brown indicates, adding random noise affects correlations, and can produce spurious correlations between unrelated variables. His test results using synthetic data are interesting, although they only show Spread ratios. They show that one of the nine synthetic predictor fields produced a reduction in the Spread ratio below one that was very marginally – 5% – greater than that when using all nine fields simultaneously. But the difference I highlighted, in the highest-signal RCP8.5 2090 case, between the reduction in Spread ratio using just the OLR seasonal cycle field and that using all predictors simultaneously was an order of magnitude larger – 40%. It seems very unlikely that the superior performance of the OLR seasonal cycle on its own arose by chance.

Moreover, the large variation in Spread ratios and Prediction ratios between different cases and different (sets of) predictors calls into question the reliability of estimation using PLS. In view of the non-satisfaction of the PLS assumption of no errors in the predictor variables, a statistical method that does take account of errors in them would arguably be more appropriate. One such method is the RegEM (regularized expectation maximization) algorithm, which was developed for use in climate science.[4] The main version of RegEM uses ridge regression, with the ridge coefficient (the inverse of which is analogous to the number of retained components in PLS) being chosen by generalized cross-validation. Ridge regression RegEM, unlike the TTLS variant used by Michael Mann, produces very stable estimation. I have applied RegEM to BC17’s data in the RCP8.5 2090 case, using all predictors simultaneously.[5] The resulting Prediction ratio was 1.08 (8% greater warming), well below the comparative 1.12 value Brown arrives at (for grid-level standardization). And using just the OLR seasonal cycle, the excess of the Prediction ratio over one was only half that for the comparative PLS estimate.
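
For reference, here is a minimal sketch of ridge regression with the ridge parameter chosen by generalized cross-validation (GCV), the selection mechanism used by the main (ridge) version of RegEM. It illustrates the technique via the SVD on synthetic data; it is not Schneider’s RegEM code:

```python
# Ridge regression with the ridge parameter chosen by generalized
# cross-validation (GCV), the selection mechanism used by the main (ridge)
# version of RegEM. An SVD-based illustration, not Schneider's RegEM code.
import numpy as np

def ridge_gcv(X, y, lambdas):
    """Return (gcv score, lambda, coefficients) minimizing the GCV criterion."""
    n = X.shape[0]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    best = (np.inf, None, None)
    for lam in lambdas:
        shrink = s**2 / (s**2 + lam)      # per-component shrinkage factors
        y_hat = U @ (shrink * Uty)        # ridge fitted values
        df = shrink.sum()                 # effective degrees of freedom, tr(H)
        gcv = np.mean((y - y_hat) ** 2) / (1.0 - df / n) ** 2
        if gcv < best[0]:
            beta = Vt.T @ ((s / (s**2 + lam)) * Uty)
            best = (gcv, lam, beta)
    return best

# Usage on synthetic, centred data (centring matters for ridge):
rng = np.random.default_rng(2)
X = rng.normal(size=(36, 200))
y = X[:, :5].sum(axis=1) + rng.normal(size=36)
gcv, lam, beta = ridge_gcv(X - X.mean(axis=0), y - y.mean(),
                           lambdas=np.logspace(-2, 4, 25))
```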

Issues with the predictor variables and the emergent constraints approach

I return now to BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models also applies in the real climate system. They advance various physical arguments for why this might be the case in relation to their choice of predictor variables. They focus on the climatology and seasonal cycle magnitude predictors because they find that, compared with the monthly variability predictors, these have PLS loading patterns more similar to those obtained when targeting shortwave cloud feedback, the prime source of intermodel variation in ECS.

There are major problems in using climatological values (mean values in recent years) for OLR, OSR and the TOA radiative imbalance N. Most modelling groups target agreement of simulated climatological values of these variables with observed values (very likely spatially as well as in the global mean) when tuning their GCMs, although some do not do so. Seasonal cycle magnitudes may also be considered when tuning GCMs. Accordingly, how close values simulated by each model are to observed values may very well reflect whether and how closely the model has been tuned to match observations, and not be indicative of how good the GCM is at representing the real climate system, let alone how realistic its strength of multidecadal warming in response to forcing is.

There are further serious problems with use of climatological values of TOA radiation variables. First, in some CMIP5 GCMs substantial energy leakages occur, for example at the interface between their atmospheric and ocean grids.[6] Such models are not necessarily any worse in simulating future warming than other models, but they need (to be tuned) to have TOA radiation fluxes significantly different from observed values in order for their ocean surface temperature change to date, and in future, to be realistic.

Secondly, at least two of the CMIP5 models used in BC17 (NorESM1-M and NorESM1-ME) have TOA fluxes and a flux imbalance that differ substantially from CERES observed values, but it appears that this merely reflects differences between derived TOA values and actual top-of-model values. There is very little flux imbalance within the GCM itself.[7] Therefore, it is unfair to treat these models as having lower fidelity – as BC17’s method does for climatology variables – on account of their TOA radiation variables differing, in the mean, from observed values.

Thirdly, most CMIP5 GCMs simulate too cold an Earth: their GMST is below the actual value, by up to several degrees. It is claimed, for instance in IPCC AR5, that this does not affect their GMST response to forcing. However, it does affect their radiative fluxes. A colder model that simulates TOA fluxes in agreement with observations should not be treated as having good fidelity. With a colder surface its OLR should be significantly lower than observed, so if it is in line then either the model has compensating errors or its OLR has been tuned to compensate, either of which indicates its fidelity is poorer than it appears to be. Moreover, complicating the picture, there is an intriguing, non-trivial correlation between preindustrial absolute GMST and ECS in CMIP5 models.

Perhaps the most serious shortcoming of the predictor variables is that none of them are directly related to feedbacks operating over a multidecadal scale, which (along with ocean heat uptake) is what most affects projected GMST rise to 2055 and 2090. Predictor variables that are related to how much GMST has increased in the model since its preindustrial control run, relative to the increase in forcing – which varies substantially between CMIP5 models – would seem much more relevant. Unfortunately, however, historical forcing changes have not been measured for most CMIP5 models. Although one would expect some relationship between seasonal cycle magnitude of TOA variables and intra-annual feedback strengths, feedbacks operating over the seasonal cycle may well be substantially different from feedbacks acting on a multidecadal timescale in response to greenhouse gas forcing.
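
For concreteness, the kind of metric this points towards is the standard energy-budget relation, which diagnoses a model’s effective sensitivity from its simulated warming relative to its forcing change (a textbook formula given here for illustration; it is not a calculation performed in BC17):

```latex
% Effective climate sensitivity diagnosed from a model's own history, where
% \Delta T is the change in GMST, \Delta F the change in forcing, \Delta N the
% change in TOA radiative imbalance and F_{2\times} the forcing from doubled CO2:
S_\mathrm{eff} = \frac{F_{2\times}\,\Delta T}{\Delta F - \Delta N}
```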

Finally, a recent paper by scientists at GFDL laid bare the extent of the problem with the whole emergent constraints approach. They found that, by a simple alteration of the convective parameterization scheme, they could engineer the climate sensitivity of the GCM they were developing, varying it over a wide range, without being able to say that one model version showed greater fidelity in representing recent climate system characteristics than another version with a very different ECS.[8] The conclusion from their Abstract is worth quoting: “Given current uncertainties in representing convective precipitation microphysics and the current inability to find a clear observational constraint that favors one version of the authors’ model over the others, the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.” This strongly suggests that at present emergent constraints cannot offer a reliable insight into the magnitude of future warming. And that is before taking account of the possibility that there may be shortcomings common to all or almost all GCMs that lead them to misestimate the climate system response to increased forcing.

[1] Patrick T. Brown & Ken Caldeira, 2017. Greater future global warming inferred from Earth’s recent energy budget. Nature, doi:10.1038/nature24672.

[2] The predicted value of the predictand is the sum of the predictor variables each weighted by its coefficient, plus an intercept term.
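
In symbols (a generic rendering of this description, with β₀ the intercept and β_j the coefficient on predictor x_j):

```latex
\hat{y} \;=\; \beta_0 + \sum_{j=1}^{p} \beta_j\, x_j
```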

[3] A. Höskuldsson, 1992. The H-principle in modelling with applications to chemometrics. Chemometrics and Intelligent Laboratory Systems, 14, 139–153.

[4] Schneider, T., 2001: Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values. J. Climate, 14, 853–871.

[5] Due to memory limitations I had to reduce the longitudinal resolution by a factor of three when using all predictor fields simultaneously. Note that RegEM standardizes all predictor variables to unit variance.

[6] Hobbs et al., 2016. An Energy Conservation Analysis of Ocean Drift in the CMIP5 Global Coupled Models. J. Climate, doi:10.1175/JCLI-D-15-0477.1.

[7] See discussion following this blog comment.

[8] Ming Zhao et al., 2016. Uncertainty in model climate sensitivity traced to representations of cumulus precipitation microphysics. J. Climate, 29, 543–560.

286 responses to “Reply to Patrick Brown’s response to comments on his Nature article”

  1. Does the raw temperature data show a decreasing trend for the century or not?

    • When one is only curious about one fact in order to draw a sweeping conclusion, it is usually a sign they are willing to ignore any number of others. “We’ll give him a fair trial and then we’ll hang ’em.”

      If the slope for 100-year estimated GMST were negative that would be even more of a headline — “Man Causing Ice-Age.” Determining if the trend is insignificant gets little grant funding or headlines. People want their work to be important.

    • Last year, the sixth day of Christmas saw a thaw on Santa’s doorstep.

      But this year, polar diplomacy has chilled out.

  2. David L. Hagen

    Fuel supply constraints make RCP8.5 a highly unrealistic scenario
    Thanks Nic for exposing statistical weaknesses in Brown et al.
    A far greater challenge is the extremely unrealistic assumptions about the availability of fossil fuels underlying the IPCC’s RCP8.5. See, e.g.:

    Ritchie, J., & Dowlatabadi, H. (2017) Why do climate change scenarios return to coal? Energy, 140, 1276-1291.
    Abstract

    The following article conducts a meta-analysis to systematically investigate why Representative Concentration Pathways (RCPs) in the Fifth IPCC Assessment are illustrated with energy system reference cases dominated by coal. These scenarios of 21st-century climate change span many decades, requiring a consideration of potential developments in future society, technology, and energy systems. To understand possibilities for energy resources in this context, the research community draws from Rogner (1997) which proposes a theory of learning-by-extracting (LBE). The LBE hypothesis conceptualizes total geologic occurrences of oil, gas, and coal with a learning model of productivity that has yet to be empirically assessed.
    This paper finds climate change scenarios anticipate a transition toward coal because of systematic errors in fossil production outlooks based on total geologic assessments like the LBE model. Such blind spots have distorted uncertainty ranges for long-run primary energy since the 1970s and continue to influence the levels of future climate change selected for the SSP-RCP scenario framework. Accounting for this bias indicates RCP8.5 and other ‘business-as-usual scenarios’ consistent with high CO2 forcing from vast future coal combustion are exceptionally unlikely. Therefore, SSP5-RCP8.5 should not be a priority for future scientific research or a benchmark for policy studies.

    Wang, J., Feng, L., Tang, X., Bentley, Y., & Höök, M. (2017). The implications of fossil fuel supply constraints on climate change projections: A supply-side analysis Futures, 86, 58-72.
    Abstract

    Climate projections are based on emission scenarios. The emission scenarios used by the IPCC and by mainstream climate scientists are largely derived from the predicted demand for fossil fuels, and in our view take insufficient consideration of the constrained emissions that are likely due to the depletion of these fuels. This paper, by contrast, takes a supply-side view of CO2 emission, and generates two supply-driven emission scenarios based on a comprehensive investigation of likely long-term pathways of fossil fuel production drawn from peer-reviewed literature published since 2000. The potential rapid increases in the supply of the non-conventional fossil fuels are also investigated. Climate projections calculated in this paper indicate that the future atmospheric CO2 concentration will not exceed 610 ppm in this century; and that the increase in global surface temperature will be lower than 2.6 °C compared to pre-industrial level even if there is a significant increase in the production of non-conventional fossil fuels. Our results indicate therefore that the IPCC’s climate projections overestimate the upper-bound of climate change. Furthermore, this paper shows that different production pathways of fossil fuels use, and different climate models, are the two main reasons for the significant differences in current literature on the topic.

    • Thanks; I had read the Ritchie paper but not the Wang paper. I agree that RCP8.5 is not a realistic scenario. But for statistical analysis of GCM behaviour, simulations based on RCP8.5 offer a higher signal-to-noise ratio than those based on other scenarios.

  3. David Springer

    Merry Christmas!

  4. “for statistical analysis of GCM behaviour, simulations based on RCP8.5 offer a higher signal-to-noise ratio than those based on other scenarios.”

    It also sells the message of doom and gloom global warming, which is the main aim.

    It also offers correlation with the worst-performing GCMs when cherry-picking short rising temperature data periods (only).
    Perfect.

  5. Are we going to see an update on the Arctic Ice?

  6. Reblogged this on Climate Collections.

  7. In the original version of Pinocchio, when Pinocchio was sick, one set of doctors said: “I think he will die, but if he doesn’t die, he will live”. Another set of doctors said “I think he will live, but if he doesn’t live, he will die”. So we have it here. I think it will get warmer, but if it doesn’t, it won’t get warmer. Looking back on ten years of climate research, climate blogs, climate reports, do we really know much more today than ten years ago? It seems that the amount of verbiage is a reflection of how much we don’t know.

  8. Nic Lewis, very interesting to read your reply.

    “Thirdly, most CMIP5 GCMs simulate too cold an Earth: their GMST is below the actual value, by up to several degrees”

    That is astonishing. Why are GCMs not tuned to reproduce the absolute value of the GMST? I would reckon that as obligatory.
    Anyway, if the narrowing and enhancement of the future GMSTs, as shown in BC17, is also valid for the other RCPs then their eventual deviation from observations is brought nearer in time.

  9. “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

    Variability in the CERES record is overwhelmingly natural in origin. Neglecting natural variability over simplifies the picture considerably. And net differences in reflected shortwave and emitted IR don’t really seem to be the point. In the first differential global energy equation the net of this is the energy out component.

    Δ(H&W) ≈ Ein – Eout

    The change in heat energy content of the planet – and the work done in melting ice or vaporizing water – is approximately equal to energy in less energy out. Energy in is variable and widely expected to decline this century; it has very little direct effect on the global energy budget but is amplified in the terrestrial system through relatively large changes in ocean and atmospheric circulation, and thus in energy out. The difference between energy in and energy out is the imbalance, which is determined exclusively by considering ocean heat changes. As it stands, ocean heat varies considerably – and very rapidly – as a result mostly of changes in net energy out. Argo variability is mostly natural and the record is far too short to be definitive.

    There has not been a demonstration of any skill in disentangling natural variability from mooted global warming. Yet we are expected to believe that inadequate physics that neglects causality and is tuned to overly simplified parameters constitutes model skill. Projecting these models forward involves additional uncertainties caused by their dynamical complexity.

    “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709.full

    The following shows thousands of solutions of a single model that derive from non-unique choices for parameters and boundary conditions – and that are constrained to the vicinity of the emergent property of global surface temperature. No single solution – such as those found in the CMIP inter-comparisons – captures the intrinsic ‘irreducible imprecision’ of climate models. There is no inevitability of high sensitivity emerging from a model, constrained to emergent properties or not – it happens by chaotic chance.

    https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1513668213559.png

    Nor will we find that climate is constrained to follow models. Climate will change abruptly and more or less extremely 3 or 4 times this century – just as it did in the last. For reasons that have to do with dynamical complexity and through mechanisms that are just starting to be understood.

    This entire discussion is about 6 theoretically impossible things before breakfast and is a waste of everyone’s time.

    • Does photosynthesis count at all in reducing energy out?

      • Plants are 90% efficient in turning sunlight into biomass.

        Over time plants can sequester carbon as organic material into living soils. Some 180Gt of carbon has been lost from agricultural soils and ecosystems in the modern era.

        Much of this can be returned to soils and ecosystems. Feed the world, reduce drought and flood, reclaim deserts and conserve biodiversity.

        e.g. https://kisstheground.com/

      • Curly question with multiple levels of answers.
        Sunlight energy is used to rearrange molecules but does not usually form new mass.
        Hence the overall energy that comes in should go out, I guess.
        A bit like GHG do not, in the overall scheme of things, does not actually reduce the amount of energy going out.
        The transit time through the GHG medium is just a little longer so the air is a little warmer.
        Like asking if solar panels reduce energy out if the energy goes into hydro storage.
        There is more energy there, but a lot of heat energy is produced getting it there that is then dissipated [back to space].
        Who knows?

      • Plants synthesize organic chemical compounds using sunlight energy. A good portion of it is exuded from roots. In healthy soils – this feeds a symbiotic soil ecology that stores water in organic material and breaks down parent materials in an acid environment into nutrients plants can use.

        The net carbon stored in soils and vegetation can be increased with newer land practices – with a potential to restore in the order of 180Gt(C) lost from grazing and cropping soils and global terrestrial ecosystems in the modern era. For lots of good reasons.

      • angech: Sunlight energy is used to rearrange molecules but does not usually form new mass.
        Hence the overall energy that comes in should go out, I guess.

        The incoming energy is used in rearranging the molecules — or put differently, the incoming energy is “stored” in chemical bonds. Thus, the overall energy that comes in is equal to the energy stored in the bonds plus the energy that goes out (omitting for the time being the energy used in degrading rocks and such.)

        Quantitatively, I have not seen yet an assessment of how much of the incident solar energy is stored in the bonds of emergent structures such as sugar, cellulose, and bones. Perhaps someone knows of studies that I have missed.

        The energy “stored” in bonds long ago is available to us as we burn wood and petroleum products.

      • A bit like GHG do not, in the overall scheme of things, does not actually reduce the amount of energy going out. …

        There simply is no point. The best rails in the world, he will go off them.

    • David L. Hagen

      Robert I. Ellison
      Thanks for prediction distribution graph.
      Have there been any fits to statistical distributions for that graph?
      I am seeking the standard deviations – Type A
      and contrasting systematic/systemic error Type B.
      PS What evidence and definition for your 90% plant efficiency in sunlight?
      I am used to seeing about 1.5%.

      • Most incident sunlight is absorbed by leaves. Some 5% is transmitted and 10% reflected. As the discussion was energy out – leaves are the great solar collector. Energy is gained and lost in the diurnal cycle. Mass accumulates heat – to hundreds of meters on land.

        http://plantsinaction.science.uq.edu.au/edition1/?q=content/1-1-2-light-absorption

        Some 9% is converted to glucose that can then be used for cell processes, to build cells or – as mentioned – to feed symbiotic soil organisms.

        And no, I haven’t seen any pdfs – the most that Rowlands et al suggest is an even broader solution space than with the unjustifiable, opportunistic IPCC ensembles.

      • David L. Hagen

        Robert I. Ellison: On photosynthetic efficiency see e.g. “For efficient energy, do you want solar panels or biofuels?” (September 20, 2012).

        This analysis indicates that a theoretical maximal photosynthetic energy conversion efficiency is 4.6% for C3 and 6% for C4 plants.

        Plants are limited by their dependence on photons that fall in the approximate waveband 400-700 nm, and by inherent inefficiencies of enzymes and biochemical processes and light saturation under bright conditions. Their respiration consumes 30-60% of the energy they make from photosynthesis, and of course they spend half of each day in the dark and need to use previous carbohydrate stores to keep them growing.

        Actual conversion efficiency is generally lower than the calculated potential efficiency. It’s around 3.2% for algae, and 2.4% and 3.7% for the most productive C3 and C4 crops across a full growing season. Efficiency reductions are due to insufficient capacity to use all the radiation that falls on a leaf. Plants’ photoprotective mechanisms, evolved to stop leaves oxidising, also reduce efficiency.

      • Yes – I understand that when all energy pathways are considered – and there are several – conversion to plant mass is about what you cited for C3 and C4 plants.

      • David L. Hagen

        Robert I. Ellison PS The ATP rotor itself is nearly 100% efficient

  10. “Nor will we find that climate is constrained to follow models.”

    Much to the disgust of the believers.

  11. It would be helpful to the lay-reader for a post of this complexity to include an abstract. Thank you.

  12. In Brown’s response, we see that each of the 9 observational constraints reduced the spread, meaning that they all add skill individually. That is, they reduce the spread by penalizing models that do badly in that constraint which are likely to have been outliers in the predicted warming, otherwise the spread would not have been reduced. Given that all 9 have added skill by downweighting outliers in the predictand, it makes no sense to then remove 8 of those constraints as Lewis does. This implies that he does not care how badly the models did with these other constraints including any measure of shortwave radiation. He has removed a critical observational constraint on the models by adding back the outliers that did poorly with shortwave relative to Brown’s more comprehensive filtering and calls that a better result. If Lewis wants to disregard poor performances with these other 8 observations, he needs to explain why because that is the key difference from Brown and Caldeira who don’t want to ignore these observations in constraining their result. The whole point of BC17 was to use all these observations as a constraint.

  13. Jim D
    I think the point made is that one of the nine observations was of exceptional value.
    Hence adding any of the other 8 or all of the other 8 variables caused a reduction in accuracy.
    You yourself said
    ” If Lewis wants to disregard poor performances with these other 8 observations,” admitting that you understand that the ability of the other 8 variables is poor, compared to the ninth.
    The only advantage of using the other variables is that it enlarged the error and enabled BC17 to sneak their otherwise totally inaccurate viewpoint in.

    • Low climate sensitivity is accurate? It’s a physical system. You like the result that agrees with your politics. What would Feynman say?

      • High Climate Sensitivity is accurate?
        You cannot put a sensible limit on it and you claim it is accurate?
        It’s a physical system.
        What would Feynman say?

        For a start he would say that we are all wrong.
        Second he would point out the simple fact that life has existed on this planet for possibly 4 billion years.
        Life is very resilient.
        And that life as we know it can only exist in a narrow temperature band.
        That temperature band has an upper limit of compatibility which would be breached irrevocably many times if climate sensitivity was high.
        Hence, as he would say,
        “It ain’t”.

        Politically you have to like a high Climate Sensitivity, it is the result that agrees with your politics.
        Me, I don’t care. I go with science. Show me scientifically it is high and I am with you 100 %.

      • For instance you can quite happily go with a CS of 10, correct?
        Not that you believe it is 10, but it could be right?
        Is that really your view?
        Do you agree it is a sensible view?
        But you have to believe it is possible, right?

        Sigh, whatever happened to common sense.

      • The range is 1.5 to 4.5. It looks like it is about to narrow.

        Sneak. Totally inaccurate. Go square to heck.

      • “The range is 1.5 to 4.5. It looks like it is about to narrow.”
        much happier.
        Some people do advocate a fat tail with much higher ranges and no thought.
        Sorry to misjudge you.

      • Lewis disregards, or prefers you don’t notice, that the second highest spread (OLR climatology) gives a higher than average temperature response with a prediction ratio of 1.2.

      • …second lowest spread ratio…

    • angech, he is measuring that “value” by using models. How do you justify that? It’s backwards. A different set of models would have given a different spread. It is not a fundamental property of the observations, but of the models used. It is better to treat the observations equally and see how the models fall using them all.

  14. They don’t cause a reduction in “accuracy” unless you limit that word to his one chosen observation. He disregards the other observations entirely instead of using them in the constraints. The accuracy of the models relative to these other observations is considerably degraded by this choice, but he either doesn’t care or plain forgets them. He has redefined what he means by “accuracy” in this process, and it is a sleight of hand people have not noticed because he doesn’t explicitly tell you this.

    • Furthermore you misunderstood what I meant by “disregards poor performance with these other 8 observations”. Some models do well and some do poorly in those, but Lewis disregards that distinction by throwing those constraints out entirely. BC17 wanted to maintain that distinction as a discriminating factor.

      • No, Jim D, you said what you said.
        No misunderstanding possible.
        And it is poor performance “by” these other 8 models, not as you tried to confuse the subject in your response “with” these other 8 models.
        A good linguistic trick you try there to make the good observation seem bad.
        Precisely because these other models are so bad.
        So horrible.
        That a really good fantastic correlation gets ignored.
        Why, Jim D, why?
        You said it,
        They are poor.

      • Your misunderstanding of what I wrote is not my problem. Nic Lewis disregards whether the model performance in those other measures is good or poor and uses them equally anyway in his supposed improved guess. Are you trying to defend throwing out observations in the evaluation? If a modeler did that you would be all over it.

      • He only gets a good correlation among the models when he throws out 8/9 of the observations. Do you like that approach? Why is the correlation based on one metric more important to you than one based on all nine? Is it not better to use all the observations and weight the model results based on their fit than to use the models to completely throw out observations? Nic does the latter.

      • Jim D
        Think about what you are saying, please.

        (BC17) “disregards whether the model performance in those other measures is good or poor and uses them equally,” not Nic Lewis

        “Are you trying to defend throwing out observations in the evaluation?”
        The observations have not and are never thrown out.

        The predictor ability, the correlation between one set of observations and another is what is on trial.
        You yourself said there were 8 poor correlations compared to the one good one.
        Should a scientist use models with a good correlation to the observation predictor, or models with a poor correlation to the observation predictors?
        The answer is obvious scientifically.
        Which is what Nic has stated.
        Why make it so hard on yourself?

        He is not throwing out the predictors at all, he is saying that 8 out of the 9 have limited predictive value compared to the 9th.

      • angech, you have not understood what I have said at all. By using one predictor, Nic Lewis has thrown out the other eight. A better word for “predictor” is “observational constraint”. Nic has thrown out 8 observational constraints on the model results leaving errors in those constraints unchecked and unpenalized in his warming result. He does not care how well or poorly the models did with OSR for example, only seasonal OLR. Conversely BC17 want to use all nine sets of observations as a constraint.
        Put it this way, if BC17 had just decided to use 6 constraints based on climatology and monthly variation, not seasonal, Nic’s method would have produced OLR climatology as the optimal with the narrowest spread. However that gives more warming than BC17’s estimate. Now you can see how arbitrary his selection method is. Would he have sent that round to the blogs? I doubt it. The second narrowest spread gives the opposite result when used alone.

      • Jim D, I think you will find the predictors are being used within models to relate simulated flux variables to simulated temp. The observational constraint comes later.

      • I think it relates model accuracy in those predictors relative to observations to the model warming, so the observations are already being used as a constraint at this stage.

      • That comes later, using the model developed by relating GCM flux predictors to GCM forecast temp. It is at that earlier stage that the question of “best” model arises. If you want to use simulated flux predictors within GCMs to predict the GCMs’ future temps then you don’t need to use all the flux predictors to get the “best” result. Nothing to do with empirical constraints.

      • That’s equivalent to disregarding the accuracy relative to the other observations. Why would you want to do that?

      • Jim D, there are no observations, accurate or otherwise, at this stage of proceedings. Just fitting a model that relates GCM flux outputs to GCM temp projections.

      • The observations are used to obtain the new spreads. It’s subtle but it is there in the technical video on his blog page about it. When he fills in the new predictand histogram in his example, the distance is derived from the distance of the predictor of the quantity. Took me a few goes through the video to figure that out.

      • What you write is a non sequitur. Perhaps read the paper; it will help you understand.

      • I don’t know what you are claiming. Do you see that the observations are used in the spreads he presents?

      • I say there are no observations used at the point where the PLS regression is done, and you tell me they are used at the point when spreads are calculated.

        If you haven’t got the paper, look at what the video says at about 1:04. The PLS process is done and dusted before the observations are introduced, and the point of the story is that if you do the PLS you don’t need all the other flux-related parameters (i.e. their weighting in the model going forward should be set to zero). Nothing to do with the observations.

      • No, I agree. The PLS regression is just done to produce the predictors and only depends on the models. The observations come in after that in the way those predictors are used to get the final observationally constrained predictand. Whether you want to call the OLR seasonal variation a predictor or an observational constraint is a bit grey. I call it an observational constraint.

      • So you agree with Nic Lewis that the PLS should proceed as designed, and if it only needs one flux parameter then so be it, and that model should be carried through to the observational comparisons (if one wants to use this particular method).

      • No it needs to use more than one observation for significance. Nic is also ignoring the PLS for the other eight variables which have no use without the corresponding observation. One observation is not trustworthy as it risks a significant amount of chance. You at least need to factor in shortwave parameters as independent checks. Using nine semi-independent factors is even better for robustness. As I mentioned somewhere else, the second narrowest distribution gives the opposite result to Nic’s, i.e. warmer than BC17. Make what you will of that.

      • Jim, forget reality, we’re dealing with finding a simple model to replicate the relationship between the flux values produced by GCMs and the increase in temps they produce.

        It so happens that the relationship is best described using one, maybe two, of the parameters. Adding the others doesn’t help explain the projected temp (they have zero weight).

        At that point the rest of the method falls apart. They should have said at that point what you are effectively saying in your most recent comment ‘most of the flux parameters aren’t going to have any weight in the balance of our experiment, and that isn’t what we had in mind’.

        They should have then explored the reasons why that had happened. Instead they exploited a failure in the PLS that gave a solution that used all the parameters without stopping to think, and proceeded on their merry way.

        Do you understand?

      • No, I don’t. The second narrowest spread giving the opposite result is what should have given Lewis pause about the robustness of using just one. Had BC17 not selected seasonal variation, and just used 6 predictors, Lewis would have used that one instead (OLR climatology) and reported that BC17 underestimated the warming. Brown’s reply emphasized how robust their result was to ways of combining predictors.
        Put another way, the models favoured by the seasonal OLR measure may have done poorly with the OSR measures, and Lewis would not account for that problem at all by ignoring the other eight measures. It distorts the result, and not in a good way, to ignore important and highly relevant observational constraints.

      • Jim, you are confusing yourself by keeping thinking about what happened after the PLS was done. Stop there. The rest is unsound. Try thinking about the method just up to that point and forget what BC17 might have wished to demonstrate.

        On the intended methodology it was the PLS that selected what parameters to use in the rest of the study, not the protagonists. And the fact it produced the contradictory results Lewis and you mention should have stopped the study in its tracks. It caused Lewis to pause, but apparently not the authors (or you).

      • The whole point of the BC17 paper was to inform the temperature change given by models with a measure of how they are doing in relevant parameters today which requires the observations to be used as constraints. This is not an option to be thrown out. Lewis himself still uses select observations, and even select regions of them, in his one-parameter method that he advances, so I don’t know where you draw the line. Also, the spread is a function of the models used, not nature. You can arbitrarily reduce the spread in other parameters by throwing out select models at the outset. The results were not contradictory either because all the parameters reduced the spread for the RCP8.5 case, showing that each had value by itself in narrowing the warming.

      • Jim, it seems you are not very familiar with the nature of scientific study. The point of BC17 was to use a method to investigate observationally constrained projections. Before they even got to the observational bit they went off the rails. End of study. Start again.

      • Lewis only had praise for their chosen methods. You have not made your criticism very clear at all, and I am not going to try and guess what makes you so unhappy.

      • Jim, now you are definitely trolling. Lewis said their method wasn’t correctly applied in this case and in any case wasn’t appropriate. I’m sure even you noticed that.

      • How do you explain that Lewis’s preferred data gives higher sensitivities for all the other RCPs? It completely lacks robustness, cherry-picking one item from a table of 36 elements and saying only use that and throw the rest away because they don’t agree with what he wants as a result. It’s a cherry-pick, and should be called out as such. The second highest spread in the table gives the opposite result (as do the 3rd, 4th and 5th). How would he explain that, if he even knew it? It is a fatal flaw in his reasoning.
        Also, PLS without the observation is fairly useless because you need the observation to get the prediction ratio.

      • Jim, you’re trolling again. Even you should understand by now that everything you mention here is unsound because it is built on an unsound use of PLS and that Lewis only used it to demonstrate the unsoundness.

      • Lewis used it to demonstrate his result, and he said nothing about unsoundness as he even adopted it himself. In fact he considered his result robust and more skillful just based on spread and without considering other low spreads and what they mean in isolation even as they contradict his result, a fatal oversight for his method.
        BC17 achieved what they wanted under their hypothesis by showing that for models that do better in the current climate, warming is greater than the ensemble mean that equally weights poorer models by their observation-based measure. This is not a proof of greater warming, but is a valid way to discriminate among model projections based on observations given a wide range of model skills in the current climate. It is a statistical way of saying which ones they trust more. You might have reasons not to discriminate based on those observations, and it would be up to you how to discriminate. They give their rationale and it looks fine enough.

      • “But I think that the underlying problem is the non-satisfaction of a key assumption of the PLS method: that the predictor variables are free of uncertainty.”

        Lewis above describing the unsoundness of the method, and for that reason as I’m sure you know BC17 didn’t achieve anything useful, let alone what you claim.

      • Is he referring to the observations being uncertain? Does he not trust the means and variations in OLR, OSR and N? Does he regard the observations as completely unusable even with 15-year averages? Can we guess what Lewis means here, or is he just casting general aspersions in the way he does?

      • “Is he referring to the observations being uncertain?”
        No, as you now know this has nothing to do with observation, it’s what happening in the models.

        “Can we guess what Lewis means here, or is he just casting general aspersions in the way he does.”
        No, I think that’s more in your style as this comment exemplifies.

      • Yes, I see that, if that is all he has, it is a weak argument that because you have 15 years of observations, you need more than 15 years of model data to compare with it. Both have similar uncertainties, but the signals were there with each of the nine variables correlating with the warming. This tells us that the data period was sufficient.

      • Jim, sorry, you are burbling and becoming incomprehensible. What you have learned (slowly) is that PLS doesn’t work because it produces contradictory results, and this invalidates BC17’s study.

        The next step in your education was to move on to possible reasons for this failure. I can see that this is getting well beyond your ability to understand (but perhaps not troll).

      • I think you have seen how Lewis’s selection method of seasonal OLR was unsound. Narrowness is not the sole measure of skill with these observations. It is deeply flawed and seriously misguided in its reasoning. Skill is increased by using all available observational constraints, not by eliminating most of them as Lewis does.

      • Jim D, your wood for trees plot is just the raw and smoothed global average air temperature from BEST. It tells us nothing about radiation variability or cloud forcing.

        Your posts have descended into utter nonsense.

        You asked, “When you say they can’t discern the difference between Ice Ages and a hothouse 5 years from now, why?”

        Because the uncertainty bars are an ignorance width. They tell us that the models are so poor, that no one has any idea at all what the air temperature will be in the year 2100. Or in 5 years. Or, in fact, in one year.

        Our ignorance of future air temperatures is total.

        If our ignorance of future air temperature is total, and it is, then we have no idea whether the future will be hotter or cooler than now. It’s that simple.

        Apart from that I’ve had conversations with Richard Muller of BEST. He has completely ignored the finite resolution of the temperature sensors. He has also ignored systematic measurement error.

        He treats air temperature measurements as though they are perfectly accurate. This is a huge failing for a trained physicist.

        The neglected measurement uncertainty is large enough that it spans the entire width of your WfT temperature plot.

        Guess what: that plot is pretty much physically meaningless.

      • Now you don’t even believe temperature records that agree with GCMs on annual variations being tenths of a degree instead of degrees as you would have it. You also don’t believe that monthly variability is greater than annual variability in temperatures or radiation, even as I showed it was for temperature and therefore also likely for radiation if you can find monthly data for that.
        When you refer to ignorance, it is yours alone. GCMs run from as far back as 1981 by Hansen have given a good representation of the warming till now in response to what was a good guess at the forcing change in retrospect. Just that we have GCM results that far back and many more since, shows they do not have the uncertainty that you talk about, and on the contrary they gave good guidance. Evaluate some of the older GCM runs yourself if you don’t believe me, and see if you can still call them poor. AR3 was published around 2001 and has GCM runs that didn’t diverge from the true values as much as you believe they could.

      • Non-unique solutions of models do undoubtedly diverge exponentially.

        http://www.pnas.org/content/pnas/104/21/8709/F1.large.jpg
        http://www.pnas.org/content/104/21/8709.full

        You don’t get to see divergent solutions in IPCC reports. The entire exercise there is agnotology – the cultural production of ignorance. Solutions in multi-model ensembles – single solutions from many thousands of feasible and divergent solutions – are chosen arbitrarily to confirm modeler bias. And Jimmy wants yet another cite?

        “There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.” http://www.pnas.org/content/104/21/8709.full

        He can’t admit to anything of the sort because then it would all be completely nuts as skeptics have said all along. It doesn’t seem an especially difficult – if chaotic – idea. But Jimmy just keeps insisting that the a posteriori and arbitrarily chosen solutions do not diverge from temperatures. The non-unique solutions they hide – however – do diverge as the simulation progresses over time steps.

        The spread possible in a single model with a single emission scenario is called irreducible imprecision or the evolution of uncertainty. Models intrinsically cannot predict (or project) the future of climate. They can produce any number of nonunique and divergent solutions.

        “Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation (see ref. 26).” op. cit.

        Climate fanatics cannot allow uncertainty to undermine their cultural memes. It’s a bit of a problem. And Jimmy indulges with unrelenting persistence in repeating the same things over and over – getting the last word seems to be an article of faith. Welcome to the goldfish bowl.

      • This brings us back to Patrick Brown. The a posteriori judging of model outputs is based how they do with past climate, as I think you might understand from what you are quoting. The ones that are better with past climate are kept and used for projections of the future. The ones that don’t do so well with the past are not as likely to be good for projections. And Brown even has a way to weight the better ones more for their future projections. Which part of that do you not like Brown doing? I would trust models that did well with the past more than those that didn’t. There are varying levels of skill in models, and ways to determine that a posteriori. Common sense applies.

      • I keep trying – more fool me. All models have potentially thousands of exponentially diverging solutions – chaotic in origin as a result of sensitive dependence. As I said – here’s one showing only those thousands of solutions that pass near to the surface record to 2010.

        https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1515528276356.png

        Which one do you believe again? To pick just one and describe it as skillful is just obscurantist verbiage. No model forecast can be skillful – merely, perhaps in some future, probabilistic.

        The multi-model ensembles you keep confusing with sound science are nonsense – which means that both sides of the Brown and Caldeira argument are ludicrously misguided.

      • Not sure what you’re saying. Do you not trust the models that fit the past observations more than the ones that don’t? How would you choose among models if you are not using observations to judge them by? Random perturbations to the physics alone is not a good way to choose when it is devoid of real-world comparisons. Important to evaluate the model in as many ways as you can, not just against the global mean temperature. Brown used their radiative properties, for example.

      • Individual models each have their own envelope of propagating uncertainty over the period of simulation – none are skillful in the way you imagine. The longer the simulation, the greater the divergence – even when starting with small initial differences. The source of this potentially immense uncertainty is the nonlinear core equations – and is different to the results of the equations or parameterizations with which they endeavor to represent climate physics. Of course you don’t understand – that is obvious to blind Freddy.

      • That models can replicate the past warming which has now reached a degree and counting above the pre-industrial mean shows the dominance of the forcing change, and this has implications of it continuing into the future, possibly tripling by 2100. It’s an important thing to notice and not just dismiss these GCM results as blind luck. It comes down to, if you apply heat to a pot it warms with a simple equation, and it doesn’t matter if you can’t predict all the turbulence inside.

      • So you imagine.

        “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709.full

        Do you even look at stuff I cite? Uncertainty propagates once you run out of data to tune models to. The fact that you may tune a model to ‘several empirical measures’ means zip for accurate forecasts over the coming century.

        And then you turn to a pot analogy? Forget it. Besides – models are temporally chaotic while climate is a spatio-temporal chaotic system. Both have Hurst effects up the wazoo – abrupt and seemingly random regime change.

        e.g. https://www.nature.com/articles/srep09068

        There is not a chance that they have climate physics correct. They are not even close to capturing internal variability that drives climate variability at all scales.

        “Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist.” https://eapsweb.mit.edu/sites/default/files/Three_approaches_1969.pdf

        Ed Lorenz in 1969 – nor has there been much progress in the decades since. The available math and computing power is lacking and observations do not have the precision required to reduce model uncertainty to anything close to reliability.

        There has – however – been some progress in network math. Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices – a calculation of the kind sketched below. They found that the indices would synchronise at certain times and then shift into a new state.

        It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.
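
        A hedged sketch of that style of calculation – synthetic red noise stands in for the four indices, and the rolling window and threshold are illustrative assumptions, not the published methodology:

```python
# 'Distance' between climate indices as 1 minus their mean pairwise rolling correlation.
import numpy as np

rng = np.random.default_rng(1)
T, window = 1200, 132                                  # months; ~11-year window
indices = rng.standard_normal((4, T)).cumsum(axis=1)   # four red-noise stand-in 'indices'

def mean_pairwise_corr(block):
    c = np.corrcoef(block)
    return c[np.triu_indices_from(c, k=1)].mean()

distance = np.array([1 - mean_pairwise_corr(indices[:, t:t + window])
                     for t in range(T - window)])
synchronised = distance < 0.8                          # small distance = indices moving together
print("fraction of time synchronised:", synchronised.mean())
```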

        Even simpler things such as

        You are all existing in a state of denial Jimmy dear – and lack the curiosity and openness to review simplistic global warming memes.

      • You seem very uncertain about the ability of forcing changes to explain the observed trends. Still scratching your head over this, for example.
        http://woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2
        Or why the land is warming twice as fast as the ocean lately.
        http://woodfortrees.org/plot/crutem4vgl/mean:240/mean:120/plot/hadsst3gl/mean:120/mean:240/plot/crutem4vgl/from:1987/trend/plot/hadsst3gl/from:1987/trend
        It’s external forcing that does this. Thermal inertia. A sustained energy imbalance. The things you ignore or don’t fully believe in explain these observations. You don’t even need models – they are just confirmatory of what we see with our eyes anyway. So if you don’t like models, just use observations like I do.

      • Got distracted – even simpler things like Rayleigh–Bénard convection over warmer and cooler ocean surfaces, I was going on to say, are ignored. It changes the energy dynamic of the planet considerably and is utterly unpredictable beforehand.

      • We have been over this ground before. CO2 has provided 2 W/m2 of forcing, most of it since 1950. This is ten times anything solar variations can do over the 11-year cycle, and yet we see the 11-year cycle in the temperature record, so it is no surprise that we see CO2 dominating the temperature record. In your ‘random noise rules all’ meme, you would not even see the 11-year cycle, right?

      • I will ignore this yet again – the discussion was about models and your return to wood for dimwits nonsense is just your usual obscurantist verbiage. And something I just can’t be bothered discussing with you yet again. You are of course utterly misguided. But return to models or this is over.

      • Models qualify by being capable of following observations. Observations are the bottom line in all this. You quoted McWilliams on the need to check models against measurements, and then don’t like it when that is pointed out to you. If you are going to quote someone, don’t go back on it. It’s like arguing with Jell-O.

      • This is lying BS. We may tune models – and I have been a hydrodynamic modeler for decades – but uncertainty evolves from there. All of the 1000’s of Rowlands et al solutions pass near to a temperature series – but diverge over the simulation period after that. How can you not see this?

      • Those 1000’s were not vetted against observations anything like to the extent the IPCC models were. This is why they have a larger spread. Evaluation is a constraining factor, and more evaluation is more constraint and less spread.

      • “Broad range of 2050 warming from an observationally constrained large climate model ensemble” https://www.nature.com/articles/ngeo1430

        He has just made something up again. The model is the 3rd generation coupled ocean/atmosphere model from the Hadley Centre, used in a low-resolution version to facilitate 1000’s of runs and explore evolving uncertainties in atmospheric and ocean models. It is, and has long been, one of the models the IPCC uses. But CMIP is irrelevant to the scope of model uncertainty within single models, which is driven by sensitive dependence on initial conditions.

        The potential spread can be reduced with finer grid resolution – much more computing power – and more precise observations of starting points – not better tuning.

        All models evolve within broad uncertainties. It is not something that is rationally deniable.

      • Explain why they have a wider spread than the IPCC models. It is because they adjusted the physics in various ways from their default model settings, and they did not narrow down the results by seeing which ones performed better on past climate measures, as would be impossible for thousands of runs. So I am not surprised there is a wider spread. Perhaps you are.

      • Each of the multi-model members is selected from 1000’s of equally feasible solutions that can be generated from any model. There is no rational scientific justification for any of it. Each member is solely a matter of modeler bias. Not worth worrying about.

        There are no default physical parameters – there are only variables that are more or less precisely known within limits. Such as for OLR. This is the origin of model uncertainty – any model.

        You are really not smart enough to make things up convincingly Jimmy.

      • This is where matching models to past observations like Brown does, and McWilliams suggests, gives you a better idea of which ones are more likely. Would you not want to use observations for this? I am not getting a straight answer. Are you going to disagree with two papers you quoted now?

      • McWilliams says that models may be tuned to relevant physical parameters. This seems unremarkable. All models need tuning. I am a modeler of long standing. But after that – when the data runs out in the future – the solution space does keep expanding over the simulation period due to sensitive dependence on initial conditions. Chaos rules models without any doubt. Both the Slingo and Palmer and the McWilliams papers are concerned with this evolving uncertainty and its implications. Try reading them – because it really seems you haven’t bothered to.

      • If you read McWilliams, he talks about evaluations against measurements, which would help in tuning. This is normal practice. That study showed, unsurprisingly, that detuning models with random physics perturbations increases the spread.
        Also, in the absence of forcing changes there is no divergence. The variability settles at a natural level – mostly representing ENSO amplitude – with a standard deviation of about 0.1 C in annual temperatures. You need to look at unforced simulations to realize that they don’t diverge but hold around a steady mean temperature. Frank is plain wrong on that. An unconstrained random walk it is not. Add forcing and you see the ramping behavior, as in nature.

      • Utter nonsense.

      • Irreducible imprecision in atmospheric and oceanic simulations?

        “Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation (see ref. 26).”
        http://www.pnas.org/content/104/21/8709.full

        It is not random but chaotic without any doubt.

      • You need to look at what happens when the same model is run only changing the initial conditions (the LENS project). This shows the level of “chaos” that can develop.
        http://www.cesm.ucar.edu/projects/community-projects/LENS/images/Figure1.gif
        The forcing signal is bigger.
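
        A minimal sketch of that point, assuming AR(1) internal variability plus a common linear trend (illustrative parameters, not LENS output) – members differ through internal variability, but the forced signal is common to all and dominates the ensemble mean:

```python
# Initial-condition ensemble: common forced trend + independent AR(1) noise per member.
import numpy as np

rng = np.random.default_rng(4)
members, years, phi, sigma = 40, 100, 0.6, 0.15
forced = 0.02 * np.arange(years)                 # common forced trend (K)
noise = np.zeros((members, years))
for t in range(1, years):
    noise[:, t] = phi * noise[:, t - 1] + rng.normal(0, sigma, members)

runs = forced + noise
print("member spread at year 100 (K):", runs[:, -1].std().round(2))
print("ensemble mean at year 100 (K):", runs.mean(axis=0)[-1].round(2))  # ~ the forced 1.98
```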

      • Bottom line. Weather is chaos. Climate is predictable from forcing.

      • “Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.”

        I am a bit outraged now. Jimmy utterly distorts this to support his memes. Utterly untrue.

      • Actually ensembles are what the IPCC uses because of this issue, and you quoted another part McWilliams wrote on a posteriori evaluations that I responded to. Check what you quoted and my response if you forgot.

      • There is a vast difference between the opportunistic ensembles of the IPCC and the perturbed physics ensembles under discussion. Evolving uncertainty is a property of AOS generally – a difference I have described.

        “As the ensemble sizes in the perturbed ensemble approach run to hundreds or even many thousands of members, the outcome is a probability distribution of climate change rather than an uncertainty range from a limited set of equally possible outcomes…”

        The latter is irrelevant to the evolution of uncertainty in all AOS individually.

      • So they all give warming responses to increased GHGs within the IPCC range of warming, and if there is any deviation it is broadening the top end of the response. None give flat temperatures or cooling. Does that give you any pause in advocating for this study? Would you take the middle of the range as therefore reasonable or would you say that is an unlikely possibility for some reason?

      • Pat Frank is correct and you have been arguing nonsense for days. Unconstrained – by warm regime data – models go every which way.

        The opportunistic ensembles of the IPCC are constructed from single solutions of different models and gloss over the theoretical difficulties of these arbitrary choices. Even the constrained, perturbed physics ensembles give no guidance. Is it one degree or 5?

        But one can already see – in the first decade of this century – divergence of real climate from model outcomes in the Rowlands et al graph. This will only grow, as the physics of the planet has not been even close to adequately captured.

      • It would be 3 plus or minus one for that scenario and additions or subtractions of degrees for other scenarios. The last 60 years match a rate of over 2 C per doubling and that includes the 21st century so far.
        http://woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2

      • Yeah – not so much.

      • Depends more on the fossil fuels than anything. That’s a 5 degree range on its own.

      • No – it is absolutely uncertain. Even the emissions pathway.

      • The emissions pathway is the biggest uncertainty – five degrees worth of uncertainty.

      • Not even close.

      • The transitions from warmer to cooler to warmer again seen last century require that the oceans warm, cool and warm again in step – they don’t – or that natural changes in convection alter the albedo of clouds.

        “This cloud system has been shown to have two stable states: open and closed cells. Closed cell cloud systems have high cloud fraction and are usually shallower, while open cells have low cloud fraction and form thicker clouds mostly over the convective cell walls and therefore have a smaller domain average albedo.4–6 Closed cells tend to be associated with the eastern part of the subtropical oceans, forming over cold water (upwelling areas) and within a low, stable atmospheric marine boundary layer (MBL), while open cells tend to form over warmer water with a deeper MBL. Nevertheless, both states can coexist for a wide range of environmental conditions.” http://aip.scitation.org/doi/10.1063/1.4973593

        Satellite data suggests that this is the dominant forcing in the 1990s – and that is consistent with ocean warming from annually realized XPT data over the same period.

        Jimmy would like to dismiss these observations as wiggles – a more suitable description of his commenting style – but they are demonstrably secular climate variability over millennia.

        e.g. https://www.nature.com/articles/ncomms11719

        Jimmy loves his wood for dimwits charts, but they ignore too much of the complexity and uncertainty of climate that is its chief interest.
        The endless repetition of the simple forcing meme is tedium squared.

        That there is greenhouse gas forcing is something that I have never denied – despite this oft repeated posturing. That it is potentially a trigger for abrupt climate change is a step too far for many people – including Jimmy dear. That we can manage emissions with innovative land use and technologies is without doubt. The latter make climate a matter of scientific interest only – and not a social movement intent on ramming their delusion of moral and intellectual superiority – and utterly repugnant social and economic agenda – down our throats. The sooner we can reclaim climate as a science the better it will be for the world.

      • What you are describing is a positive cloud feedback to ocean warming. Models also tend to produce a positive cloud feedback to warming, and the reduction in cloudiness during the rapid warming of the 90s supports that – but it runs counter to any idea that cloud feedbacks must be negative, which was a cherished hope of many lukewarmers.

      • What is being described is the difference between cloud radiative effects over cold upwelling regions and over warmer surfaces – the change in cloud albedo is mostly natural.

      • Cloud albedo can respond to water temperatures which in turn can respond to global warming.

      • But the big changes are quite natural.

      • What about if Greenland melts? Spontaneous?

      • Non sequitur.

      • You talked about big changes. That’s a big change. Clouds – meh.

      • Big changes in ocean surface temperature. Try reading harder.

      • Like added heat content? That would be from the imbalance, right? Or is it some kind of self-warming effect. Volcanoes? Loss of sea ice?

      • That would be from upwelling of cold, abyssal ocean water. Duh. Just one of the mechanisms of natural climate variability.

      • You are talking about short-term fluctuations that self-cancel over the long term. That is not climate change.

      • Local variations representing a couple of percent of the earth’s surface. That’s another red herring.

    • Yes – models may be tuned to several approximately known variables – or may even fortuitously mimic more or less nonlinear climate variability through Lorenzian variability in the models. Hurst showed chaos in Nile River stage records over a millennium and a half – and so in climate. Regimes and sudden more or less extreme shifts – perturbations of quasi standing waves in Earth’s spatio-temporal chaotic flow field.

      But once they are tuned – and thus feasible – model solutions with initial conditions within the bounds of observation error continue to diverge exponentially from each other over the period of simulation.

      http://www.pnas.org/content/104/21/8709/F1.large.jpg

      That’s what physics says.

  15. “Given current uncertainties… the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.”

    True, true. Western academia needs to do more to get the politics out of climate science and call out the global warming delusionists who for years, with the help of the MSM, have been all too effective at marginalizing the voices of reason from their side of the political divide.

    Despite the statements of numerous scientific societies, the scientific community cannot claim any special expertise in addressing issues related to humanity’s deepest goals and values… Any serious discussion of the changing climate must begin by acknowledging not only the scientific certainties but also the uncertainties, especially in projecting the future. ~Dr. Steven Koonin

  16. At his 12/25 5:12 pm post above Robert Ellison said “This entire discussion is about 6 theoretically impossible things before breakfast and is a waste of everyone’s time.” I agree entirely. The harsh reality is that the output of the climate models which the IPCC relies on for its dangerous global warming forecasts has no necessary connection to reality because of their structural inadequacies. See Section 1 at my blogpost
    https://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html and the linked paper.
    Here is a quote:
    “The climate model forecasts, on which the entire Catastrophic Anthropogenic Global Warming meme rests, are structured with no regard to the natural 60+/- year and, more importantly, 1,000 year periodicities that are so obvious in the temperature record. The modelers’ approach is simply a scientific disaster and lacks even average commonsense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years beyond an inversion point. The models are generally back-tuned for less than 150 years when the relevant time scale is millennial. The radiative forcings shown in Fig. 1 reflect the past assumptions. The IPCC future temperature projections depend in addition on the Representative Concentration Pathways (RCPs) chosen for analysis. The RCPs depend on highly speculative scenarios, principally population and energy source and price forecasts, dreamt up by sundry sources. The cost/benefit analysis of actions taken to limit CO2 levels depends on the discount rate used and allowances made, if any, for the future positive economic effects of CO2 production on agriculture and of fossil fuel based energy production. The structural uncertainties inherent in this phase of the temperature projections are clearly so large, especially when added to the uncertainties of the science already discussed, that the outcomes provide no basis for action or even rational discussion by government policymakers. The IPCC range of ECS estimates reflects merely the predilections of the modellers – a classic case of “Weapons of Math Destruction” (6).
    Harrison and Stainforth 2009 say (7): “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies. The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality … that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.” A new forecasting paradigm is required. Here is the abstract of the actual paper
    “The coming cooling: usefully accurate climate forecasting for policy makers.
    Dr. Norman J. Page
    Email: norpag@att.net
    DOI: 10.1177/0958305X16686488
    Energy & Environment
    ABSTRACT
    This paper argues that the methods used by the establishment climate science community are not fit for purpose and that a new forecasting paradigm should be adopted. Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths. It is not possible to forecast the future unless we have a good understanding of where the earth is in time in relation to the current phases of those different interacting natural quasi periodicities. Evidence is presented specifying the timing and amplitude of the natural 60+/- year and, more importantly, 1,000 year periodicities (observed emergent behaviors) that are so obvious in the temperature record. Data related to the solar climate driver is discussed and the solar cycle 22 low in the neutron count (high solar activity) in 1991 is identified as a solar activity millennial peak and correlated with the millennial peak -inversion point – in the UAH temperature trend in about 2003. The cyclic trends are projected forward and predict a probable general temperature decline in the coming decades and centuries. Estimates of the timing and amplitude of the coming cooling are made. If the real climate outcomes follow a trend which approaches the near term forecasts of this working hypothesis, the divergence between the IPCC forecasts and those projected by this paper will be so large by 2021 as to make the current, supposedly actionable, level of confidence in the IPCC forecasts untenable.”
    For cooling forecast see Fig 12

  17. Just how hard is all of this?

    Take all global land (location) readings that are near or surrounded by concrete type materials, and put those on one side of a sheet of paper, list A.

    Place all other land readings on the other side of the paper, list B.

    Average each. The A list is known to be higher, we agree. So subtract average B from A and adjust downward all A list values by that difference and now determine the average of A’ and B.

    Compare this average with the official published global temperature and continue discussion.
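
    A literal sketch of the proposed exercise, with made-up station values:

```python
# Offset-adjust the 'concrete' list A to list B's mean, then average everything.
import numpy as np

list_A = np.array([15.2, 16.1, 15.8, 16.4])          # hypothetical urban-sited readings (C)
list_B = np.array([14.1, 13.9, 14.5, 14.2, 14.0])    # hypothetical other readings (C)

offset = list_A.mean() - list_B.mean()               # the A-list excess
list_A_adj = list_A - offset                         # adjust all A values down by that difference
combined = np.concatenate([list_A_adj, list_B]).mean()
print(f"offset removed: {offset:.2f} C, combined average: {combined:.2f} C")
```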

  18. Dr Brown has a post on his blog
    Do ‘propagation of error’ calculations invalidate climate model projections of global warming? Posted on January 25, 2017 by ptbrown31
    My thoughts on claims made by Dr. Patrick Frank (SLAC) on the validity of climate model projections of global warming:

    Fascinating reading on the concepts of errors, means, and projection into the future.

    Pat Brown seems to be trying to do a reverse Pat Frank

    The hubris is writ large.

    • Why are you linking to something where Pat Frank gets demolished? I thought this was a skeptical blog. Anyway, recommended viewing.
      https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/

      • “You can tell that Pat Frank is wrong by the fact that these simulations started around 1850 and had no such divergence by 2000,”

        Off topic as we are I am happy to oblige your concerns.
        The models have to follow history. They did not run them from 1850, they copied the observations from 1850.
        Think about it.
        Pinatubo for example. It is in there, in the models.
        Can models predict a volcano accurately in time?
        No.
        But there it is.
        So your first comment is a deliberately wrong smokescreen interpretation, attacking Pat Frank for something he did not even say and which is physically untrue, as just explained.
        It is what I call a Mosher moment as I have explained to him in the past.
        When using observations you have to expect a little bit of variation, some results that do not agree with your theory.
        When they all line up like ducks in a row you have been had, or like Cowtan and Way you are selling someone the Brooklyn Bridge.

        The second point is much simpler.
        Errors propagate.
        The fact that there is a large error in the estimation of cloud forcing means that the tiny changes in temp due to tiny CO2 changes cannot be found in the magnitudes-larger noise in one year. If you propagate this over 20 years you can never attribute small temp changes to CO2.

        Now I don’t like the argument but I cannot refute it.
        He does not say that CO2 warming is not real by the way.
        He just says that statistically the errors in measuring temperature due to our poor knowledge of clouds are such that we cannot tie warming to CO2 by observation.

        Bit of a blow to me too, but it should make the Chiefio happy.
        That is what he has been trying to get you to see.
        Perhaps we could ask him what would be convincing if not atmospheric temp measurements?

      • They run the models the same way from 1850 as they do beyond 2000. The only input is external forcing like GHG changes, volcanoes, aerosols. Do you agree that the forcing keeps the temperature on track? If so, why not also in the future? I think you are starting to see that the forcing controls the temperature and prevents it from drifting, which is the point. The only question is the magnitude of the temperature change per forcing change, and the model answers that because it is not an input.
        I am reading the comments at that link and Pat Frank seems highly confused by units, insisting that the average forcing should be W/m2/year for some reason. Weird stuff.

      • “They run the models the same way from 1850 as they do beyond 2000. ”

        Simply not true.
        Proof?

        The only input is external forcing like GHG changes, volcanoes, aerosols.

        The observed past can have estimations of real events put into the model, hence the match.
        The future projections can never put real natural variability in, just an averaged guess with huge error ranges.
        They should in fact be smoothed lines except for all the bumpy wriggles they put in to make their predictions look real instead of computer generated.
        Weird stuff.

      • angech, which part don’t you understand? The only input is the external forcing. They don’t put internal natural variability in, so El Ninos don’t occur in the right years in the model runs. The ocean and atmosphere are free running. Therefore the past is done the same as the future apart from external forcing, and for the future they have to guess the change in GHGs and aerosols for example, and the future doesn’t have the large volcanoes of the past. That’s the only way to tell them apart.

      • ” They don’t put internal natural variability in,”
        -wrong
        That is what gives the wriggles in a GCM.
        Otherwise they would only have smooth curves.
        They throw figurative dice, Jim D, just like in dungeons and dragons.

        “so El Ninos don’t occur in the right years in the model runs. ”
        Wrong
        They do not occur at all. They do not need El Niño or La Niña in a GCM.
        As you correctly say
        “the ocean and atmosphere are free running. ”

        “Therefore the past is done the same as the future ”
        Wrong.
        The models put in the past observations – volcanoes, El Niños and all – because they knew they were there.
        The future starts from when the models start actually running – predicting or, as you say, guessing:
        “for the future they have to guess the change in GHGs and aerosols for example, and the future doesn’t have the large volcanoes of the past.”

        “That’s the only way to tell them apart.”
        Bright boy, that is one of the ways, you are getting it at last.

      • You seem to be so misinformed in how they run GCMs that I am not sure it is worth correcting you. Where do you get this stuff from? They only use external forcing to drive the GCMs (GHGs, aerosols, volcanoes, solar). They have wiggles because the ocean and atmosphere are chaotic both in the model and in reality. They have El Ninos to different degrees, and some are even good but they don’t correspond by date to real ones (look at their wiggles). So, yes, the forcing is the only difference between the past and future parts of their runs, and maybe you got there in the end, but cease thinking they add in natural internal variability. That is not even possible to do in a coupled model because its purpose is to produce that part of the variation itself.

      • A win, Jim D.
        There are El Ninos in Some GCM.
        Not put in by the programmers, just evolving from the data.
        Not in the right places or times but who cares about them being wrong.
        They exist.
        Nick Stokes on explaining why the pause, which didn’t exist, existed:
        “So that is why a GCM, in its normal running, can’t predict an El Nino (or pause*) Good ones do El Nino’s well, but with no synchronicity with events on Earth.”
        * comment in brackets my paraphrase, not Nick’s.
        Interestingly a number of commentators who vehemently deny the existence of a pause discuss why GCM’s cannot predict one, sort of implying that it did exist.
        My apologies.

      • Pauses are statistical possibilities. Some GCMs had them, not at the right time, but they were there. You don’t see them when you average the temperature over a decade and compare with the previous decade either in GCMs or in nature since the CO2 trend became large. When you define climate by a 30-year temperature, you only see a steady trend in the observations, nearly 0.2 C per decade in recent decades.
        http://woodfortrees.org/plot/gistemp/mean:120/mean:240
        This only happens in GCMs that account for increasing GHGs. Go figure.

      • Using the PDO as a proxy for ENSO, this is what the ENSO trend since 1900 looks like:

        https://i.imgur.com/SH1Io00.png

        The models make no attempt to time ENSO, so describing their ENSO timing as “wrong” is just political, red-in-the-face gamesmanship. There would be an advantage if shorter-term models could time ENSO, as a lot rides on its timing. Big bucks. In that respect an ENSO in 2016 would have been very useful info around 2012. But in terms of climate, saying 2016 warming was caused by an El Niño is getting oneself completely lost in the weeds, and there are a ton of people completely lost in the weeds. The models do not have to time 2016. In terms of climate (2100), that would just indicate somebody spent a ton of money on useless window dressing.

      • JCH:

        What’s with the JISAO PDO for November? I hope they didn’t go out of business.

      • They are the PDO, so no, they are not going out of business.

    • Mmmm. Pat Frank had bad reviews at ATTP, but in the discussion at Pat Brown’s, which was pretty civil for a disagreement, I felt his (Frank’s) replies won the day.
      You did not.
      A bit like that ‘what colour is the dress’ argument. We naturally see different results from the same input.
      Guess we must be models then.

      • You can tell that Pat Frank is wrong by the fact that these simulations started around 1850 and had no such divergence by 2000, and also multi-model ensembles of simulations don’t diverge as much as he expects between 2000 and 2100. Where do you think he went wrong on this? The facts just don’t support him. In reality the climate and models are constrained by the forcing and can’t drift away from an equilibrium so much unless their energy balance is completely out of whack. His wrong assumption is that the errors are uncorrelated over time when in fact they tend to cancel due to a restoring force towards equilibrium.

      • Model solutions seem to exponentially diverge over time – as theoretically predicted. I know – let’s do the experiment.

        https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1513668213559.png

  19. Comparison of the Atlantic Meridional Overturning Circulation between 1960 and 2007 in six ocean reanalysis products

    In this work, we explicitly consider the decadal trends because reanalyses, like forced ocean model simulations, are susceptible to spurious climate drift. Various strategies are used to mitigate the drift (see Table 1), and while these are likely successful to some degree, there is no obvious means of discriminating between spurious climate drift and actual changes in the climate system that we wish to reconstruct.

    If spurious drifts make these models too unreliable for assessing past trends, why are they useful for projecting 21st century trends? If it’s impossible to distinguish between drift and real effect in an ocean hindcast, how is it easier with long-term future coupled atmosphere+ocean runs? Intuitively those would seem much more complex and prone to drifts and other errors.

  20. Jim D, you wrote, “Pat Frank seems highly confused by units, insisting that the average forcing should be W/m2/year for some reason.”

    Not correct.

    My analysis took the mean annual *error* in long wave cloud forcing (LWCF) derived in Lauer and Hamilton, 2013. It’s not an average forcing. It’s an annual average error in forcing. Mean annual error in LWCF has units Wm^-2 per year. An annual average necessarily has units of per year.

    The annual average LWCF error is a model calibration error. It propagates through sequential stepwise calculations – see the sketch below.

    ATTP never figured that out. Pat Brown tried to dodge the obvious, and then tried to eliminate it by re-writing the rules of algebra.
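
    A minimal sketch of the propagation described above, assuming the per-step calibration uncertainty accumulates in quadrature; the forcing-to-temperature factor is a placeholder for illustration, not Frank’s published emulator coefficient:

```python
# Root-sum-square accumulation: after n equal steps, u_n = u_1 * sqrt(n).
import numpy as np

u_step_F = 4.0                       # per-year LWCF calibration uncertainty (W/m2)
k = 0.1                              # assumed K per (W/m2) conversion, illustrative only
years = np.arange(1, 101)
u_T = k * u_step_F * np.sqrt(years)  # +/- K uncertainty envelope, grows as sqrt(n)
print(f"after 1 yr: +/-{u_T[0]:.2f} K; after 100 yr: +/-{u_T[-1]:.2f} K")
```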

  21. Jim D, you wrote, “You can tell that Pat Frank is wrong by the fact that these simulations started around 1850 and had no such divergence by 2000…”

    The propagated uncertainty bars do not predict divergence. They do not enter a projection, they do not govern a projection, they have nothing to do with the trend of a temperature projection.

    The uncertainty bars indicate the level of ignorance concerning the model projection (the expectation values). That level of ignorance arises from known systematic model errors — errors deriving from the structure of the model itself. Errors that enter into every prediction.

    The large uncertainty bars indicate that no confidence can be put in the temperature projections. They are entirely unreliable. And climate modelers have no clue.

    The models give reasonable 20th century results, by the way, because they are all tuned to do so.

    • If you are saying errors propagate and grow, future projections would diverge from each other because there is no reason different models should follow the same error growth path when you are talking about random errors. The real system and models are constrained by the top-of-atmosphere energy budget to stay within realistic temperatures, while your system has no physical constraints.

      • Propagation of the model calibration error statistic yields growth of uncertainty, not growth of error. Uncertainty and error are not the same.

        Long wave cloud forcing error is systematic, not random, and is inherent in the model.

        The physical constraints on the model (aka tuning) are untouched. Uncertainty propagation does not impact the model or the temperature projection. Uncertainty growth indicates the reliability of the temperature projection.

        I have yet to encounter a climate modeler who recognizes the distinction between physical error and uncertainty.

      • You are saying that the future temperature could be 10 degrees lower or higher at some point, and therefore you imply that a different model could demonstrate those kinds of temperatures. It can’t. Physics prevents that by tightly constraining the global temperature within a small range of natural variability.

  22. Jim D, “You are saying that the future temperature could be 10 degrees lower or higher at some point…

    No, I am not. You’re confusing an uncertainty in temperature, ±K, with a physically real temperature, K. You’ve made a very naïve mistake.

    The meaning of uncertainty is right there in my post: “Uncertainty propagation does not impact the model or the temperature projection. Uncertainty growth indicates the reliability of the temperature projection. (new bold and underline)”

    Look at it this way, Jim: you’re supposing the uncertainty in temperature, ±K, is a physically real temperature.

    So, how can a physically real temperature be simultaneously positive and negative?

    That impossibility alone should have told you your thinking is incorrect.

    In my experience, Ph.D. climate modelers often make the same mistake. They’ve also often supposed an uncertainty in forcing, e.g., ±Wm^-2, is an energetic perturbation. They appear to have no training in error analysis.

    • You even plot graphs that have uncertainties that diverge, and now you are saying it is impossible for those values to be physically realized? Surely uncertainty applies to realizable states, and so it is your definition of uncertainty that is off and apparently of no practical value in estimating errors in projections. I agree that your temperature extremes are nonphysical and unrealizable, however.

      • Uncertainties evolve over the simulation period with exponentially divergent solutions due to sensitive dependence and structural instability. Not an idea that Jimmy can get his head around.

        https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1515528276356.png

      • You don’t even need a model to see what temperature variations result from what forcing changes. The past is the key to this relation. Skeptics should be more aware of what observations tell us about forcing and warming, already half way to a doubling.
        http://woodfortrees.org/plot/best/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25
        It is very robust and follows quantitative expectations from physics, with the forcing change of 2 W/m2 already being like a 0.5% solar increase. If the 2 W/m2 were from TSI and not CO2, skeptics would be on board with this upward trend.

      • This is a different argument – one I can’t be bothered having with you.

      • RIE, Pat Frank shows plus or minus ten degree divergences in these periods. A few degrees may be understandable, but ten, no. And he does it with just one model, not perturbed models, so you are not even talking about his problem – and you probably didn’t look at what he is saying. Maybe you should before commenting further.

      • Jim D, uncertainty is an ignorance measure. I pointed that out in my post of January 23, 2018 at 3:53 pm.

        Uncertainty is a measure of the reliability of calculated physical states, but is not itself a physical magnitude.

        This is the standard meaning. See, for example, C. J. Roy and W. L. Oberkampf (2011) A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing Comput. Methods Appl. Mech. Engineer., 200, 2131-2144.

        But you apparently didn’t grasp that point, either, on your way to confusing ±K with K.

        The uncertainty in Robert Ellison’s posted figure of January 25, 2018 at 12:41 am is a measure of precision, by the way, not of accuracy.

        Propagated model calibration uncertainty, however, reflects accuracy and is a valid indicator of projection reliability.

      • 10 K of uncertainty may be a measure of your ignorance, but it is not that of science. We know from physics that the temperature will not vary ten degrees in the absence of a forcing change, neither in models nor in nature. Trying to make this about semantics and not actual physical temperatures just doesn’t cut it with me.

      • Jim D “RIE, Pat Frank shows plus or minus ten degree divergences in these periods.

        No, I do not. You’re again making the mistake of a very naïve college freshman, Jim. Your truly nonsensical mistake won’t be right no matter how often you repeat it.

        It is blindingly obvious that ±K uncertainty is not temperature, K.

      • No, your uncertainty is not physical, it is just mental. I get it.

      • Jim D, you’ve transitioned from naïve ignorance to willful ignorance; the entire AGW paradigm in miniature.

      • You haven’t shown that you can distinguish between interannual natural variation and long-term trends that both nature and models produce. The interannual variation of the temperature is shown here.
        http://woodfortrees.org/plot/best/mean:12/derivative/scale:12
        Typically several tenths of a degree per year. Its long-term trend is ten times less but still easily detectable.
        http://woodfortrees.org/plot/best/mean:120/mean:240/plot/best/from:1987/trend
        A similar thing happens with net radiation which is physically connected to the temperature variation both on interannual and long-term time scales. Models capture both these variations and trends.

      • You’ve taken recourse to a diversionary irrelevance, Jim D. Seeing that, one suspects even you know that you don’t know what you’re talking about.

      • Thanks for your input.

      • Enter with error, exit with negligence.

      • The true skeptics are not fans of the stuff you’ve been peddling for ten years either.
        http://www.skepticforum.com/viewtopic.php?t=9961#p130963

      • Fandom doesn’t matter, Jim. The error analysis is correct.

      • You also have your hilarious item at WUWT where you got multiple journal rejections for trying to use a simple random walk model to represent climate models instead of autoregression. Rookie mistake. Your own assumption about the time series was wrong and it got worse from there.
        https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/

      • Wrong again, Jim D. You clearly didn’t read the WUWT post you criticized. There’s a surprise.

        I showed that climate model air temperature projections are mere linear extrapolations of GHG forcing. However, reviewers rejected the paper because they repeatedly made freshman-level naïve mistakes similar to yours.

        They think, like you, that ±K is a physical temperature. There’s a hilarious mistake for you.

        They have also mistaken a ±Wm^-2 for an energetic perturbation, and think that the necessarily offsetting errors of tuned models somehow improve the deployed physical theory. Incredible but true.

        These examples indicate the truly impoverished quality of thought one finds among consensus climate scientists. Poor thought just like yours, Jim D.

        The post you linked lists 12 fatal reviewer errors of thinking. Any one of those mistakes would indicate the culprit is not a scientist. And yet they flatter themselves with the label.

      • Your first mistake is the use of a random walk statistical model to represent temperature series. Anyone who just looks at temperature series knows that this does not represent the past, either in models or in climate, and yet you assume it represents the future regardless of this fact. People who know about series and statistics use an autoregression model that also has a stochastic component to represent interannual variability. Your failure to justify your silly random walk model over the commonly used autoregression at the outset is a reason for rejection in itself. Trying to shop your silly walk model to journals with people who know about statistics is what your hilarious episode on WUWT is about. Nobody fell for it, and the editors and reviewers probably had a hard time keeping a straight face with your effort, which has been going on for ten years now. The fact that you haven’t even picked up any climate-skeptical statisticians as supporters should tell you something.
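
        A sketch of the statistical distinction at issue, with illustrative parameters: given the same shock size, a random walk’s spread grows without bound (as √n), while an AR(1) process settles to a fixed spread of σ/√(1 − φ²):

```python
# Random walk vs AR(1): same shocks, very different uncertainty growth.
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_years, sigma, phi = 2000, 100, 0.2, 0.6
shocks = rng.normal(0, sigma, (n_runs, n_years))

walk = shocks.cumsum(axis=1)          # random walk: deviations accumulate forever
ar1 = np.zeros_like(shocks)           # AR(1): a restoring pull toward the mean
for t in range(1, n_years):
    ar1[:, t] = phi * ar1[:, t - 1] + shocks[:, t]

# walk ~ 0.2*sqrt(100) = 2.0 K; AR(1) ~ 0.2/sqrt(1 - 0.36) = 0.25 K
print("spread after 100 yr - walk:", walk[:, -1].std(), " AR(1):", ar1[:, -1].std())
```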

      • “Pat,
        This has already been explained to you numerous, so it’s unlikely that this attempt will be any more successful than previous attempts. The error that you’re trying to propagate is not an error at every timestep, but an offset. It simply influences the background/equilibrium state, rather than suggesting that there is an increasing range of possible states at every step. For example, if we ran two simulations with different solar forcings (but everything else the same), this wouldn’t suddenly mean that they would/could diverge with time, it would mean that they would settle to different background/equilibrium states.” ATTP

        This is of course Jimmy’s argument, and it is completely and demonstrably wrong. There is no doubt – it is not climate, it is math, and the experiment has been done many times. But the meme shared by Jimmy and Ken Rice refuses to die.

        “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” Julia Slingo and Tim Palmer

        The cause is uncertainty in initial conditions and the result is irreducible imprecision. Regardless of motivated nonsense from Jimmy D.

      • OK, so do you agree with Pat’s 30 degree range of possibility by 2100 or not? In what version of climate model physics is that even possible?

      • The versions that have the physics of deterministic chaos at their core. Oh… wait… that’s all of them. Agnotology is strong with this one.

      • So you are now agreeing with this joke of a representation?
        http://cornwallalliance.org/wp-content/uploads/2016/08/Patrick-Frank-Uncertainty-Propagation-in-Projections-of-GAT-300×250.jpg
        Note how a model that started in 2000 already has ten degrees of uncertainty by 2020, and the fastest error growth is in year 1(!) Clearly models didn’t do this. What went wrong?

      • Yes I do Jimmy and your failure is not my problem. I can barely read the graph but very obviously models do something like that.

      • You earlier posted a graphic of the CMIP3 models with a linear growth to a few degrees, so now you hold two widely conflicting thoughts in your mind at the same time. How do you resolve the difference? Show your other graph to Pat Frank and ask him why it is so different and which one he believes.
        https://wattsupwiththat.files.wordpress.com/2015/02/clip_image008_thumb1.png?w=542&h=354

      • No Jimmy – what I showed was a single model with 1000’s of divergent solutions. About the best Rowlands et al could say of this perturbed physics ensemble – constrained to only those results that were close to measured temps to 2010 – was that the range was greater than with the IPCC multi-model ensembles.

      • Ask what Pat Frank showed then, and compare and contrast. He might think what you showed is actually impossible or highly coincidental that all these models follow such a similar random path.

      • All which models Jimmy dear?

      • You are confused and making ad hoc claims.

      • Maybe you can show him this picture from one of his articles at WUWT and ask him whether he doesn’t believe the one on the left because he doesn’t explicitly say, or even try to explain the large difference between his results and the actual facts.
        https://wattsupwiththat.files.wordpress.com/2015/02/clip_image0022.png
        If you want to go down his rabbit hole, here it is. Lots of rambling and denigrating, but ludicrous rather than lucid.
        https://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/

      • That looks like the IPCC multi-model ensemble – each of those solutions is arbitrarily chosen after the fact from 1000’s of feasible solutions. No mystery there – not much that is scientifically justifiable either.

      • Here’s something from his linked article – see if you agree.
        “The rapid growth of uncertainty means that GCMs cannot discern an ice age from a hothouse from 5 years away, much less 100 years away. So far as GCMs are concerned, Earth may be a winter wonderland by 2100 or a tropical paradise. ”
        Really? Then why don’t their results show that? Explain that.

      • You really don’t get that there is no unique solution to any of these models. And models do behave that way without any question about it – except from the inept and misguided.

      • So you buy into the 5 years to a GCM Ice Age bunk? All I needed to know. You need to be a bit more discerning.

      • What happens in the Lorenzian chaos of models is very different to the spatio-temporal chaos of climate – but both are subject to more or less abrupt and extreme change purely from internal dynamics.

        “Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.” https://www.nap.edu/read/10136/chapter/2

        It is not really odd that climate fanatics are so clueless.

      • This is the part where you believe Frank that a GCM can go from the current climate to an icehouse or hothouse in 5 years, despite the fact that it has not even come close to happening in centuries of CMIP climate simulations. You and Frank don’t let facts get in the way.

      • “Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.” http://www.pnas.org/content/104/21/8709.full

        Realistically – models have extreme imprecision that is mathematically indeterminate beforehand. In practice a single solution within a very broad range is arbitrarily chosen on the basis of what it looks like and this is included in CMIP ensembles. It is one of the dumbest bits of so called science ever. We don’t get to see extremes in CMIP ensembles.

      • OK, now you are saying things that Frank didn’t, and it is incomprehensible anyway. You seem to think there is a step where the majority of CMIP solutions need to be discarded, otherwise you can’t explain your result. Can you and he get a consistent story together, especially the part about why heat capacity is ignored in his model, as well as any feedback of the temperature onto the net radiation (the Planck feedback)?

      • You don’t have the intellectual building blocks to understand. And are insulting and abusive. Go away.

      • I try to give you points to ponder and you come back with nothing. Heat capacity, Planck response. Try an energy-conserving model that accounts for these things instead of Frank’s random walk model, which is devoid of any concept of energy conservation or balance. It’s fundamental.
        Energy in − energy out = internal warming rate
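
        A toy energy-conserving model of the kind being invoked here, with a heat capacity and a Planck-type restoring term; the parameter values are conventional round numbers, not any GCM’s tuning:

```python
# C dT/dt = F - lambda*T : the warming rate is set by the energy imbalance.
import numpy as np

C = 8.0       # effective heat capacity, W yr m-2 K-1 (ocean mixed-layer scale, assumed)
lam = 1.2     # net feedback parameter, W m-2 K-1 (Planck minus other feedbacks, assumed)
years = 100
F = np.linspace(0, 4, years)          # forcing ramp to 4 W/m2 over a century
T = np.zeros(years)
for t in range(1, years):
    T[t] = T[t - 1] + (F[t] - lam * T[t - 1]) / C   # Euler step, dt = 1 yr

print(f"warming after {years} yr: {T[-1]:.2f} K "
      f"(equilibrium for that forcing: {F[-1] / lam:.2f} K)")
```

        The restoring term is what bounds the response: set lam to zero and the same loop simply integrates the forcing with nothing to pull the temperature back.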

      • http://www.pnas.org/content/pnas/104/21/8709/F1.large.jpg

        You clearly don’t understand the math – and with you it is all memes.

      • I corrected Pat Frank’s analysis using his own methods elsewhere just now. Take a look. See if you agree that he underestimated his level of uncertainty by 1000%. He is a very uncertain person, and the numbers go to show it. It just uses monthly instead of annual steps.

      • You don’t understand Pat Frank’s analysis – you correcting it would be monumentally unlikely.

      • His “analysis” consists of an equation with a constant and a square root function. Not much there to understand, right?

      • “In conventional deterministic systems, the system state assumes a fixed value at any given instant of time. However, in stochastic dynamics it is a random variable and its time evolution is given by a stochastic differential or difference equation:

        ẋ(t) = f(t, x) + g(t, x)Γ(t)   (3)

        where g(t, x)Γ(t) is a stochastic term representing the combined effect of un-modeled dynamics and external disturbances acting on the system. In the orbit uncertainty propagation problem, this term can constitute un-modeled external disturbances such as solar radiation pressure, atmospheric drag for low-earth-satellite orbits etc. We further assume Γ(t) to be a zero-mean white-noise process with the correlation matrix Q. The uncertainty associated with the state vector x(t) is usually characterized by a time-dependent state pdf p(t, x). In essence, the study of stochastic systems reduces to finding the nature of the time-evolution of the system-state pdf.”

        Error propagation in climate models is approximated by random variables in the solution space. Pat Frank starts with a cloud long wave uncertainty of ±4 W/m2, and the solution space expands with each simulation time step.

        It is one approach, and the spread of the resultant solution space is more or less right. But as I said – the idea is a little beyond Jimmy dear.

        https://wattsupwiththat.files.wordpress.com/2016/11/pat_frank_hansen_uncertainty.jpg
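
        A hedged sketch of the pdf-propagation idea in that quote: Euler–Maruyama integration of an ensemble under a toy nonlinear drift (not any climate model’s physics), with identical initial states spreading into a state pdf:

```python
# dx = f(x) dt + g dW, integrated for an ensemble; watch the state pdf spread.
import numpy as np

rng = np.random.default_rng(3)
n, steps, dt, g = 5000, 1000, 0.01, 0.5
x = np.zeros(n)                  # whole ensemble starts at the same state

def f(x):                        # toy double-well drift: two 'regimes'
    return x - x**3

for _ in range(steps):
    x += f(x) * dt + g * np.sqrt(dt) * rng.standard_normal(n)

# identical initial conditions have spread into a bimodal state pdf around +/-1
print("ensemble mean:", x.mean().round(3), " std:", x.std().round(3))
```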

      • The actual annual temperature variability is 0.2 C, not 2 C as Frank would have it, so even if you wanted an annual-step random walk with that, you only get to 2 C in a century – a random walk’s spread grows as the square root of the number of steps, so 0.2 C × √100 = 2 C. A random walk also assumes that even when the temperature reaches a new extreme like an El Nino it still has equal chances of going up as down, which is not the case, and is where autoregression is a far superior representation of how temperature behaves both in nature and in GCMs, because the GCM mean doesn’t just drift at random by 5 degrees in 5 years.

      • You are imagining that the model solution space has something to do with the real world. The former is largely the result of sensitive dependence on initial conditions and strange attractors. Math of dynamic systems Jimmy dear.

      • Can the mean global temperature drift by 5 degrees in 5 years in either GCMs or the real world? Of course not because of energy conservation and heat capacity. It’s physics not idealized mathematics. Even attractors have limits, which a random walk does not. Different systems entirely, surprisingly to you.

      • There is abrupt and extreme change in the real world – regionally as much as 16C in a decade.

        We don’t know the dimensions of strange attractors in model space – but as I said Pat Frank is more or less correct. They have limits? Whoop de doo. You can’t make that leap from forcing to nonlinear dynamic models, can you?

        “As the ensemble sizes in the perturbed ensemble approach run to hundreds or even many thousands of members, the outcome is a probability distribution of climate change rather than an uncertainty range from a limited set of equally possible outcomes, as shown in figure 9. This means that decision-making on adaptation, for example, can now use a risk-based approach based on the probability of a particular outcome.” Slingo and Palmer 2011

        The ‘limited set of equally possible outcomes’ is the multi-model CMIP ensembles. But each of these outcomes is arbitrarily chosen – we could charitably call it expert judgement – from 1000’s of equally feasible outcomes. It is a scam that depends on people like you not understanding much about the nature of the equations at the core of models.

      • I agree with this part “This means that decision-making on adaptation, for example, can now use a risk-based approach based on the probability of a particular outcome.” But if Frank and you say the outcome is uncertain to 30 degrees (it isn’t), what would you propose?

      • I would presume that the most probable real world outcome this century is between minus 5 and plus 2 degrees. The model solution space is very different. But we are still far from being able to construct a pdf. And whether you agree or not is far from conclusive.

      • The model solution depends mostly on the emissions but ranges by about 5 degrees, not 30, so Frank is quite wrong on that one, and you rightly disown that (you should tell him why). The climate also could cover the same 5 degree range barring some kind of tipping point with one or both ice caps, which is the only way to get some cooling (see Hansen).

      • Model solutions – note the plural – exponentially diverge from small and not so small differences in initial conditions.

        http://rsta.royalsocietypublishing.org/content/roypta/369/1956/4751/F8.large.jpg
        “Schematic of ensemble prediction system on seasonal to decadal time scales based on figure 1, showing (a) the impact of model biases and (b) a changing climate. The uncertainty in the model forecasts arises from both initial condition uncertainty and model uncertainty.”

        Models are unreliable – CMIP has no rational scientific justification (Pat Frank’s point), Hansen even less so. It is just what modelers have been saying for decades.

        As for the climate system – it remains complex and uncertain. But climate change is now a social movement rather than an enterprise resting on sound science.

        “The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation. We place strong emphasis on using isotopes as a means to understand physical mixing and chemical cycling in the ocean, and the climate history as recorded in marine sediments.” Wally Broecker

        There are changes in polar annular modes that – as solar activity declines – may cool high northern latitudes a couple of degrees C this century. There are related Pacific upwelling changes that influence cloud properties through Rayleigh–Bénard convection.

      • The model solutions don’t diverge more than the emission scenarios when you look at them. The emission total is the largest factor in the temperature uncertainty by 2100 and this is down to policy.

      • The reality is so evident that your usual punch drunk narratives can’t begin to obscure truth. This is where I started so I may as well finish with it.

        https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1515528276356.png
        Rowlands et al 2012

        Single model – single emission scenario. Only those Lorenzian solutions that pass close to recent temperatures are kept – but still with a broader range of outcomes than the CMIP. Solutions are exponentially divergent purely as a matter of mathematics, due to the nonlinear core equations.

        Climate is a coupled, nonlinear, spatio-temporal chaotic system. Do you imagine that models capture that numerically? Not a chance.

      • Clearly you need to look at the AR5 results again. The model plumes are separated by scenario. Scenarios cause the wider spread. Not sure you understand when I say the biggest uncertainty is emissions. Maybe a picture helps.
        http://www.carbonbrief.org/media/137442/knutti_and_sedlacek_sres__vesus_rcps.jpg

      • I showed you a single model and a single scenario with 1000’s of nonunique solutions. Why this is unclear to you I don’t know. The experiment has been done.

        The CMIP members you can’t get past are arbitrarily chosen from 1000’s of solutions on the basis of a posteriori solution behavior. What would be surprising is if a different picture emerged from this sham science.

      • “The CMIP members you can’t get past are arbitrarily chosen from 1000’s of solutions on the basis of a posteriori solution behavior.” Reference? Just made up by you? I go with the second.

      • I showed you 1000’s of solutions for a single model and a single emission scenario. Which is correct and why?

        “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

        Models cannot give a unique solution. I have been over and over it with you and now you ask for another cite. Cut the cr@p Jimmy dear. I am not the one who makes things up.

      • That spread is mostly within the IPCC range. What are you arguing about here?

      • Which solution – if any – is correct? Models can’t tell you. Perhaps a probability density function may be possible in future – but not yet.

        http://rsta.royalsocietypublishing.org/content/roypta/369/1956/4751/F8.large.jpg

        Nor do we know the limits of ‘irreducible imprecision’.

        “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709.full

        This has been understood for decades. It is news only to climate fanatics – it just doesn’t suit the narrative.

      • You can look at the actual AR5 models in Chapter 12, and their spread, as I linked above. This clearly shows that the larger uncertainty is the emissions. Showing only one emission scenario misses the point I am making. Emission scenarios affect whether the rise is 1 C or 6 C by 2100, while sensitivity only modifies that by +/-30%.

      • “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.” TAR 14.2.2.2

        They are talking about perturbed physics ensembles, of course. It is the last time the IPCC made any sense at all.

      • This is you falling hook, line and sinker for the Frank stuff. Be skeptical. It’s crap. What if his time step was a month instead of a year, and the monthly error is larger than his annual average of 4 W/m2? Where does his result end up then? Think it through. He doesn’t even conserve energy which is how he gets the world to cool enough for an Ice Age in 5 years. The ocean can’t physically do that, but in his version of the world, it does. GCMs have an energy balance equation and heat capacity considerations that prevent that, but you prefer his unconstrained random walk version of things. This says a lot about your gullibility to cranks, which is a shame.

      • Let me just say this. The model memes are so egregiously and obviously wrong that the parsimonious explanation is that this is a deliberate lie aimed at a gullible and innumerate public. Pretty much like Jimmy D.

      • You use wood for dimwits – simplistic unscience – and it has nothing to do with the subject under discussion.

      • Just observations. You don’t like observations, no surprise. You say I talk about models when I focus much more on what is already happening and the evidence the past provides, so I am correcting you on that.

      • You focus on CO2 and surface temp – and make heroic ass-umptions about the complexities and uncertainties of both models and climate. But this was about models – so if you want to change the topic back (ad nauseam) to your simplistic ass-umptions, do it without me.

      • Jim D, “Your first mistake is the use of a random walk statistical model to represent temperature series.

        Wrong again, Jim D. Nowhere in any of my work on climate models, and most especially not in my talk or in my manuscript, do I ever address temperature series.

        In light of that zeroth order mistake, the rest of your post is nonsensical.

        You’ve made one mistake after another. First in your naïve idea that ±K is a real physical temperature, then in your willful ignorance on that point, and now in grousing about non-existent temperature series analyses.

        You’re a special case of inerudite, Jim D., clearly showing problems ranging from a willfully refractory ignorance to a lack of reading comprehension bordering on hallucinatory.

      • Ignoring model temperature series completely would also explain why they all rejected you: it means you are not grounded in the facts. You need to pay attention to what models actually do to have any relevance at all. Temperature series from nature and from models don’t have the square-root divergent behavior you show, and the simple reason is that the random walk model is not a correct representation of error growth. But if you haven’t figured that out after ten years, what hope is there.

      • Robert Ellison, the error with which I work is model long wave cloud forcing error. It stems from a model calibration error. Pace ATTP, but it’s a ±uncertainty and not a constant offset.

        It is inherent in the models, and so makes an appearance in every single calculational time step. ATTP never figured that out, no matter that I explained it repeatedly. And he apparently is a trained physicist.

        The same error puts your initial conditions error into each time step, too, because every climate projection time step begins with an erroneous climate state. I have yet to see that fact realized by any climate modeler.

      • Since your attempted paper makes no reference to any real model results, it is a rather pointless effort. It is like if someone wrote a paper with a cockamamie formula for the trajectory of a ball, but never showed a real case that would prove it wrong anyway. Would you accept such a paper?

      • Jim D, “OK, so do you agree with Pat’s 30 degree range of possibility by 2100 or not?

        My work supposes no 30 degree range of possibility by 2100. I keep telling you that, Jim D. The ±K is not a physical temperature.

        But you insist on repeating the same foolish mistake.

        Your mistake is that of someone who either hasn’t ever taken, or never understood, even high school physics or chemistry.

        Insisting on such an obvious non-starter is plain idiotic.

      • You said GCMs give 5 years to an Ice Age or hothouse. If that is not a physical temperature, what is?

      • Jim D., “Since your attempted paper makes no reference to any real model results, it is a rather pointless effort.

        My paper examines more than a dozen real climate models, Jim, including both CMIP3 and advanced CMIP5 versions.

        You have looked at my work and understood absolutely nothing of it, not even what is obviously present before your eyes.

        Look at your video link, Jim. It shows the analysis of CMIP3 model projections. Your own link disproves your claim.

        Uncertainty goes as the square root of the sum of squared errors. That’s why it starts out relatively large.

        Your posts demonstrate one thing only, Jim. You don’t know what you’re talking about. You don’t know anything about anything important to the debate.
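
        For a constant per-step uncertainty $\sigma$, that root-sum-square rule reads

        $$\sigma_N = \sqrt{\sum_{i=1}^{N} \sigma_i^2} = \sigma\sqrt{N},$$

        already sizeable after the first few steps and growing with the square root of the number of steps thereafter.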

      • You don’t show the models having that kind of error growth with actual time series of their temperatures. If you did look at them you would see that you were wrong, and that is why they rejected you: they know the error growth doesn’t look like that. Don’t you understand why people would want to see that as evidence? The square-root behavior is the random walk result, which is just a plain wrong assumption by you straight off the bat.

      • Pat Frank – most model parameters have some degree of uncertainty, and that extends to cloud parameterization. Changes in model structure – e.g. running a model without greenhouse gases – introduce structural instability. Uncertainty about physical parameters produces divergent solutions of the nonlinear set of core fluid-flow equations. Schematically:

        https://watertechbyrie.files.wordpress.com/2017/04/slingo-and-palmer-e1506928933615.png
        http://rsta.royalsocietypublishing.org/content/369/1956/4751

        Decadal forecasts may be possible sometime in the future – but ad hoc multi-model ensembles projecting over a century are inherently fraudulent (charitably incompetent) science.

      • Jim D, uncertainty is not error. Growth of uncertainty is not growth of error. I have explained this repeatedly to you. But you continue to make the same mistake over, and yet over again.

        There’s no point in explaining anything to one so adamantly disposed to remain ignorant, as you so clearly are.

        Robert Ellison, I agree with your point about models and the divergence caused by varying parameters within their uncertainty range (which climate modelers call ‘perturbed physics’ experiments).

        No one has ever propagated all the parameter uncertainties through a model projection. I’d imagine, were that done, the projection uncertainty envelope would be immediately gigantic.

        In my work, I show that models merely linearly extrapolate GHG forcing to produce their temperature projections. That justifies the linear propagation of error through those projections. Propagated cloud forcing error alone showed the projections are without merit.

      • You can get a much faster error growth with your method if you use a one-month time step instead of a year. Also, GCM timesteps are short enough that there are millions of timesteps in a century – maybe you can get 1000 degrees of uncertainty out of that just by using the GCM timestep in your method. Clearly something is wrong with a method whose level of uncertainty depends on the arbitrary choice of a year as the time step. You need to think it over again from scratch.

      • “You’re a special case of inerudite, Jim D., clearly showing problems ranging from a willfully refractory ignorance to a lack of reading comprehension bordering on hallucinatory.”

        Perfect description of garden variety very annoying huffpo drone. We have been looking for those precise words for many years. Thanks, pat.

      • Jim D., “You can get a much faster error growth with your method if you use a one month time step instead of a year.

        It’s not error growth, Jim. It’s growth in uncertainty. Please get with the program.

        Second, no one has reported the average model calibration error for one month. So, you can’t have any idea what happens to the uncertainty envelope.

        It is true that a more thorough knowledge of error could produce much larger uncertainties. But that would just more strongly indicate the unreliability of the models, not any mistake in error propagation.

        You wrote, “Clearly something is wrong with a method that relies on the arbitrary choice of a year as a time step for its level of uncertainty.

        The choice of a year stemmed from the use of the annual mean model cloud forcing error published by Lauer and Hamilton. That was mentioned clearly in my DDP presentation.

        Guess what annual means in “annual mean error,” Jim.

        Let’s notice here that you never really confront your own errors and mistakes, Jim. Every time I refute one of your claims, you skip over it and just go on to invent another.

        Your behavior in this conversation cannot be described as honorable.

      • If the annual average error is ±4 W/m2, the monthly variation must be higher, by the simple rule that averaging more random numbers reduces the variance. This is certainly true of surface temperatures, which would vary like radiation on monthly and annual time scales. The annual and monthly data show what I mean; the 12-month average variation is, of course, smaller.
        http://woodfortrees.org/plot/best/from:2000/mean:12/plot/best/from:2000
        Use a monthly timestep to capture that variability and see what you get.

      • Doing your mathematics for you. If the monthly uncertainty is over three times larger than 4 W/m2 and you take 12 monthly steps, your annual uncertainty growth is over ten times larger than you have in your paper, so it is off by 1000%. At that uncertainty level you no longer know whether the GCM can discern an icehouse from a hothouse within 6 months, instead of 5 years. You need to revise your numbers accordingly – it’s just what your method says should be done. Reductio ad absurdum.
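
        Spelled out, on the reading that the annual ±4 W/m2 is the average of twelve noisier monthly values (my arithmetic, for illustration only):

        $$\sigma_{\rm month} = 4\sqrt{12} \approx 13.9\ {\rm W/m^2}, \qquad \sigma_{\rm 1\,yr} = \sigma_{\rm month}\sqrt{12} = 12 \times 4 = 48\ {\rm W/m^2},$$

        twelve times the 4 W/m2-per-annual-step growth – hence “off by 1000%”.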

      • Yes, Jim, and if monthly uncertainty were a million times larger than ±4 Wm^-2, then wowie! Or 10 million times! Imagine that!

        Really Jim, give it up. You’re not doing mathematics. Or science. Or anything useful.

        And you’re still wrong about the meaning of uncertainty. It doesn’t “predict” an icehouse or hothouse. You’re still confusing ±K with K.

      • The monthly uncertainty is about three times as much as the annual one when you check it, so you need to do your sums again with this new information. That’s all I am saying. You want large uncertainties. I show you how to get them. Don’t you want it to be that large or what?

      • The icehouse comment is based on what you said “The rapid growth of uncertainty means that GCMs cannot discern an ice age from a hothouse from 5 years away, much less 100 years away. So far as GCMs are concerned, Earth may be a winter wonderland by 2100 or a tropical paradise. ” Do you still believe that they could be off by that much in 5 years, and if so, which changes, the real climate or the GCM?

      • Happy to oblige, Don. :-)

      • Just try to pull your punches a little, pat. We don’t want him quitting on us.

      • That’s fine. Some people have opinions I don’t care about.

      • Jim D: Reductio ad absurdum.

        You do that a lot.

        Sorry. I couldn’t resist.

      • Indeed in this case, Frank has a method that doesn’t converge as you add more data. It diverges. Clearly something is wrong there and I have said what, but there are other dubious assumptions even apart from the random walk.

      • Jim D, “ Do you still believe that they could be off by that much in 5 years…

        Read again, Jim. I wrote that GCMs can’t discern the difference between those states. That is not the same as writing they predict both those states, or one of them or the other.

        You’ve continually misrepresented my analysis. Not one single objection you’ve raised has had any relevance at all. They’ve all been either analytically wrong or scientifically wrong. You’ve given no evidence whatever that you know what you’re talking about.

        You also wrote, “The monthly uncertainty is about three times as much as the annual one when you check it…

        Lauer and Hamilton did not report a monthly long wave cloud forcing uncertainty. Where did you get your estimate?

        The ±4 W/m^2 is an annual average root-mean-square uncertainty, derived from 20 years of data. If one puts that back into the Pythagorean equation and calculates a nominal monthly average uncertainty, it comes out to be ±1.15 Wm^-2. So how do you figure three times larger, Jim?

        You wrote, “Frank has a method that doesn’t converge as you add more data. It diverges.

        If one puts that nominal monthly ±1.15 W/m^2 into the error propagation equation, one gets a centennial uncertainty in air temperature of about ±16 C; entirely comparable with the estimate in my analysis over annual time scales. So where do you get “it diverges,” Jim?
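
        A quick numerical check of that arithmetic – on the reading that the annual ±4 W/m2 is the root-sum-square of twelve monthly contributions:

        ```python
        import math

        sigma_annual = 4.0                          # W/m^2 annual mean calibration error (Lauer & Hamilton)
        sigma_month = sigma_annual / math.sqrt(12)  # ~1.15 W/m^2 per month on this reading

        print(round(sigma_month, 2))                    # 1.15
        print(sigma_annual * math.sqrt(100))            # 40.0 after 100 annual steps
        print(round(sigma_month * math.sqrt(1200), 1))  # 40.0 after 1200 monthly steps
        # Identical accumulated uncertainty either way: on this reading the
        # propagation is invariant to step size. On the sqrt(12)-larger monthly
        # reading argued above, it instead grows twelve times faster.
        ```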

      • As with temperature, and I showed that, the radiation variability at monthly scales is several times larger than at annual scales, and averages to 4 W/m2 over a year. If you think the monthly radiation anomaly is a smooth function, but also varies discontinuously from year to year, you have a very improbable function there. It is likely that the radiation anomaly tracks the temperature anomaly which I show again here.
        http://woodfortrees.org/plot/best/from:2000/mean:12/plot/best/from:2000

      • When you say they can’t discern the difference between Ice Ages and a hothouse 5 years from now, why?
        Also, their annual temperature fluctuations are only a few tenths of a degree in standard deviation, and so your estimate of their variability is wrong at the outset by about a factor of ten, and yet you used it along with the wrong model of error propagation.

      • Jim D, see my reply to you at February 2, 2018 at 2:04 am. Sorry, I posted under the wrong reply link.

    • The climate system is – we presume – an ergodic spatio-temporal chaotic system with limits determined by internal variability. Within those limits there are, at least locally, shifts of tens of degrees in as little as a decade.

      Models are very different – they have 1000’s of plausible divergent solutions due to the nonlinearity of the core equations of fluid transport. And they choose just one for inclusion in opportunistic ensembles. On what basis?

      “The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.” http://www.pnas.org/content/104/21/8709.full

      That’s right – they pull it out of their ass.

      Jimmy has memes but no science or math.

      • The primary uncertainty for future temperature is from uncertainty in GHG emissions, not sensitivity or ergodic internal behaviors. It makes an enormous difference whether we emit many thousands of GtCO2 or just one.

      • A few W/m2 of forcing seems small bikkies in the broad sweep of extreme climate variability. The biggest risk from greenhouse gases is that they may trigger abrupt changes of a scope and rapidity determined by complex and dynamic internal planetary responses.

        But stick to the point for once Jimmy dear – I do not have the patience for your nonsense today.

  23. “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

    Seemingly random is not allowed in Jimmy’s world.
    I am sure I have quoted this already here – but in the goldfish bowl there are only self-referential circles.

    It is the key idea in understanding why a focus on individual CMIP members is misguided. There is always another exponentially diverging solution for any model – both high and low sensitivity.

    It is also a key to understanding abrupt climate change due to internal variability – something radically underappreciated in IPCC assessments, although not in science more generally.

    • I’ll repost this from above.
      You need to look at what happens when the same model is run only changing the initial conditions (the LENS project). This shows the level of “chaos” that can develop.
      http://www.cesm.ucar.edu/projects/community-projects/LENS/images/Figure1.gif
      The forcing signal is bigger.

      Bottom line. Weather is chaos. Climate is predictable from forcing.

      • Basically, it appears that nobody he liberally quotes agrees with his twisted interpretations of their work. Big words; big quotes; same old churlish behavior.

      • The 40-member LENS experiment starts with round-off error perturbations in 1920 (Lorenz-style fluctuations), and sees how they grow from there in a fully coupled climate model. If weather chaos was going to turn into climate chaos, this would show it. But the climate change was robust to weather fluctuations, and they all just obediently follow the forcing change.
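
        A toy version of that experiment – one model, one forcing, round-off-scale initial perturbations, with AR(1) noise standing in for internal variability (all numbers illustrative) – reproduces the behavior:

        ```python
        import numpy as np

        rng = np.random.default_rng(2)
        members, years = 40, 180
        forced = np.linspace(0.0, 5.0, years)  # assumed ~5 K forced warming by 2100
        phi, sigma = 0.8, 0.15                 # assumed internal-variability parameters

        T = np.zeros((members, years))
        T[:, 0] = rng.normal(0.0, 1e-14, members)  # Lorenz-style tiny perturbations
        for t in range(1, years):
            anom = T[:, t - 1] - forced[t - 1]
            T[:, t] = forced[t] + phi * anom + rng.normal(0.0, sigma, members)

        # Members decorrelate quickly, but the spread saturates at a few tenths
        # of a degree while every member tracks the forced signal.
        print(round(float(T[:, -1].mean()), 2), round(float(T[:, -1].std()), 2))
        ```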

    • I am impressed – he actually found a study and reproduced the graph. JCH as usual contributes nothing but calumny and trite observations.

      “In response to the applied historical and RCP8.5 external forcing from 1920 to 2100, the global surface temperature increases by approximately 5 K in all ensemble members (Fig. 2). This consistent ∼5-K global warming signal in all ensemble members by year 2100 reflects the climate response to forcing and feedback processes as represented by this particular climate model. In contrast, internal variability causes a relatively modest 0.4-K spread in warming across ensemble members.”

      The model tuned to observations is projected forward. The 5-K is a base choice. There are many other widely divergent solutions possible with realistic choices for input parameters and boundary conditions. The starting points were then shifted – by a 1-day lag in sea surface temperature for member 1, and by random surface temperature changes of order 1E-14 K for the rest. The model uncertainty from this very small initial condition change saturates out at around 0.4-K. A very considerable amplification, producing the anticipated scale of internal climate variability.

      If instead a ±4 W/m2 imprecision in outgoing longwave radiation observations is used – as Pat Frank did – model uncertainty saturates out at considerably higher levels. As was insisted, this is purely model uncertainty – it bears no relation to the real world. The implication is that models can intrinsically provide negligible pertinent information on the evolution of climate. Probabilistic forecasts may be possible – but this is a work in progress. Initialized decadal forecasts may also be possible – with a lead time limited by real-world chaotic divergence from models, for much the same reason that weather forecasts are limited to on the order of a week. Again, this is a work in progress.

      Where realistic perturbations of inputs are used – within the broad limits of observational imprecision or approximate parameterization – and using only observed-temperature-constrained results – with 1000’s of runs – model uncertainty saturates out at several degrees by 2050.

      https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1515528276356.png
      https://www.nature.com/articles/ngeo1430

      The results intended by Jimmy dear to reinforce his confirmation bias are apples to the Rowlands et al oranges.

      • Frank was saying that the models differ from observations by up to 4 W/m2 at any given time. I am fairly sure these do too, and from each other, but note how they do not diverge and that the forcing is driving the change instead. Note also that physics perturbations do not represent what nature does. Nature’s physics is constant, so the LENS experiment is the one that represents the size and trend of natural variability, not the Rowlands study. Nature and models are constrained by the energy balance, much in the way the Lorenz model is constrained to its attractor, so that is why the variability looks so tightly bound to the underlying mean that is itself governed by forcing.

      • Jim’s ad hoc model rationalizations remain absolute nonsense. I note only that while real world parameters have an absolute value – measurements and parameterizations are more or less imprecise.

        Real world physics changes as Hurst dynamics – abrupt shifts between regimes. It is illustrated by changing Pacific dynamics – with a large impact on the radiative budget of the Earth as a result of Rayleigh–Bénard convection – and by changes to atmospheric temperature and water vapor.

        “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.”

        These dynamics reflect back to the quote from Julia Slingo and Tim Palmer – completely deterministic but seemingly random regime change. Despite JCH – I am pretty sure I am interpreting this correctly.

        The 20-30 year Pacific regime changes in atmospheric and ocean circulation sum to variability in climate means and variance over millennia.

        https://watertechbyrie.files.wordpress.com/2014/06/vance2012-antartica-law-dome-ice-core-salt-content.jpg
        https://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00003.1

        There are related regime changes – linked to solar-modulated polar annular modes – that result in variability of a couple of degrees K in higher northern latitudes.

        Changes in polar surface pressure are a trigger – a Lorenzian forcing – for regime change in ENSO, PDO, AMO, AMOC, etc. in the globally coupled, spatio-temporal chaotic flow field. It biases the system to specific outcomes.

        “Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.” NAS 2002

        These guys are still denying that climate is chaotic – science has well and truly passed them by. The odd thing is that triggers are a two-edged sword. But they cling to their memes rather than review assumptions. There is a diagnosis for this.

        With declining solar activity this century – it seems more likely than not that the system will be biased to cooler regimes.

      • The results from LENS don’t support your assertions. Where are your abrupt changes? This is a fully coupled model. All you see are the gradual forcing changes seamlessly accounting for the recent and future warming. Same for nature. You are throwing red herrings all over the place and not looking at what this shows. You are the one that wanted to discuss model results, and now you are trying to change the subject from the most relevant ones that show the role of the chaotic system relative to climate change. These model runs have decadal modes similar to the real AMO, etc., and you can barely see them as random fluctuations against the background of climate change because their amplitude is only tenths of a degree at most. This is the way it is in nature too.

      • Models are chaotic and thus exhibit Hurst effects. That is not remotely the same as modeling Hurst effects in the climate.

        https://www.nature.com/articles/srep09068

        But the LENS results do support exponential divergence of model solution trajectories – from a tiny change in initial conditions to a 0.4-K range.

        Abrupt climate change is as much as 16-K regionally in as little as the decade (NAS 2002).

      • The point is that they stay at the 0.4 K range around a mean that is non-chaotic and clearly controlled by the forcing change common to all the runs. The growth only goes that far, then it hits the physical constraints of the energy balance, an attractor of sorts in Lorenz’s terms. Chaos is bounded, not like your random walk thing.

      • … the decade? Try the indefinite article instead.

      • The LENS change in initial conditions is 0.000000000000001K – that saturates at a model uncertainty of 0.4-K.

      • Abrupt climate change is as much as 16-K regionally in as little as the decade (NAS 2002). – the twister

        Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. – NAS

      • Is he suggesting that abrupt cooling is not possible?

        http://lmgtfy.com/?q=abrupt+cooling

      • RIE, if you knew anything about Lorenz, you would know that the final variability is not a function of the initial perturbation size. Same with these simulations.

      • LOL – citation?

      • Lorenz starts with round-off error and the difference gets to the full attractor space. Butterfly effect. Look it up.

      • Lorenz started in the middle of a run with a rounding to 3 decimal places – he was being lazy – rather than six in the original printout. It should by all that was known not have made much of a difference at all. The rest is history.

      • Yes, indeed, and the difference grew to the size of the attractor. That’s what I am saying.
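
        A minimal twin-run sketch (standard Lorenz-63 parameters, purely illustrative) shows that growth-then-saturation directly:

        ```python
        import numpy as np

        def rk4_step(v, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
            # classic Lorenz-63 equations, advanced one fourth-order Runge-Kutta step
            f = lambda u: np.array([s * (u[1] - u[0]),
                                    u[0] * (r - u[2]) - u[1],
                                    u[0] * u[1] - b * u[2]])
            k1 = f(v); k2 = f(v + 0.5 * dt * k1)
            k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
            return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

        a = np.array([1.0, 1.0, 1.0])
        b2 = a + np.array([1e-14, 0.0, 0.0])  # round-off-scale twin

        for step in range(1, 5001):
            a, b2 = rk4_step(a), rk4_step(b2)
            if step % 1000 == 0:
                # separation grows exponentially, then saturates at the attractor size
                print(step, np.linalg.norm(a - b2))
        ```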

      • The dimensions of the strange attractor are a property of parameter values in Lorenz’s simple nonlinear set of equations. The size and multiple dimensions of climate model attractors cannot be visualized or pre-determined. But in principle more precise observations and finer grids reduce the size of the solution space.

        “By delving deeper, it is also possible to identify the particular parameters that contribute the most to the model uncertainty and focus basic research and model development on those science areas. Likewise, the uncertainty from internal variability may be reduced, at least in the near-term projections, through initializing the model with the current state of the climate system. Nevertheless, because the climate is a chaotic system and contains natural variability on all time scales, there is a level of uncertainty that will always exist however much the model uncertainty is reduced.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

        Do you actually believe the things you make up?

      • You see how constrained the LENS results are. That’s the temperature dimension of effective attractor space, maybe a half degree for global mean annual temperatures. It can’t get outside that range because of the radiative restoring effect which is seen best just after El Ninos in the way the temperature plummets. That’s radiative heat loss because the surface warmth anomaly is unsustainable. Adding GHGs raises the bar on the surface temperatures.

      • Only that the word “cooling” does not appear in the executive summary of the NAS report on abrupt climate change. All of the examples of abrupt change they cite were warming events.

        Wow, the subarctic can get cold.

      • What I said was abrupt climate change – although this does include abrupt warming and cooling.

        “The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.” Wally Broecker

        The Big Kahuna of climate change – and if JCH ever read anything he would be wondering where it would jump next.

        e.g. https://www.ocean-sci.net/10/29/2014/os-10-29-2014.html

      • The NAS was concerned the accumulation of greenhouse gases in the atmosphere could trigger abrupt climate change: a radical warming event like the one that ended the Younger Dryas. They discuss greenhouse gasses; they discuss abrupt warming events.

      • “Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.

        The new paradigm of an abruptly changing climatic system has been well established by research over the last decade, but this new thinking is little known and scarcely appreciated in the wider community of natural and social scientists and policy-makers.” NAS 2002

        I have quoted this before. But abrupt variability goes both ways. As I said – chaos is a two-edged sword. But cooling this century from multiple mechanisms – NH blocking patterns, AMOC decline, Pacific upwelling – seems quite likely.

        I have suggested as well that land use and technological innovation have multiple benefits in addition to reducing the risk of crossing a climate threshold.

      • You’re such a great reader you didn’t even know that the report you have cited about a billion times was primarily concerned with the possibility of an abrupt warming due to the accumulation of greenhouse gases, and then all you can do is insult, insult, insult. And soon you will be hiding behind Professor Curry.

        I really do not know of anybody you cite who agrees with you. You’ve stitched together a laughable Frankenstein joke out of other people’s work, and you protect it with a shield of supreme nastiness.

      • Yes – delete this comment please.

  24. I quote so I don’t get accused of making it up. Wait… Nevermind…

    “Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.” James C. McWilliams – http://www.pnas.org/content/104/21/8709.full

    I have a few standard cites – and some more regulars. A real treasure trove of climate science by illustrious climate scientists. Impeccable – and most are still with us and not turning in their graves at my callous hijacking of their ideas (JCH 2017).