Reply to Patrick Brown’s response to comments on his Nature article

by Nic Lewis

My reply to Patrick Brown’s response to my comments on his Nature article.

Introduction

I thank Patrick Brown for his detailed response (also here) to statistical issues that I raised in my critique “Brown and Caldeira: A closer look shows global warming will not be greater than we thought” of his and Ken Caldeira’s recent paper (BC17).[1] The provision of more detailed information than was given in BC17, and in particular the results of testing using synthetic data, is welcome. I would reply as follows.

Brown comments that I suggested that rather than focusing on the simultaneous use of all predictor fields, BC17 should have focused on the results associated with the single predictor field that showed the most skill: The magnitude of the seasonal cycle in OLR. He goes on to say: “Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract, for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. That is because, as I stated, I disagree with BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models necessarily also applies in the real climate system. I will return to that point later. But first I will discuss the statistical issues.

Statistical issues

When there are many more predictor variables than observations, the dimensionality of the predictor information has to be reduced in some way to avoid over-fitting. There are a number of statistical approaches to achieving this using a linear model, of which the partial least squares (PLS) regression method used in BC17 is arguably one of the best, at least when its assumptions are satisfied. All methods estimate a statistical model fit that provides a set of coefficients, one for each predictor variable.[2] The general idea is to preserve as much of the explanatory power of the predictors as possible without over-fitting, thus maximizing the fit’s predictive power when applied to new observations.
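
To make the setup concrete, here is a minimal sketch in Python using scikit-learn’s PLSRegression on synthetic data. It is not BC17’s code; the array sizes, the three-component choice and the variable names are illustrative assumptions only.

    # Sketch: fit a PLS regression when there are far more predictor variables
    # than observations, as with flattened gridded climate fields.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_models, n_gridcells = 36, 5000                 # e.g. 36 CMIP5 models, one row per model
    X = rng.standard_normal((n_models, n_gridcells))            # predictor fields
    y = X[:, :10].sum(axis=1) + rng.standard_normal(n_models)   # predictand, e.g. 2090 warming

    pls = PLSRegression(n_components=3, scale=True)  # dimensionality reduced to 3 components
    pls.fit(X, y)

    # The fit yields one coefficient per predictor variable (plus an intercept),
    # so a prediction is a weighted sum of the predictor values.
    coefs = pls.coef_.ravel()
    print(coefs.shape)                               # (5000,)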

If the PLS method is functioning as intended, adding new predictors should not worsen the predictive skill of the resulting fitted statistical model. That is because, if those additional predictors contain useful information about the predictand(s), that information should be incorporated appropriately, while if the additional predictors do not contain any such information they should be given zero coefficients in the model fit. Therefore, the fact that, in the highest signal-to-noise ratio, RCP8.5 2090 case focussed on both in BC17 and my article, the prediction skill when using just the OLR seasonal cycle predictor field is very significantly reduced by adding the remaining eight predictor fields indicates that something is amiss.

Brown says that studies are often criticized for highlighting the single statistical relationship that appears to be the strongest while ignoring or downplaying weaker relationships that could have been discussed. However, the logic with PLS is to progressively include weaker relationships but to stop at the point where they are so weak that doing so worsens predictive accuracy. Some relationships are sufficiently weak that including them adds too much noise relative to information useful for prediction. My proposal of just using the OLR seasonal cycle to predict RCP8.5 2090 temperature was accordingly in line with the logic underlying PLS – it was not a case of just ignoring weaker relationships.

Indeed, the first reference that BC17 give for the PLS method (de Jong, 1993) justified PLS by referring to a paper[3] that specifically proposed carrying out the analysis in steps, selecting one variable/component at a time and not adding an additional one if it worsened the statistical model fit’s predictive accuracy. At the predictor field level, that strongly suggests that, in the RCP8.5 2090 case, when starting with the OLR seasonal cycle field, one would not go on to add any of the other predictor fields, as in all cases doing so worsens the fit’s predictive accuracy. And there would not be any question of using all predictor fields simultaneously, since doing so also worsens predictive accuracy compared to using just the OLR seasonal cycle field.
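
A sketch of how that add-one-at-a-time rule could be coded (my illustration of the logic, not Hoskuldsson’s or BC17’s actual procedure) is to add PLS components one at a time and stop as soon as the leave-one-out prediction error stops improving:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def loo_rmse(X, y, n_components):
        """Leave-one-out RMS prediction error of a PLS fit with the given number of components."""
        pred = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=LeaveOneOut())
        return np.sqrt(np.mean((np.asarray(pred).ravel() - y) ** 2))

    def choose_n_components(X, y, max_components=10):
        """Add components one at a time; stop when adding one worsens leave-one-out skill."""
        best_rmse, best_k = np.inf, 0
        for k in range(1, max_components + 1):
            rmse = loo_rmse(X, y, k)
            if rmse >= best_rmse:          # this component made prediction worse: stop
                break
            best_rmse, best_k = rmse, k
        return best_k, best_rmse

    # Tiny synthetic demonstration (sizes are illustrative only).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((36, 200))
    y = X[:, :5].sum(axis=1) + 0.3 * rng.standard_normal(36)
    print(choose_n_components(X, y))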

In principle, even when given all the predictor fields simultaneously PLS should have been able to optimally weight the predictor variables to build composite components in order of decreasing predictive power, to which the add-one-at-a-time principle could be applied. However, it evidently was unable to do so in the RCP8.5 2090 case or other cases. I can think of two reasons for this. One is that the measure of prediction accuracy used – RMS prediction error when applying leave-one-out cross-validation – is imperfect. But I think that the underlying problem is the non-satisfaction of a key assumption of the PLS method: that the predictor variables are free of uncertainty. Here, although the CMIP5-model-derived predictor variables are accurately measured, they are affected by the GCMs’ internal variability. This uncertainty-in-predictor-values problem was made worse by the decision in BC17 to take their values from a single simulation run by each CMIP5 model rather than averaging across all its available runs.
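
The effect of noise in the predictor values on cross-validated skill is easy to demonstrate with synthetic data. The sketch below is illustrative only (the latent-factor structure, sizes and noise level are assumptions, not BC17’s data): it compares the leave-one-out RMS error of a one-component PLS fit using noise-free and noise-contaminated predictors.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def loo_rmse(X, y):
        """Leave-one-out RMS prediction error of a one-component PLS fit."""
        pred = cross_val_predict(PLSRegression(n_components=1), X, y, cv=LeaveOneOut())
        return np.sqrt(np.mean((np.asarray(pred).ravel() - y) ** 2))

    rng = np.random.default_rng(1)
    n, p = 36, 50
    t = rng.standard_normal(n)                        # latent factor tied to future warming
    X_clean = np.outer(t, rng.standard_normal(p)) + 0.1 * rng.standard_normal((n, p))
    y = t + 0.1 * rng.standard_normal(n)

    noise = 3.0 * rng.standard_normal((n, p))         # stand-in for single-run internal variability
    print(loo_rmse(X_clean, y))                       # small cross-validated error
    print(loo_rmse(X_clean + noise, y))               # noticeably larger with noisy predictors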

Brown claims (a) that each model’s own value is included in the multi-model average which gives the multi-model average an inherent advantage over the cross-validated PLSR estimate and (b) that this means that PLSR is able to provide meaningful Prediction Ratios even when the Spread Ratio is near or slightly above 1. Point (a) is true but the effect is very minor. Based on the RCP8.5 2090 predictions, it would normally cause a 1.4% upwards bias in the Spread Ratio. Since Brown did not adjust for the difference of one in the degrees of freedom involved, the bias is twice that level – still under 3%. Brown’s claim (b), that PLS regression is able to provide meaningful Prediction Ratios even when the Spread Ratio is at or virtually at the level indicating a skill no higher than when always predicting warming equal to the mean value for the models used to estimate the fit, is self-evidently without merit.
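
The size of the bias in point (a) is easy to check numerically. The sketch below reflects my reading of the Spread Ratio baseline rather than BC17’s own code: for a no-skill predictor that always predicts the mean of the other models, the leave-one-out RMS error exceeds the spread about the full 36-model mean by sqrt(n/(n-1)), about 1.4%, when the spread uses n-1 degrees of freedom, and by n/(n-1), still under 3%, when it does not.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 36
    y = rng.standard_normal(n)                     # stand-in for the models' 2090 warming values

    loo_means = (y.sum() - y) / (n - 1)            # "predict" each model from the other 35
    loo_rms = np.sqrt(np.mean((y - loo_means) ** 2))

    rms_about_full_mean = np.sqrt(np.mean((y - y.mean()) ** 2))   # divides by n
    sd_unbiased = np.std(y, ddof=1)                               # divides by n - 1

    print(loo_rms / sd_unbiased, np.sqrt(n / (n - 1)))        # both ~1.014 (1.4% bias)
    print(loo_rms / rms_about_full_mean, n / (n - 1))         # both ~1.029 (under 3%)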

As Brown indicates, adding random noise affects correlations, and can produce spurious correlations between unrelated variables. His test results using synthetic data are interesting, although they only show Spread ratios. They show that one of the nine synthetic predictor fields produced a reduction in the Spread ratio below one that was very marginally – 5% – greater than that when using all nine fields simultaneously. But the difference I highlighted, in the highest signal RCP8.5 2090 case, between the reduction in Spread ratio using just the OLR seasonal cycle predictor and that using all predictors simultaneously was an order of magnitude larger – 40%. It seems very unlikely that the superior performance of the OLR seasonal cycle on its own arose by chance.

Moreover, the large variation in Spread ratios and Prediction ratios between different cases and different (sets of) predictors calls into question the reliability of estimation using PLS. In view of the non-satisfaction of the PLS assumption of no errors in the predictor variables, a statistical method that does take account of errors in them would arguably be more appropriate. One such method is the RegEM (regularized expectation maximization) algorithm, which was developed for use in climate science.[4] The main version of RegEM uses ridge regression with the ridge coefficient (the inverse of which is analogous to the number of retained components in PLS) being chosen by generalized cross-validation. Ridge regression RegEM, unlike the TTLS variant used by Michael Mann, produces very stable estimation. I have applied RegEM to BC17’s data in the RCP8.5 2090 case, using all predictors simultaneously.[5] The resulting Prediction ratio was 1.08 (8% greater warming), well below the comparative 1.12 value Brown arrives at (for grid-level standardization). And using just the OLR seasonal cycle, the excess of the Prediction ratio over one was only half that for the comparative PLS estimate.
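
For readers unfamiliar with the approach, the sketch below shows the flavour of regularization used by the main (ridge) variant of RegEM: ridge regression with the penalty chosen by leave-one-out/generalized cross-validation. It uses scikit-learn’s RidgeCV on standardized synthetic data and is not Schneider’s RegEM implementation, which additionally handles missing values through expectation maximization.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    n, p = 36, 500
    X = rng.standard_normal((n, p))
    y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n)

    Xs = StandardScaler().fit_transform(X)          # standardize predictors to unit variance
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 25))  # penalty chosen by efficient leave-one-out CV
    ridge.fit(Xs, y)
    print(ridge.alpha_)                             # selected ridge coefficient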

Issues with the predictor variables and the emergent constraints approach

I return now to BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models also applies in the real climate system. They advance various physical arguments for why this might be the case in relation to their choice of predictor variables. They focus on the climatology and seasonal cycle magnitude predictors as they find, compared with the monthly variability predictor, these have more similar PLS loading patterns to those when targeting shortwave cloud feedback, the prime source of intermodel variation in ECS.

There are major problems in using climatological values (mean values in recent years) for OLR, OSR and the TOA radiative imbalance N. Most modelling groups target agreement of simulated climatological values of these variables with observed values (very likely spatially as well as in the global mean) when tuning their GCMs, although some do not do so. Seasonal cycle magnitudes may also be considered when tuning GCMs. Accordingly, how close values simulated by each model are to observed values may very well reflect whether and how closely the model has been tuned to match observations, and not be indicative of how good the GCM is at representing the real climate system, let alone how realistic its strength of multidecadal warming in response to forcing is.

There are further serious problems with use of climatological values of TOA radiation variables. First, in some CMIP5 GCMs substantial energy leakages occur, for example at the interface between their atmospheric and ocean grids.[6] Such models are not necessarily any worse in simulating future warming than other models, but they need (to be tuned) to have TOA radiation fluxes significantly different from observed values in order for their ocean surface temperature change to date, and in future, to be realistic.

Secondly, at least two of the CMIP5 models used in BC17 (NorESM1-M and NorESM1-ME) have TOA fluxes and a flux imbalance that differ substantially from CERES observed values, but it appears that this merely reflects differences between derived TOA values and actual top-of-model values. There is very little flux imbalance within the GCM itself.[7] Therefore, it is unfair to treat these models as having lower fidelity – as BC17’s method does for climatology variables – on account of their TOA radiation variables differing, in the mean, from observed values.

Thirdly, most CMIP5 GCMs simulate too cold an Earth: their GMST is below the actual value, by up to several degrees. It is claimed, for instance in IPCC AR5, that this does not affect their GMST response to forcing. However, it does affect their radiative fluxes. A colder model that simulates TOA fluxes in agreement with observations should not be treated as having good fidelity. With a colder surface its OLR should be significantly lower than observed, so if it is in line then either the model has compensating errors or its OLR has been tuned to compensate, either of which indicates its fidelity is poorer than it appears to be. Moreover, complicating the picture, there is an intriguing, non-trivial correlation between preindustrial absolute GMST and ECS in CMIP5 models.
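
A back-of-envelope Stefan-Boltzmann estimate illustrates the size of the effect; it assumes, purely for illustration, that the effective emission temperature shifts roughly with the surface, and the 2 K offset is an example value, not a figure from BC17.

    # Rough estimate: near an effective emission temperature of ~255 K, OLR changes
    # by about 4*sigma*T**3, roughly 3.8 W/m2 per K of warming or cooling.
    sigma = 5.670e-8          # Stefan-Boltzmann constant, W m-2 K-4
    T_e = 255.0               # approximate effective emission temperature, K
    dT = -2.0                 # e.g. a model running about 2 K too cold

    d_olr = 4 * sigma * T_e**3 * dT
    print(round(d_olr, 1))    # about -7.5 W/m2, other things being equal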

Perhaps the most serious shortcoming of the predictor variables is that none of them are directly related to feedbacks operating over a multidecadal scale, which (along with ocean heat uptake) is what most affects projected GMST rise to 2055 and 2090. Predictor variables that are related to how much GMST has increased in the model since its preindustrial control run, relative to the increase in forcing – which varies substantially between CMIP5 models – would seem much more relevant. Unfortunately, however, historical forcing changes have not been measured for most CMIP5 models. Although one would expect some relationship between seasonal cycle magnitude of TOA variables and intra-annual feedback strengths, feedbacks operating over the seasonal cycle may well be substantially different from feedbacks acting on a multidecadal timescale in response to greenhouse gas forcing.

Finally, a recent paper by scientists at GFDL laid bare the extent of the problem with the whole emergent constraints approach. They found that, by a simple alteration of the convective parameterization scheme, they could engineer the climate sensitivity of the GCM they were developing, varying it over a wide range, without them being able to say that one model version showed a greater fidelity in representing recent climate system characteristics than another version with a very different ECS.[8] The conclusion from their Abstract is worth quoting: “Given current uncertainties in representing convective precipitation microphysics and the current inability to find a clear observational constraint that favors one version of the authors’ model over the others, the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.” This strongly suggests that at present emergent constraints cannot offer a reliable insight into the magnitude of future warming. And that is before taking account of the possibility that there may be shortcomings common to all or almost all GCMs that lead them to misestimate the climate system response to increased forcing.

[1] Patrick T. Brown & Ken Caldeira, 2017. Greater future global warming inferred from Earth’s recent energy budget, doi:10.1038/nature24672.

[2] The predicted value of the predictand is the sum of the predictor variables each weighted by its coefficient, plus an intercept term.

[3] A Hoskuldsson, 1992. The H-principle in modelling with applications to chemometrics. Chemometrics and Intelligent Laboratory Systems, 14, 139-153.

[4] Schneider, T., 2001: Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values. J. Climate, 14, 853–871.

[5] Due to memory limitations I had to reduce the longitudinal resolution by a factor of three when using all predictor fields simultaneously. Note that RegEM standardizes all predictor variables to unit variance.

[6] Hobbs et al, 2016. An Energy Conservation Analysis of Ocean Drift in the CMIP5 Global Coupled Models. DOI: 10.1175/JCLI-D-15-0477.1.

[7] See discussion following this blog comment.

[8] Ming Zhao et al, 2016. Uncertainty in model climate sensitivity traced to representations of cumulus precipitation microphysics. J. Climate, 29, 543-560.

101 responses to “Reply to Patrick Brown’s response to comments on his Nature article”

  1. Does the raw temperature data show a decreasing trend for the century or not?

    • When one is only curious to one fact in order to draw a sweeping conclusion it is usually a sign they are willing to ignore any number of others. “We’ll give him a fair trial and then we’ll hang em.”

      If the slope for 100-year estimated GMST were negative that would be even more of a headline — “Man Causing Ice-Age.” Determining if the trend is insignificant gets little grant funding or headlines. People want their work to be important.

    • Last year, the sixth day of Christmas saw a thaw on Santa’s doorstep.

      But this year, polar diplomacy has chilled out.

  2. Fuel supply constraints make RCP8.5 a highly unrealistic scenario
    Thanks Nic for exposing statistical weaknesses in Brown et al.
    A far greater challenge is the extremely unrealistic assumptions on availability of future fossil fuel use for IPCC’s RCP8.5. e.g., See:

    Ritchie, J., & Dowlatabadi, H. (2017) Why do climate change scenarios return to coal? Energy, 140, 1276-1291.
    Abstract

    The following article conducts a meta-analysis to systematically investigate why Representative Concentration Pathways (RCPs) in the Fifth IPCC Assessment are illustrated with energy system reference cases dominated by coal. These scenarios of 21st-century climate change span many decades, requiring a consideration of potential developments in future society, technology, and energy systems. To understand possibilities for energy resources in this context, the research community draws from Rogner (1997) which proposes a theory of learning-by-extracting (LBE). The LBE hypothesis conceptualizes total geologic occurrences of oil, gas, and coal with a learning model of productivity that has yet to be empirically assessed.
    This paper finds climate change scenarios anticipate a transition toward coal because of systematic errors in fossil production outlooks based on total geologic assessments like the LBE model. Such blind spots have distorted uncertainty ranges for long-run primary energy since the 1970s and continue to influence the levels of future climate change selected for the SSP-RCP scenario framework. Accounting for this bias indicates RCP8.5 and other ‘business-as-usual scenarios’ consistent with high CO2 forcing from vast future coal combustion are exceptionally unlikely. Therefore, SSP5-RCP8.5 should not be a priority for future scientific research or a benchmark for policy studies.

    Wang, J., Feng, L., Tang, X., Bentley, Y., & Höök, M. (2017). The implications of fossil fuel supply constraints on climate change projections: A supply-side analysis Futures, 86, 58-72.
    Abstract

    Climate projections are based on emission scenarios. The emission scenarios used by the IPCC and by mainstream climate scientists are largely derived from the predicted demand for fossil fuels, and in our view take insufficient consideration of the constrained emissions that are likely due to the depletion of these fuels. This paper, by contrast, takes a supply-side view of CO2 emission, and generates two supply-driven emission scenarios based on a comprehensive investigation of likely long-term pathways of fossil fuel production drawn from peer-reviewed literature published since 2000. The potential rapid increases in the supply of the non-conventional fossil fuels are also investigated. Climate projections calculated in this paper indicate that the future atmospheric CO2 concentration will not exceed 610 ppm in this century; and that the increase in global surface temperature will be lower than 2.6 °C compared to pre-industrial level even if there is a significant increase in the production of non-conventional fossil fuels. Our results indicate therefore that the IPCC’s climate projections overestimate the upper-bound of climate change. Furthermore, this paper shows that different production pathways of fossil fuels use, and different climate models, are the two main reasons for the significant differences in current literature on the topic.

    • Thanks; I had read the Ritchie paper but not the Wang paper. I agree that RCP8.5 is not a realistic scenario. But for statistical analysis of GCM behaviour, simulations based on RCP8.5 offer a higher signal-to-noise ratio than those based on other scenarios.

  3. Merry Christmas!

  4. “for statistical analysis of GCM behaviour, simulations based on RCP8.5 offer a higher signal-to-noise ratio than those based on other scenarios.”

    It also sells the message of doom and gloom global warming, which is the main aim.

    It also offers correlation with the worst performing GCM’s when cherry picking short rising temperature data periods (only).
    Perfect.

  5. Are we going to see an update on the Arctic Ice?

  6. In the original version of Pinocchio, when Pinocchio was sick, one set of doctors said: “I think he will die, but if he doesn’t die, he will live”. Another set of doctors said “I think he will live, but if he doesn’t live, he will die”. So we have it here. I think it will get warmer, but if it doesn’t, it won’t get warmer. Looking back on ten years of climate research, climate blogs, climate reports, do we really know much more today than ten years ago? It seems that the amount of verbiage is a reflection of how much we don’t know.

  7. Nic Lewis, very interesting to read your reply.

    “Thirdly, most CMIP5 GCMs simulate too cold an Earth: their GMST is below the actual value, by up to several degrees”

    That is astonishing. Why are GCMs not tuned to reproduce the absolute value of the GMST? I would reckon that as obligatory.
    Anyway, if the narrowing and enhancement of the future GMSTs, as shown in BC17, is also valid for the other RCPs then their eventual deviation from observations is brought nearer in time.

  8. “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

    Variability in the CERES record is overwhelmingly natural in origin. Neglecting natural variability over simplifies the picture considerably. And net differences in reflected shortwave and emitted IR don’t really seem to be the point. In the first differential global energy equation the net of this is the energy out component.

    Δ(H&W) ≈ Ein – Eout

    The change in heat energy content of the planet – and the work done in melting ice or vaporizing water – is approximately equal to energy in less energy out. Energy in is variable and widely expected to decline this century; it has very little direct effect on the global energy budget but is amplified in the terrestrial system through relatively large changes in ocean and atmospheric circulation, and thus in energy out. The difference between energy in and energy out is the imbalance that is determined exclusively by considering ocean heat changes. As it stands – ocean heat varies considerably – and very rapidly – mostly as a result of changes in net energy out. Argo variability is mostly natural and the record is far too short to be definitive.

    There has not been a demonstration of any skill in disentangling natural variability from mooted global warming. Yet we are expected to believe that inadequate physics that neglects causality and is tuned to overly simplified parameters amounts to model skill. Projecting these forward has additional uncertainties caused by the dynamical complexity of the models.

    “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709.full

    The following shows thousands of solutions of a single model that derive from non-unique choices for parameters and boundary conditions – and that are constrained to the vicinity of the emergent property of global surface temperature. No single solution – such as are found in the CMIP inter-comparisons – captures the intrinsic ‘irreducible imprecision’ of climate models. There is no inevitability of high sensitivity emerging from a model constrained to emergent properties or not – it happens by chaotic chance.

    Nor will we find that climate is constrained to follow models. Climate will change abruptly and more or less extremely 3 or 4 times this century – just as it did in the last. For reasons that have to do with dynamical complexity and through mechanisms that are just starting to be understood.

    This entire discussion is about 6 theoretically impossible things before breakfast and is a waste of everyone’s time.

    • Does photosynthesis count at all in reducing energy out?

      • Plants are 90% efficient in turning sunlight into biomass.

        Over time plants can sequester carbon as organic material into living soils. Some 180Gt of carbon has been lost from agricultural soils and ecosystems in the modern era.

        Much of this can be returned to soils and ecosystem. Feed the world, reduce drought and flood, reclaim deserts and conserve biodiversity.

        e.g. https://kisstheground.com/

      • Curly question with multiple levels of answers.
        Sunlight energy is used to rearrange molecules but does not usually form new mass.
        Hence the overall energy that comes in should go out, I guess.
        A bit like GHG do not, in the overall scheme of things, does not actually reduce the amount of energy going out.
        The transit time through the GHG medium is just a little longer so the air is a little warmer.
        Like asking if solar panels reduce energy out if the energy goes into hydro storage.
        There is more energy there but a lot of heat energy is produced getting it there that is then dissipated [back to space].
        Who knows?

      • Plants synthesize organic chemical compounds using sunlight energy. A good portion of it is exuded from roots. In healthy soils – this feeds a symbiotic soil ecology that stores water in organic material and breaks down parent materials in an acid environment into nutrients plants can use.

        The net carbon stored in soils and vegetation can be increased with newer land practices – with a potential to restore in the order of 180Gt(C) lost from grazing and cropping soils and global terrestrial ecosystems in the modern era. For lots of good reasons.

      • angech: Sunlight energy is used to rearrange molecules but does not usually form new mass.
        Hence the overall energy that comes in should go out, I guess.

        The incoming energy is used in rearranging the molecules — or put differently, the incoming energy is “stored” in chemical bonds. Thus, the overall energy that comes in is equal to the energy stored in the bonds plus the energy that goes out (omitting for the time being the energy used in degrading rocks and such.)

        Quantitatively, I have not seen yet an assessment of how much of the incident solar energy is stored in the bonds of emergent structures such as sugar, cellulose, and bones. Perhaps someone knows of studies that I have missed.

        The energy “stored” in bonds long ago is available to us as we burn wood and petroleum products.

      • A bit like GHG do not, in the overall scheme of things, does not actually reduce the amount of energy going out. …

        There simply is no point. The best rails in the world, he will go off them.

    • Robert I. Ellison
      Thanks for prediction distribution graph.
      Have there been any fits to statistical distributions for that graph?
      I am seeking the standard deviations – Type A
      and contrasting systematic/systemic error Type B.
      PS What evidence and definition for your 95% plant efficiency in sunlight?
      I am used to seeing about 1.5%.

      • Most incident sunlight is absorbed by leaves. Some 5% is transmitted and 10% reflected. As the discussion was energy out – leaves are the great solar collector. Energy is gained and lost in the diurnal cycle. Mass accumulates heat – to hundreds of meters on land.

        http://plantsinaction.science.uq.edu.au/edition1/?q=content/1-1-2-light-absorption

        Some 9% is converted to glucose that can then be used for cell processes, to build cells or – as mentioned – to feed symbiotic soil organisms.

        And no I haven’t seen any pdf’s – the most that Rowland et al suggest is an even broader solution space than with the unjustifiable opportunistic IPCC ensembles.

      • David L. Hagen

        Robert I. Ellison. On photosynthetic efficiency see e.g., For efficient energy, do you want solar panels or biofuels?
        September 20, 2012 12.29am EDT

        This analysis indicates that a theoretical maximal photosynthetic energy conversion efficiency is 4.6% for C3 and 6% for C4 plants.

        Plants are limited by their dependence on photons that fall in the approximate waveband 400-700 nm, and by inherent inefficiencies of enzymes and biochemical processes and light saturation under bright conditions. Their respiration consumes 30-60% of the energy they make from photosynthesis, and of course they spend half of each day in the dark and need to use previous carbohydrate stores to keep them growing.

        Actual conversion efficiency is generally lower than the calculated potential efficiency. It’s around 3.2% for algae, and 2.4% and 3.7% for the most productive C3 and C4 crops across a full growing season. Efficiency reductions are due to insufficient capacity to use all the radiation that falls on a leaf. Plants’ photoprotective mechanisms, evolved to stop leaves oxidising, also reduce efficiency.

      • Yes – I understand that when all energy pathways are considered – and there are several – that conversion to plant mass is about what you cited for C3 and C4 plants.

  9. “Nor will we find that climate is constrained to follow models.”

    Much to the disgust of the believers.

  10. It would be helpful to the lay-reader for a post of this complexity to include an abstract. Thank you.

  11. In Brown’s response, we see that each of the 9 observational constraints reduced the spread, meaning that they all add skill individually. That is, they reduce the spread by penalizing models that do badly in that constraint which are likely to have been outliers in the predicted warming, otherwise the spread would not have been reduced. Given that all 9 have added skill by downweighting outliers in the predictand, it makes no sense to then remove 8 of those constraints as Lewis does. This implies that he does not care how badly the models did with these other constraints including any measure of shortwave radiation. He has removed a critical observational constraint on the models by adding back the outliers that did poorly with shortwave relative to Brown’s more comprehensive filtering and calls that a better result. If Lewis wants to disregard poor performances with these other 8 observations, he needs to explain why because that is the key difference from Brown and Caldeira who don’t want to ignore these observations in constraining their result. The whole point of BC17 was to use all these observations as a constraint.

  12. Jim D
    I think the point made is that one of the nine observations was of exceptional value.
    Hence adding any of the other 8 or all of the other 8 variables caused a reduction in accuracy.
    You yourself said
    ” If Lewis wants to disregard poor performances with these other 8 observations,” admitting that you understand that the ability of the other 8 variables is poor, compared to the ninth.
    The only advantage of using the other variables is that it enlarged the error and enabled BC17 to sneak their otherwise totally inaccurate viewpoint in.

    • Low climate sensitivity is accurate? It’s a physical system. You like the result that agrees with your politics. What would Feynman say?

      • High Climate Sensitivity is accurate?
        You cannot put a sensible limit on it and you claim it is accurate?
        It’s a physical system.
        What would Feynman say?

        For a start he would say that we are all wrong.
        Second he would point out the simple fact that life has existed on this planet for possibly 4 billion years.
        Life is very resilient.
        And that life as we know it can only exist in a narrow temperature band.
        That temperature band has an upper limit of compatibility which would be breached irrevocably many times if climate sensitivity was high.
        Hence, as he would say,
        “It ain’t”.

        Politically you have to like a high Climate Sensitivity, it is the result that agrees with your politics.
        Me, I don’t care. I go with science. Show me scientifically it is high and I am with you 100 %.

      • For instance you can quite happily go with a CS of 10, correct?
        Not that you believe it is 10, but it could be right?
        Is that really your view?
        Do you agree it is a sensible view?
        But you have to believe it is possible, right?

        Sigh, whatever happened to commonsense.

      • The range is 1.5 to 4.5. It looks like it is about to narrow.

        Sneak. Totally inaccurate. Go square to heck.

      • “The range is 1.5 to 4.5. It looks like it is about to narrow.”
        much happier.
        Some people do advocate a fat tail with much higher ranges and no thought.
        Sorry to misjudge you.

      • Lewis disregards, or prefers you don’t notice, that the second highest spread (OLR climatology) gives a higher than average temperature response with a prediction ratio of 1.2.

      • …second lowest spread ratio…

    • angech, he is measuring that “value” by using models. How do you justify that? It’s backwards. A different set of models would have given a different spread. It is not a fundamental property of the observations, but of the models used. It is better to treat the observations equally and see how the models fall using them all.

  13. They don’t cause a reduction in “accuracy” unless you limit that word to his one chosen observation. He disregards the other observations entirely instead of using them in the constraints. The accuracy of the models relative to these other observations is considerably degraded by this choice, but he either doesn’t care or plain forgets them. He has redefined what he means by “accuracy” in this process, and it is a sleight of hand people have not noticed because he doesn’t explicitly tell you this.

    • Furthermore you misunderstood what I meant by “disregards poor performance with these other 8 observations”. Some models do well and some do poorly in those, but Lewis disregards that distinction by throwing those constraints out entirely. BC17 wanted to maintain that distinction as a discriminating factor.

      • No, Jim D, you said what you said.
        No misunderstanding possible.
        And it is poor performance “by” these other 8 models, not as you tried to confuse the subject in your response “with” these other 8 models.
        A good linguistic trick you try there to make the good observation seem bad.
        Precisely because these other models are so bad.
        So horrible.
        That a really good fantastic correlation gets ignored.
        Why, Jim D, why?
        You said it,
        They are poor.

      • Your misunderstanding of what I wrote is not my problem. Nic Lewis disregards whether the model performance in those other measures is good or poor and uses them equally anyway in his supposed improved guess. Are you trying to defend throwing out observations in the evaluation? If a modeler did that you would be all over it.

      • He only gets a good correlation among the models when he throws out 8/9 of the observations. Do you like that approach? Why is the correlation based on one metric more important to you than one based on all nine? Is it not better to use all the observations and weight the model results based on their fit than to use the models to completely throw out observations? Nic does the latter.

      • Jim D
        Think about what you are saying, please.

        (BC17) “disregards whether the model performance in those other measures is good or poor and uses them equally,” not Nic Lewis

        “Are you trying to defend throwing out observations in the evaluation?”
        The observations have not and are never thrown out.

        The predictor ability, the correlation between one set of observations and another is what is on trial.
        You yourself said there were 8 poor correlations compared to the one good one.
        Should a scientist use models with a good correlation to an observational predictor or use models with poor correlations to observational predictors?
        The answer is obvious scientifically.
        Which is what Nic has stated.
        Why make it so hard on yourself?

        He is not throwing out the predictors at all, he is saying that 8 out of the 9 have limited predictive value compared to the 9th.

      • angech, you have not understood what I have said at all. By using one predictor, Nic Lewis has thrown out the other eight. A better word for “predictor” is “observational constraint”. Nic has thrown out 8 observational constraints on the model results leaving errors in those constraints unchecked and unpenalized in his warming result. He does not care how well or poorly the models did with OSR for example, only seasonal OLR. Conversely BC17 want to use all nine sets of observations as a constraint.
        Put it this way, if BC17 had just decided to use 6 constraints based on climatology and monthly variation, not seasonal, Nic’s method would have produced OLR climatology as the optimal with the narrowest spread. However that gives more warming than BC17’s estimate. Now you can see how arbitrary his selection method is. Would he have sent that round to the blogs? I doubt it. The second narrowest spread gives the opposite result when used alone.

      • Jim D, I think you will find the predictors are being used within models to relate simulated flux variables to simulated temp. The observational constraint comes later.

      • I think it relates model accuracy in those predictors relative to observations to the model warming, so the observations are already being used as a constraint at this stage.

      • That comes later using the model developed by relating GCM flux predictors to GCM forecast temp. It is at that earlier stage that the question of “best” model arises. If you want to use simulated flux predictors within GCMs to predict the GCMs’ future temps then you don’t need to use all the flux predictors to get the “best” result. Nothing to do with empirical constraints.

      • That’s equivalent to disregarding the accuracy relative to the other observations. Why would you want to do that?

      • Jim D, there are no observations, accurate or otherwise, at this stage of proceedings. Just fitting a model that relates GCM flux outputs to GCM temp projections.

      • The observations are used to obtain the new spreads. It’s subtle but it is there in the technical video on his blog page about it. When he fills in the new predictand histogram in his example, the distance is derived from the distance of the predictor of the quantity. Took me a few goes through the video to figure that out.

      • What you write is a non sequitur. Perhaps read the paper, it will help you understand.

      • I don’t know what you are claiming. Do you see that the observations are used in the spreads he presents?

      • I say there are no observations used at the point where the PLS regression is done, and you tell me they are used at the point when spreads are calculated.

        If you haven’t got the paper look at what the video says at about 1.04. The PLS process is done and dusted before the observations are introduced, and the point of the story is that if you do the PLS you don’t need all the other flux-related parameters (i.e. their weighting in the model going forward should be set to zero). Nothing to do with the observations.

      • No, I agree. The PLS regression is just done to produce the predictors and only depends on the models. The observations come in after that in the way those predictors are used to get the final observationally constrained predictand. Whether you want to call the OLR seasonal variation a predictor or an observational constraint is a bit grey. I call it an observational constraint.

      • So you agree with Nic Lewis that the PLS should proceed as designed, and if it only needs one flux parameter then so be it, and that model should be carried through to that observational comparisons (if one wants to use this particular method).

      • No it needs to use more than one observation for significance. Nic is also ignoring the PLS for the other eight variables which have no use without the corresponding observation. One observation is not trustworthy as it risks a significant amount of chance. You at least need to factor in shortwave parameters as independent checks. Using nine semi-independent factors is even better for robustness. As I mentioned somewhere else, the second narrowest distribution gives the opposite result to Nic’s, i.e. warmer than BC17. Make what you will of that.

      • Jim, forget reality, we’re dealing with finding a simple model to replicate the relationship between the flux values produced by GCMs and the increase in temps they produce.

        It so happens that the relationship is best described using one, maybe two, of the parameters. Adding the others doesn’t help explain the projected temp (they have zero weight).

        At that point the rest of the method falls apart. They should have said at that point what you are effectively saying in your most recent comment ‘most of the flux parameters aren’t going to have any weight in the balance of our experiment, and that isn’t what we had in mind’.

        They should have then explored the reasons why that had happened. Instead they exploited a failure in the PLS that gave a solution that used all the parameters without stopping to think, and proceeded on their merry way.

        Do you understand?

      • No, I don’t. The second narrowest spread giving the opposite result is what should have given Lewis pause about the robustness of using just one. Had BC17 not selected seasonal variation, and just used 6 predictors, Lewis would have used that one instead (OLR climatology) and reported that BC17 underestimated the warming. Brown’s reply emphasized how robust their result was to ways of combining predictors.
        Put another way, the models more selected by the seasonal OLR may have done poorly with the OSR measures, and Lewis would not account for that problem at all by ignoring the other eight measures. It distorts the result, and not in a good way, to ignore important and highly relevant observational constraints.

      • Jim, you are confusing yourself by keeping thinking about what happened after the PLS was done. Stop there. The rest is unsound. Try thinking about the method just up to that point and forget what BC17 might have wished to demonstrate.

        On the intended methodology it was the PLS that selected what parameters to use in the rest of the study, not the protagonists. And the fact it produced the contradictory results Lewis and you mention should have stopped the study in its tracks. It caused Lewis to pause, but apparently not the authors (or you).

      • The whole point of the BC17 paper was to inform the temperature change given by models with a measure of how they are doing in relevant parameters today which requires the observations to be used as constraints. This is not an option to be thrown out. Lewis himself still uses select observations, and even select regions of them, in his one-parameter method that he advances, so I don’t know where you draw the line. Also, the spread is a function of the models used, not nature. You can arbitrarily reduce the spread in other parameters by throwing out select models at the outset. The results were not contradictory either because all the parameters reduced the spread for the RCP8.5 case, showing that each had value by itself in narrowing the warming.

      • Jim, it seems you are not very familiar with the nature of scientific study. The point of BC17 was to use a method to investigate observationally constrained projections. Before they even got to the observational bit they went off the rails. End of study. Start again.

      • Lewis only had praise for their chosen methods. You have not made your criticism very clear at all, and I am not going to try and guess what makes you so unhappy.

      • Jim, now you are definitely trolling. Lewis said their method wasn’t correctly applied in this case and in any case wasn’t appropriate. I’m sure even you noticed that.

      • How do you explain that Lewis’s preferred data gives higher sensitivities for all the other RCPs? It completely lacks robustness by cherry-picking one item from a table of 36 elements and saying only use that and throw the rest away because they don’t agree with what he wants as a result. It’s a cherry pick, and should be called out as such. The second highest spread in the table gives the opposite result (as do the 3rd, 4th and 5th). How would he explain that, if he even knew that? It is a fatal flaw in his reasoning.
        Also PLS without the observation is fairly useless because you need the observation to get the prediction ratio.

      • Jim, you’re trolling again. Even you should understand by now that everything you mention here is unsound because it is built on an unsound use of PLS and that Lewis only used it to demonstrate the unsoundness.

      • Lewis used it to demonstrate his result, and he said nothing about unsoundness as he even adopted it himself. In fact he considered his result robust and more skillful just based on spread and without considering other low spreads and what they mean in isolation even as they contradict his result, a fatal oversight for his method.
        BC17 achieved what they wanted under their hypothesis by showing that for models that do better in the current climate, warming is greater than the ensemble mean that equally weights poorer models by their observation-based measure. This is not a proof of greater warming, but is a valid way to discriminate among model projections based on observations given a wide range of model skills in the current climate. It is a statistical way of saying which ones they trust more. You might have reasons not to discriminate based on those observations, and it would be up to you how to discriminate. They give their rationale and it looks fine enough.

      • “But I think that the underlying problem is the non-satisfaction of a key assumption of the PLS method: that the predictor variables are free of uncertainty.”

        Lewis above describing the unsoundness of the method, and for that reason as I’m sure you know BC17 didn’t achieve anything useful, let alone what you claim.

      • Is he referring to the observations being uncertain? Does he not trust the means and variations in OLR, OSR and N? Does he regard the observations as completely unusable even with 15-year averages? Can we guess what Lewis means here, or is he just casting general aspersions in the way he does.

      • “Is he referring to the observations being uncertain?”
        No, as you now know this has nothing to do with observations, it’s what is happening in the models.

        “Can we guess what Lewis means here, or is he just casting general aspersions in the way he does.”
        No, I think that’s more in your style as this comment exemplifies.

      • Yes, I see that, if that is all he has, it is a weak argument that because you have 15 years of observations, you need more than 15 years of model data to compare with it. Both have similar uncertainties, but the signals were there with each of the nine variables correlating with the warming. This tells us that the data period was sufficient.

      • Jim, sorry you are burbling and becoming incomprehensible. What you have learned (slowly) is that PLS doesn’t work because it produces contradictory results, and this invalidates BC17’s study.

        The next step in your education was to move on to possible reasons for this failure. I can see that this is getting well beyond your ability to understand (but perhaps not troll).

      • I think you have seen how Lewis’s selection method of seasonal OLR was unsound. Narrowness is not the sole measure of skill with these observations. It is deeply flawed and seriously misguided in its reasoning. Skill is increased by using all available observational constraints, not by eliminating most of them as Lewis does.

    • Yes – models may be tuned to several approximately known variables – or even fortuitously mimic more or less nonlinear climate variability with Lorenz variability in climate models. Hurst showed chaos in Nile River stage records over a millennium and a half – so too in climate. Regimes and sudden more or less extreme shifts, perturbations of quasi standing waves in Earth’s spatio-temporal chaotic flow field.

      But once they are tuned – then feasible – with initial conditions within the bounds of observation error – model solutions continue to exponentially diverge from each other over the period of simulation.

      That’s what physics says.

  14. ”Given current uncertainties… the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.”

    True, true, Western academia needs to do more to get the politics out of climate science and call out global warming delusionists who for years have with the help of MSM been all too effective at marginalizing the voices of reason from their side of the political divide.

    Despite the statements of numerous scientific societies, the scientific community cannot claim any special expertise in addressing issues related to humanity’s deepest goals and values… Any serious discussion of the changing climate must begin by acknowledging not only the scientific certainties but also the uncertainties, especially in projecting the future. ~Dr. Steven Koonin

  15. At his 12/25 5:12 pm post above Robert Ellison said “This entire discussion is about 6 theoretically impossible things before breakfast and is a waste of everyone’s time.” I agree entirely. The harsh reality is that the output of the climate models on which the IPCC relies for its dangerous global warming forecasts has no necessary connection to reality because of their structural inadequacies. See Section 1 at my blogpost
    https://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html and the linked paper
    Here is a quote:
    “The climate model forecasts, on which the entire Catastrophic Anthropogenic Global Warming meme rests, are structured with no regard to the natural 60+/- year and, more importantly, 1,000 year periodicities that are so obvious in the temperature record. The modelers approach is simply a scientific disaster and lacks even average commonsense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years beyond an inversion point. The models are generally back-tuned for less than 150 years when the relevant time scale is millennial. The radiative forcings shown in Fig. 1 reflect the past assumptions. The IPCC future temperature projections depend in addition on the Representative Concentration Pathways (RCPs) chosen for analysis. The RCPs depend on highly speculative scenarios, principally population and energy source and price forecasts, dreamt up by sundry sources. The cost/benefit analysis of actions taken to limit CO2 levels depends on the discount rate used and allowances made, if any, for the positive future positive economic effects of CO2 production on agriculture and of fossil fuel based energy production. The structural uncertainties inherent in this phase of the temperature projections are clearly so large, especially when added to the uncertainties of the science already discussed, that the outcomes provide no basis for action or even rational discussion by government policymakers. The IPCC range of ECS estimates reflects merely the predilections of the modellers – a classic case of “Weapons of Math Destruction” (6).
    Harrison and Stainforth 2009 say (7): “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies. The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality ……. that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.” A new forecasting paradigm is required. Here is the abstract of the actual paper
    “The coming cooling: usefully accurate climate forecasting for policy makers.
    Dr. Norman J. Page
    Email: norpag@att.net
    DOI: 10.1177/0958305X16686488
    Energy & Environment
    ABSTRACT
    This paper argues that the methods used by the establishment climate science community are not fit for purpose and that a new forecasting paradigm should be adopted. Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths. It is not possible to forecast the future unless we have a good understanding of where the earth is in time in relation to the current phases of those different interacting natural quasi periodicities. Evidence is presented specifying the timing and amplitude of the natural 60+/- year and, more importantly, 1,000 year periodicities (observed emergent behaviors) that are so obvious in the temperature record. Data related to the solar climate driver is discussed and the solar cycle 22 low in the neutron count (high solar activity) in 1991 is identified as a solar activity millennial peak and correlated with the millennial peak -inversion point – in the UAH temperature trend in about 2003. The cyclic trends are projected forward and predict a probable general temperature decline in the coming decades and centuries. Estimates of the timing and amplitude of the coming cooling are made. If the real climate outcomes follow a trend which approaches the near term forecasts of this working hypothesis, the divergence between the IPCC forecasts and those projected by this paper will be so large by 2021 as to make the current, supposedly actionable, level of confidence in the IPCC forecasts untenable.”
    For cooling forecast see Fig 12

  16. Just how hard is all of this?

    Take all global land (location) readings that are near or surrounded by concrete type materials, and put those on one side of a sheet of paper, list A.

    Place all other land readings on the other side of the paper, list B.

    Average each. The A list is known to be higher, we agree. So subtract average B from A and adjust downward all A list values by that difference and now determine the average of A’ and B.

    Compare this average with the official published global temperature and continue discussion.
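
    A minimal sketch of that adjustment, with made-up station values (the numbers and the resulting offset are purely illustrative):

        import numpy as np

        list_a = np.array([15.2, 16.1, 14.8, 15.9])          # readings near concrete-type surfaces
        list_b = np.array([13.9, 14.4, 13.6, 14.1, 14.0])     # all other land readings

        offset = list_a.mean() - list_b.mean()       # how much warmer list A runs on average
        list_a_adj = list_a - offset                 # shift list A down by that difference
        combined = np.mean(np.concatenate([list_a_adj, list_b]))

        print(round(offset, 2), round(combined, 2))  # compare 'combined' with the published figure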

  17. Dr Brown has a post on his blog
    Do ‘propagation of error’ calculations invalidate climate model projections of global warming? Posted on January 25, 2017 by ptbrown31
    My thoughts on claims made by Dr. Patrick Frank (SLAC) on the validity of climate model projections of global warming:

    Fascinating reading on the concepts of errors, means, and projection into the future.

    Pat Brown seems to be trying to do a reverse Pat Frank

    The hubris is writ large.

    • Why are you linking to something where Pat Frank gets demolished? I thought this was a skeptical blog. Anyway, recommended viewing.
      https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/

      • “You can tell that Pat Frank is wrong by the fact that these simulations started around 1850 and had no such divergence by 2000,”

        Off topic as we are I am happy to oblige your concerns.
        The models have to follow history. They did not run them from 1850, they copied the observations from 1850.
        Think about it.
        Pinatubo for example. It is in there, in the models.
        Can models predict a volcano accurately in time?
        No.
        But there it is.
        So your first comment is a deliberately wrong smokescreen interpretation, attacking Pat Frank for something he did not even say and which is physically untrue, as just explained.
        It is what I call a Mosher moment as I have explained to him in the past.
        When using observations you have to expect a little bit of variation, some results that do not agree with your theory.
        When they all line up like ducks in a row you have been had, or like Cowtan and Way you are selling someone the Brooklyn Bridge.

        The second point is much simpler.
        Errors propagate.
        The fact that there is a large error in the estimation of cloud forcing means that the tiny changes in temperature due to tiny CO2 changes cannot be found in the orders-of-magnitude larger noise in any one year. If you propagate this over 20 years you can never attribute small temperature changes to CO2 (a toy illustration of this propagation argument follows below this comment).

        Now I don’t like the argument but I cannot refute it.
        He does not say that CO2 warming is not real by the way.
        He just says that, statistically, the uncertainty in temperature arising from our poor knowledge of clouds is such that we cannot tie warming to CO2 by observation.

        Bit of a blow to me too, but it should make the Chiefio happy.
        That is what he has been trying to get you to see.
        Perhaps we could ask him what would be convincing if not atmospheric temp measurements?
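        Purely to make the shape of that propagation argument concrete, here is a toy calculation under the assumption described above – a fixed per-year uncertainty treated as uncorrelated, so it adds in quadrature. The numbers are invented; they are not Pat Frank’s figures.

        ```python
        # Toy propagation-of-error sketch (illustrative numbers only).
        import math

        sigma_step = 0.4        # assumed per-year temperature uncertainty from cloud-forcing error (K)
        signal_per_year = 0.02  # assumed CO2-driven warming per year (K)

        for years in (1, 5, 10, 20):
            # uncorrelated per-step errors add in quadrature
            propagated = sigma_step * math.sqrt(years)
            signal = signal_per_year * years
            print(f"{years:2d} yr: signal {signal:4.2f} K, propagated uncertainty ±{propagated:4.2f} K")
        ```

        With these made-up numbers the propagated uncertainty after 20 years (about ±1.8 K) dwarfs the 0.4 K signal, which is the structure of the claim being described – whether the uncorrelated-error assumption is justified is the disputed step.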

      • They run the models the same way from 1850 as they do beyond 2000. The only input is external forcing like GHG changes, volcanoes, aerosols. Do you agree that the forcing keeps the temperature on track? If so, why not also in the future? I think you are starting to see that the forcing controls the temperature and prevents it from drifting, which is the point. The only question is the magnitude of the temperature change per forcing change, and the model answers that because it is not an input.
        I am reading the comments at that link and Pat Frank seems highly confused by units, insisting that the average forcing should be W/m2/year for some reason. Weird stuff.

      • “They run the models the same way from 1850 as they do beyond 2000. ”

        Simply not true.
        Proof?

        “The only input is external forcing like GHG changes, volcanoes, aerosols.”

        The observed past can have estimates of real events put into the model, hence the match.
        The future projections can never put real natural variability in, just an averaged guess with huge error ranges.
        They should in fact be smooth lines, except for all the bumpy wriggles they put in to make their predictions look real instead of computer-generated.
        Weird stuff.

      • angech, which part don’t you understand? The only input is the external forcing. They don’t put internal natural variability in, so El Ninos don’t occur in the right years in the model runs. The ocean and atmosphere are free running. Therefore the past is done the same as the future apart from external forcing, and for the future they have to guess the change in GHGs and aerosols for example, and the future doesn’t have the large volcanoes of the past. That’s the only way to tell them apart.

      • “They don’t put internal natural variability in,”
        Wrong.
        That is what gives the wriggles in a GCM.
        Otherwise they would only have smooth curves.
        They throw figurative dice, Jim D, just like in dungeons and dragons.

        “so El Ninos don’t occur in the right years in the model runs. ”
        Wrong
        They do not occur at all. They do not need El Niño or La Niña in a GCM.
        As you correctly say
        “the ocean and atmosphere are free running. ”

        “Therefore the past is done the same as the future ”
        Wrong.
        The models put in the past observations – volcanoes, El Niños and all – because they knew they were there.
        The future starts from when the models start actually running and predicting,
        or, as you say, guessing:
        “for the future they have to guess the change in GHGs and aerosols for example, and the future doesn’t have the large volcanoes of the past.”

        “That’s the only way to tell them apart.”
        Bright boy, that is one of the ways, you are getting it at last.

      • You seem to be so misinformed in how they run GCMs that I am not sure it is worth correcting you. Where do you get this stuff from? They only use external forcing to drive the GCMs (GHGs, aerosols, volcanoes, solar). They have wiggles because the ocean and atmosphere are chaotic both in the model and in reality. They have El Ninos to different degrees, and some are even good but they don’t correspond by date to real ones (look at their wiggles). So, yes, the forcing is the only difference between the past and future parts of their runs, and maybe you got there in the end, but cease thinking they add in natural internal variability. That is not even possible to do in a coupled model because its purpose is to produce that part of the variation itself.

      • A win, Jim D.
        There are El Niños in some GCMs.
        Not put in by the programmers, just evolving from the data.
        Not in the right places or times but who cares about them being wrong.
        They exist.
        Nick Stokes, explaining why the pause which didn’t exist, existed:
        “So that is why a GCM, in its normal running, can’t predict an El Nino (or pause*) Good ones do El Nino’s well, but with no synchronicity with events on Earth.”
        * comment in brackets my paraphrase, not Nick’s.
        Interestingly, a number of commentators who vehemently deny the existence of a pause discuss why GCMs cannot predict one, sort of implying that it did exist.
        My apologies.

      • Pauses are statistical possibilities. Some GCMs had them, not at the right time, but they were there. You don’t see them when you average the temperature over a decade and compare with the previous decade either in GCMs or in nature since the CO2 trend became large. When you define climate by a 30-year temperature, you only see a steady trend in the observations, nearly 0.2 C per decade in recent decades.
        http://woodfortrees.org/plot/gistemp/mean:120/mean:240
        This only happens in GCMs that account for increasing GHGs. Go figure.
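        As a rough illustration of that decade-averaging point, here is a sketch using a synthetic trend-plus-noise series rather than real data (trend and noise levels are assumed):

        ```python
        # Synthetic annual temperatures: linear trend plus interannual noise.
        import random

        random.seed(0)
        trend_per_year = 0.018   # assumed ~0.18 K/decade trend
        temps = {y: trend_per_year * (y - 1970) + random.gauss(0, 0.1)
                 for y in range(1970, 2020)}

        def decade_mean(start):
            return sum(temps[y] for y in range(start, start + 10)) / 10.0

        for start in range(1980, 2020, 10):
            diff = decade_mean(start) - decade_mean(start - 10)
            print(f"{start}s minus {start - 10}s: {diff:+.2f} K")
        ```

        With a ~0.18 K/decade trend and ~0.1 K interannual noise, each decade comes out warmer than the one before even though individual years wiggle.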

      • Using the PDO as a proxy for ENSO, this is what the ENSO trend since 1900 looks like:

        The models make no attempt to time ENSO, so describing their ENSO timing as “wrong” is just political, red-in-the-face gamesmanship. There would be an advantage if shorter-term models could time ENSO, as a lot rides on its timing. Big bucks. In that respect an ENSO in 2016 would have been very useful info around 2012. But in terms of climate, saying 2016 warming was caused by an El Niño is getting oneself completely lost in the weeds, and there are a ton of people completely lost in the weeds. The models do not have to time 2016. In terms of climate (2100), that would just indicate somebody spent a ton of money on useless window dressing.

      • JCH:

        What’s with the JISAO PDO for November? I hope they didn’t go out of business.

      • They are the PDO, so no, they are not going out of business.

    • Mmmm. Pat Frank had bad reviews at ATTP, but in the discussion at Pat Brown’s, which was pretty civil for a disagreement, I felt his (Frank’s) replies won the day.
      You did not.
      A bit like that “what colour is the dress” argument. We naturally see different results from the same input.
      Guess we must be models then.

      • You can tell that Pat Frank is wrong by the fact that these simulations started around 1850 and had no such divergence by 2000, and also multi-model ensembles of simulations don’t diverge as much as he expects between 2000 and 2100. Where do you think he went wrong on this? The facts just don’t support him. In reality the climate and models are constrained by the forcing and can’t drift away from an equilibrium so much unless their energy balance is completely out of whack. His wrong assumption is that the errors are uncorrelated over time when in fact they tend to cancel due to a restoring force towards equilibrium.
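        To make that distinction concrete, here is a toy sketch (assumed, illustrative numbers) contrasting errors that accumulate as an unconstrained random walk with the same errors subject to a restoring force toward equilibrium:

        ```python
        # Random walk vs. damped (restoring-force) error accumulation.
        import random

        random.seed(1)
        sigma = 0.1      # assumed per-step error (arbitrary units)
        damping = 0.2    # assumed fractional relaxation toward equilibrium per step
        steps = 100

        walk = damped = 0.0
        for _ in range(steps):
            shock = random.gauss(0, sigma)
            walk += shock                              # uncorrelated errors: spread grows like sqrt(N)
            damped = (1 - damping) * damped + shock    # restoring force: spread stays bounded

        print(f"random walk after {steps} steps: {walk:+.2f}")
        print(f"with restoring force after {steps} steps: {damped:+.2f}")
        ```

        The random-walk spread grows like √N (about ±1.0 here after 100 steps), while the damped version stays bounded (around ±0.17); which of the two pictures applies to model error is exactly what is in dispute above.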

      • Model solutions seem to exponentially diverge over time – as theoretically predicted. I know – let’s do the experiment.
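        For what it’s worth, a minimal sketch of that exponential divergence using the Lorenz-63 toy system (crude forward-Euler integration; purely illustrative, not a climate model):

        ```python
        # Twin-trajectory experiment: two nearly identical starts diverge exponentially.
        def step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
            x, y, z = state
            return (x + dt * s * (y - x),
                    y + dt * (x * (r - z) - y),
                    z + dt * (x * y - b * z))

        traj1 = (1.0, 1.0, 1.0)
        traj2 = (1.0 + 1e-8, 1.0, 1.0)   # same start, perturbed by 1e-8

        for n in range(1, 6001):
            traj1, traj2 = step(traj1), step(traj2)
            if n % 1000 == 0:
                sep = sum((u - v) ** 2 for u, v in zip(traj1, traj2)) ** 0.5
                print(f"t = {n * 0.005:4.1f}: separation {sep:.2e}")
        ```

        Trajectories starting 10⁻⁸ apart separate by many orders of magnitude within a few tens of model time units and then saturate at the size of the attractor.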

  18. Comparison of the Atlantic Meridional Overturning Circulation between 1960 and 2007 in six ocean reanalysis products

    In this work, we explicitly consider the decadal trends because reanalyses, like forced ocean model simulations, are susceptible to spurious climate drift. Various strategies are used to mitigate the drift (see Table 1), and while these are likely successful to some degree, there is no obvious means of discriminating between spurious climate drift and actual changes in the climate system that we wish to reconstruct.

    If spurious drifts make these models too unreliable for assessing past trends, why are they useful for projecting 21st century trends? If it’s impossible to distinguish between drift and real effect in an ocean hindcast, how is it easier with long-term future coupled atmosphere+ocean runs? Intuitively those would seem much more complex and prone to drifts and other errors.
