Brown and Caldeira: A closer look shows global warming will not be greater than we thought

by Nic Lewis

A critique of a recent paper by Brown and Caldeira, published in Nature, that predicted greater than expected global warming.

Last week a paper predicting greater than expected global warming, by scientists Patrick Brown and Ken Caldeira, was published by Nature.[1] The paper (henceforth referred to as BC17) says in its abstract:

“Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (−1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change.”

Patrick Brown’s very informative blog post about the paper gives a good idea of how they reached these conclusions. As he writes, the central premise underlying the study is that climate models that are going to be the most skilful in their projections of future warming should also be the most skilful in other contexts like simulating the recent past. It thus falls within the “emergent constraint” paradigm. Personally, I’m doubtful that emergent constraint approaches generally tell one much about the relationship to the real world of aspects of model behaviour other than those which are closely related to the comparison with observations. However, they are quite widely used.

In BC17’s case, the simulated aspects of the recent past (the “predictor variables”) involve spatial fields of top-of-the-atmosphere (TOA) radiative fluxes. As the authors state, these fluxes reflect fundamental characteristics of the climate system and have been well measured by satellite instrumentation in the recent past – although (multi) decadal internal variability in them could be a confounding factor. BC17 derive a relationship in current generation (CMIP5) global climate models between predictors consisting of three basic aspects of each of these simulated fluxes in the recent past, and simulated increases in global mean surface temperature (GMST) under IPCC scenarios (ΔT). Those relationships are then applied to the observed values of the predictor variables to derive an observationally-constrained prediction of future warming.[2]

The paper is well written, the method used is clearly explained in some detail and the authors have archived both pre-processed data and their code.[3] On the face of it, this is an exemplary study, and given its potential relevance to the extent of future global warming I can see why Nature decided to publish it. I am writing an article commenting on it for two reasons. First, because I think BC17’s conclusions are wrong. And secondly, to help bring to the attention of more people the statistical methodology that BC17 employed, which is not widely used in climate science.

What BC17 did

BC17 uses three measures of TOA radiative flux: outgoing longwave radiation (OLR), outgoing shortwave radiation (OSR) – that is, reflected solar radiation – and the net downward radiative imbalance (N).[4] The aspects of each measure used as predictors are its climatology (the 2001-2015 mean), the magnitude (standard deviation) of its seasonal cycle, and its monthly variability (standard deviation of its deseasonalized monthly values). These are all cell mean values on a grid with 37 latitudes and 72 longitudes, giving nine predictor fields – three aspects (climatology, seasonal cycle and monthly variability) for each of three variables (OLR, OSR and N) – each with 2,664 values. So, for each climate model there are up to 23,976 predictors of GMST change.
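
To make the construction concrete, here is a minimal sketch in Python (the authors’ own code is in Matlab) of how the three aspects might be computed for one flux measure. The array name and shape are illustrative assumptions, not taken from BC17’s code.

    import numpy as np

    def predictor_aspects(flux):
        # flux: hypothetical array of shape (180, 37, 72) - 180 monthly fields
        # (2001-2015) of one TOA measure (e.g. OLR) on the 37 x 72 grid
        years = flux.reshape(15, 12, 37, 72)        # years x calendar months
        climatology = flux.mean(axis=0)             # 2001-2015 mean per cell
        annual_cycle = years.mean(axis=0)           # mean annual cycle per cell
        seasonal_cycle = annual_cycle.std(axis=0)   # magnitude of the seasonal cycle
        anomalies = (years - annual_cycle).reshape(180, 37, 72)
        monthly_var = anomalies.std(axis=0)         # deseasonalized monthly variability
        return climatology, seasonal_cycle, monthly_var

Applying this to each of OLR, OSR and N and flattening each 37 x 72 field yields the nine predictor fields of 2,664 values each, 23,976 in total.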

BC17 consider all four IPCC RCP scenarios and focus on mid-century and end-century warming; in each case there is a single predictand, ΔT. They term the ratio of the ΔT predicted by their method to the unweighted mean of the ΔT values actually simulated by the models involved the ‘Prediction ratio’. They assess predictive skill as the ratio of the root-mean-square of the differences between each model’s predicted ΔT and its actual (simulated) ΔT to the standard deviation of the simulated changes across all the models. They call this the Spread ratio. For this purpose, each model’s predicted ΔT is calculated using the relationship between the predictors and ΔT determined using only the remaining models.[5]
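
In code, the two diagnostics might look like the following sketch. Here fit_predict stands in for whatever regression is used (PLS in BC17) and, like x_obs, is a hypothetical name rather than one from the paper’s code.

    import numpy as np

    def prediction_and_spread_ratios(X, dT, x_obs, fit_predict):
        # X: (n_models, n_predictors); dT: (n_models,) simulated warming;
        # x_obs: (n_predictors,) observed values of the predictors
        n = len(dT)
        pred = np.empty(n)
        for i in range(n):                          # hold-one-out cross-validation
            keep = np.arange(n) != i
            pred[i] = fit_predict(X[keep], dT[keep], X[i:i + 1])[0]
        rmse = np.sqrt(np.mean((pred - dT) ** 2))
        spread_ratio = rmse / dT.std()              # below 1 indicates skill
        constrained = fit_predict(X, dT, x_obs[None, :])[0]
        prediction_ratio = constrained / dT.mean()  # uplift over the raw model mean
        return prediction_ratio, spread_ratio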

As there are more predictors than data realizations (with each CMIP5 model providing one realization), using them directly to predict ΔT would involve massive over-fitting. The authors avoid over-fitting by using a partial least squares (PLS) regression method. PLS regression is designed to compress as much as possible of the relevant information in the predictors into a small number of orthogonal components, ranked in order of (decreasing) relevance to predicting the predictand(s), here ΔT. The more PLS components used, the more accurate the in-sample predictions will be, but beyond some point over-fitting will occur. The method involves eigen-decomposition of the cross-covariance, in the set of models involved, between the predictors and ΔT. It is particularly helpful when there are a large number of collinear predictors, as here. PLS is closely related to statistical techniques such as principal components regression and canonical correlation analysis. The number of PLS components to retain is chosen having regard to prediction errors estimated using cross-validation, a widely-used technique.[6] BC17 illustrates use of up to ten PLS components, but bases its results on using the first seven PLS components, to ensure that over-fitting is avoided.
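
For readers who want to experiment, scikit-learn provides a PLS regression implementation that can stand in for the authors’ Matlab PLS module. The sketch below is illustrative rather than a replication of BC17’s code; pls_fit_predict could serve as the fit_predict argument in the earlier sketch.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def pls_fit_predict(X_train, y_train, X_test, n_components=7):
        # Compress the many collinear predictors into a few orthogonal
        # components ranked by relevance to the predictand, then predict
        pls = PLSRegression(n_components=n_components, scale=True)
        pls.fit(X_train, y_train)
        return pls.predict(X_test).ravel()

    def cv_rmse(X, dT, n_components):
        # Hold-one-out cross-validated prediction error; plotted against
        # n_components, this turns upward once over-fitting sets in
        n = len(dT)
        err = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            err[i] = pls_fit_predict(X[keep], dT[keep], X[i:i + 1],
                                     n_components)[0] - dT[i]
        return np.sqrt(np.mean(err ** 2))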

The main result of the paper, as highlighted in the abstract, is that for the highest-emissions RCP8.5 scenario predicted warming circa 2090[7] is about 15% higher than the raw multimodel mean, and has a spread only about two-thirds as large as that for the model-ensemble. That is, the Prediction ratio in that case is about 1.15, and the Spread ratio is about 2/3. This is shown in their Figure 1, reproduced below. The left-hand panels all involve RCP8.5 2090 ΔT as the predictand, but with the nine different predictors used separately. The right-hand panels involve different predictands, with all predictors used simultaneously. The Prediction ratio and Spread ratio for the main-result RCP8.5 2090 ΔT case highlighted in the abstract are shown by the solid red lines in panels b and d respectively, at an x-axis value of 7 PLS components.

Figure 1. Sensitivity of results to predictors or predictands used and to the number of PLS components used. a, Prediction ratios for the nine predictor fields, each individually targeting the ΔT 2090-RCP8.5 predictand. b, As in a but using all nine of the predictor fields simultaneously while switching the predictand that is targeted. c, d, As in a and b respectively, but showing the Spread ratios using hold-one-out cross-validation.

Is there anything to object to in this work, leaving aside issues with the whole emergent constraint approach? Well, it seems to me that Figure 1 shows their main result to be unsupportable. Had I been a reviewer I would have recommended against publishing, in the current form at least. As it is, this adds to the list of Nature-brand journal climate science papers that I regard as seriously flawed.

Where BC17 goes wrong, and how to improve its results

The issue is simple. The paper is, as it says, all about increasing skill: making better projections of future warming with narrower ranges by constraining model projections with observations. In order to be credible, and to narrow the projection range, the predictions of model warming must be superior to a naïve prediction that each model’s warming will be in line with the multimodel average. If that is not achieved – the Spread ratio is not below one – then no skill has been shown, and therefore the Prediction ratio has no credibility. It follows that the results with the lowest Spread ratio – the highest skill – are prima facie most reliable and to be preferred.

Figure 1 provides an easy comparison of the skill of different predictors in predicting ΔT for the RCP8.5 2090 case. That case, as well as being the one dealt with in the paper’s abstract, involves the largest warming and hence the highest signal-to-noise ratio. Moreover, it has data for nearly all the models (36 out of 40). Accordingly, RCP8.5 2090 is the ideal case for skill comparisons. Henceforth I will be referring to that case unless stated otherwise.

Panel d shows, as the paper implies, that use of all the predictors results in a Spread ratio of about 2/3 with 7 PLS components. The Spread ratio falls marginally to 65% with 10 PLS components. The corresponding Prediction ratios are 1.137 with 7 components and 1.141 with 10 components. One can debate how many PLS components to retain, but it makes very little difference whether 7 or 10 are used and 7 is the safer choice. The 13.7% uplift in predicted warming in the 7 PLS components case used by the authors is rather lower than the “about 15%” stated in the paper, but no matter.

The key point is this. Panel c shows that using just the OLR seasonal cycle predictor produces a much more skilful result than using all predictors simultaneously. The Spread ratio is only 0.53 with 7 PLS components (0.51 with 10). That is substantially more skilful than when using all the predictors – a 40% greater reduction below one in the Spread ratio. Therefore, the results based on just the OLR seasonal cycle predictor must be considered more reliable than those based on all the predictors simultaneously.[8] Accordingly, the paper’s main results should have been based on them in preference to the less skilful all-predictors-simultaneously results. Doing so would have had a huge impact. The uplift in projected warming implied by the RCP8.5 2090 Prediction ratio using the OLR seasonal cycle predictor is under half that using all predictors: 6%, not “about 15%”.

Of course, it is possible that an even better predictor, not investigated in BC17, might exist. For instance, although use of the OLR seasonal cycle predictor is clearly preferable to use of all predictors simultaneously, some combination of two predictors might provide higher skill. It would be demanding to test all possible cases, but as the OLR seasonal cycle is far superior to any other single predictor it makes sense to test all two-predictor combinations that include it. I accordingly tested all combinations of OLR seasonal cycle plus one of the other eight predictors, as sketched below. None of them gave as high a skill (as low a Spread ratio) as just using the OLR seasonal cycle predictor, but they all showed more skill than the use of all nine predictors simultaneously, save to a marginal extent in one case.
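
The combination test is straightforward to express. In this sketch, fields is an assumed dict mapping each predictor field’s name to its (n_models x n_cells) array, and cv_rmse is as sketched earlier; none of these names come from BC17’s code.

    import numpy as np

    target = "OLR seasonal cycle"
    baseline = cv_rmse(fields[target], dT, n_components=7) / dT.std()
    for name in fields:
        if name == target:
            continue
        X_pair = np.hstack([fields[target], fields[name]])  # two-predictor combination
        ratio = cv_rmse(X_pair, dT, n_components=7) / dT.std()
        print(f"{target} + {name}: Spread ratio {ratio:.2f} "
              f"(OLR seasonal cycle alone: {baseline:.2f})")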

In view of the general pattern of more predictors producing a less skilful result, I thought it worth investigating using just a cut-down version of the OLR seasonal cycle spatial field. The 37 latitude, 72 longitude grid provides 2,664 variables, still an extraordinarily large number of predictors when there are only 36 models, each providing one instance of the predictand, to fit the statistical model. It is thought that most of the intermodel spread in climate sensitivity, and hence presumably in future warming, arises from differences in model behaviour in the tropics.[9] Therefore, I examined the effect of excluding higher latitudes from the predictor field.

I tested use of the OLR seasonal cycle over the 30S–30N latitude zone only, thereby reducing the number of predictor variables to 936 – still a large number, but under 4% of the 23,976 predictor variables used in BC17. The Spread ratio declined further, to 51% using 7 PLS components.[10] Moreover, the Prediction ratio fell to 1.03, implying a trivial 3% uplift of observationally-constrained warming.
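
Restricting the field to the tropics amounts to a simple mask on the latitude axis. In this sketch, field is a hypothetical (n_models, 37, 72) array of OLR seasonal-cycle values on the assumed 5-degree grid running from 90S to 90N.

    import numpy as np

    lats = np.linspace(-90, 90, 37)     # 5-degree spacing, 90S to 90N
    tropics = np.abs(lats) <= 30        # 13 latitude bands retained
    X_tropics = field[:, tropics, :].reshape(len(field), -1)
    # 13 x 72 = 936 predictor columns, versus 2,664 for the full field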

I conclude from this exercise that the results in BC17 are not supported by a more careful analysis of their data, using their own statistical method. Instead, results based on what appears to be the most appropriate choice of predictor variables – OLR seasonal cycle over 30S–30N latitudes – indicate a negligible (3%) increase in mean predicted warming, and on their face support a greater narrowing of the range of predicted warming.

For possible reasons for the problems with BC17’s application of PLS regression, see the longer and more detailed post at ClimateAudit.

Why BC17’s results would be highly doubtful even if their application of PLS were sound

Despite their superiority over BC17’s all-predictors-simultaneously results, I do not think that revised results based on use of only the OLR seasonal cycle predictor, over 30S–30N, would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. BC17 make the fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models also applies in the real climate system. I think this is an unfounded, and very probably invalid, assumption. Therefore, I see no justification for using observed values of those aspects to adjust model-predicted warming to correct model biases relating to those aspects, which is in effect what BC17 does.

Moreover, it is not clear that the relationship that happens to exist in CMIP5 models between present day biases and future warming is a stable one, even in global climate models. Webb et al (2013),[9] who examined the origin of differences in climate sensitivity, forcing and feedback in the previous generation of climate models, reported that they “do not find any clear relationships between present day biases and forcings or feedbacks across the AR4 ensemble”.

Furthermore, it is well known that some CMIP5 models have significantly non-zero N (and therefore also biased OLR and/or OSR) in their unforced control runs, despite exhibiting almost no trend in GMST. Since a long-term lack of trend in GMST should indicate zero TOA radiative flux imbalance, this implies the existence of energy leakages within those models. Such models typically appear to behave unexceptionally in other regards, including as to future warming. However, they will have a distorted relationship between climatological values of TOA radiative flux variables and future warming that is not indicative of any genuine relationship between them that may exist in climate models, let alone of any such relationship in the real climate system.

There is yet a further indicator that the approach used in the study tells one little even about the relationship in models between the selected aspects of TOA radiative fluxes and future warming. As I have shown, in CMIP5 models that relationship is considerably stronger for the OLR seasonal cycle than for any of the other predictors or any combination of predictors. But it is well established that the dominant contributor to intermodel variation in climate sensitivity is differences in low cloud feedback. Such differences affect OSR, not OLR, so it would be surprising that an aspect of OLR would be the most useful predictor of future warming if there were a genuine, underlying relationship in climate models between present day aspects of TOA radiative fluxes and future warming.

Conclusion

To sum up, I have shown strong evidence that this study’s results and conclusions are unsound. Nevertheless, the authors are to be congratulated on bringing the partial least squares method to the attention of a wide audience of climate scientists, for the thoroughness of their methods section and for making pre-processed data and computer code readily available, hence enabling straightforward replication of their results and testing of alternative methodological choices.

Endnotes

[1] Greater future global warming inferred from Earth’s recent energy budget, doi:10.1038/nature24672. The paper is pay-walled but the Supplementary Information, data and code are not.

[2] Uncertainty ranges for the predictions are derived from cross-validation based estimates of uncertainty in the relationships between the predictors and the future warming. Other sources of uncertainty are not accounted for.

[3] I had some initial difficulty in running the authors’ Matlab code, as a result of only having access to an old Matlab version that lacked necessary functions, but I was able to adapt an open source version of the Matlab PLS regression module and to replicate the paper’s key results. I thank Patrick Brown for assisting my efforts by providing by return of email a missing data file and the non-standard color map used.

[4] N = incoming solar radiation – OSR – OLR; with OSR and OLR being correlated, there is only partial redundancy in also using the derived measure N.

[5] That is, (RMS) prediction error is determined using hold(leave)-one-out cross-validation.

[6] For each CMIP5 model, ΔT is predicted based on a fit estimated with that model excluded. The average of the squared resulting prediction errors will start to rise when too many PLS components are used.

[7] Average over 2081-2100.

[8] Although I only quote results for the RCP8.5 2090 case, which is what the abstract covers, I have checked that the same is also true for the RCP4.5 2090 case (a Spread ratio of 0.66 using 7 PLS components, against 0.85 when using all predictors). In view of the large margin of superiority in both cases it seems highly probable that use of the OLR seasonal cycle produces more skilful predictions for all predictand cases.

[9] Webb, M. J., et al., 2013: Origins of differences in climate sensitivity, forcing and feedback in climate models. Clim. Dyn., 40, 677–707.

[10] Use of significantly narrower or wider latitude zones (20S–20N or 45S–45N) both resulted in a higher Spread ratio. The Spread ratio varied little between 25S–25N and 35S–35N zones.

Moderation note:  As with all guest posts, please keep your comments civil and relevant.

245 responses to “Brown and Caldeira: A closer look shows global warming will not be greater than we thought”

  1. So if I get it, they first make models that have a tenuous relation to actual climate, and have not been properly validated. Then they engage in very complicated mathematical processing of some aspects of those models to try to convince us that somehow that makes them more correct. And the result is only good because it makes predictions worse.

    Sorry but I think it is all a futile exercise. The models are not up to the task. This is the ECMWF model for ENSO, not the entire climate.

    https://i.imgur.com/coRfZyA.png

    And it is just a forecast for a few months in the future.

    The worse they make their predictions, the sooner they will fall on their faces when the expected warming does not materialize.

    • Harry Twinotter

      “The worse they make their predictions, the sooner they will fall on their faces when the expected warming does not materialize.”

      So that is your prediction is it? And it is based on…

      • It is not a prediction. It is an observation.

        https://i.imgur.com/CXwGBte.png

        Models are already predicting more warming than is observed. Make the models predict 15% more warming and they will be 15% more wrong.

      • Harry Twinotter

        OK I accept your chart in good faith, and you show a 13-month moving average instead of a 5-year one as I have seen some others do (using a 5-year average removes the excursions above the CMIP5 median). Your results validate the one big projection from the climate models that the global mean temperature will increase as the CO2 forcing increases, something which some people deny.

        If you compare to one of the other data sets (not HadCRUT4) the results look a bit different.

        Even if we are fortunate and the observed warming falls below the projected warming, it just means we have a decade or two on our side. As Gavin Schmidt points out, the gap is not substantial.

      • Harry Twinotter

        A CMIP5 comparison to other data sets. Dr Schmidt’s feeling is that some of the forcings were misspecified. But I do recommend people wait for published studies.

        https://pbs.twimg.com/media/DKM9wDkXkAARHtE.jpg

      • Even if we are fortunate and the observed warming falls below the projected warming, it just means we have a decade or two on our side.

        No. It means models do not reproduce climate change. It might mean we have a decade or two, a century or two, or a millennium or two. It certainly means we don’t know what future climate will be like.

      • Harry Twinotter

        “No. It means models do not reproduce climate change.”

        Oh really? I am still waiting for you to make your case.

        Like I said before, the models project global mean temperature will increase as CO2 increases. The model projections are spot on in that respect. Unless you are trying to say differently?

  2. Curious George

    The article is paywalled, so I can only comment on a sentence “climate models that are going to be the most skilful in their projections of future warming should also be the most skilful in other contexts like simulating the recent past.” As models are “tuned” to match any desired part of the past as closely as possible, there is a lot of space for shenanigans. I recall Gavin Schmidt tweeting on Sep 20, 2017 “Claim of a substantial gap between model projections for global temperature & observations is not true (updated with 2017 estimate)”. If they had this good forecast in 2000, why did they not publish it then?
    https://twitter.com/ClimateOfGavin/status/910535115100606465

    • Harry Twinotter

      I guess the armchair climate scientists are going to come out in force on this one.

      “If they had this good forecast in 2000, why did they not publish it then?”

      I suspect you do not actually understand what you are saying here.

      • Curious George

        Harry, please show me where they published it in year 2000. I saw it first in 2017.

      • Harry Twinotter

        You still do not appear to understand what you are saying. I can go find the references for you if you like, but honestly you can do that too.

        You appear hung up on 2000 for some reason. Why do you think the projections must be from a study during that year?

      • CG, the AR4 was published in 2007, but the CMIP3 models used by that began with data only up to 2000 as Gavin showed. Hope this helps.

      • Curious George

        Please show me where they published it in 2007, 2008, 2009, 2010, 2011, or 2012.

      • AR4 is the 4th IPCC assessment which was a fairly public piece of work at the time. It won a Nobel Peace Prize, for example. All their models started in 2000. You can start with Figure SPM.5 here.
        http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr_spm.pdf

      • Harry Twinotter

        The chart appears to be from the Realclimate website, not a study. The CMIP3 data is from the IPCC AR4 report. To understand the chart’s context you need to see the Realclimate articles it was published under.

        The forcings used in the CMIP3 models are from observations up to 2000, the forcings after that are extrapolated. So the model runs are from sometime after 2000 and before 2007 when the IPCC AR4 Report was published, probably around 2004.

        If you want to know more about the chart, you can try tweeting Gavin Schmidt. But the articles in Realclimate are clear enough, they have done a whole series of them.

        So I still do not understand Curious George’s question. Obviously a chart that includes 2017 observations cannot have been made before 2017.

        http://www.realclimate.org/index.php/climate-model-projections-compared-to-observations/

      • Curious George

        Jim, thanks for the link. I am lacking your imagination; I don’t see anything resembling Gavin’s graph in AR4 SPM. Harry, the graph clearly labels everything after 2000 as “Forecast”. I don’t like people who engage in forecasting the past. I sort of like Gavin, so that figure must have been produced in 2000, maybe 2001.

      • Harry Twinotter

        Curious George.

        Your problem is you cannot read a chart, nor do you know how to find its source.

        The chart is fine, the projections come from around 2004 which is what they say in the Realclimate article which was the source of the chart.

        So the problem really is you just did not understand the chart, instead going off on some vague Conspiracy Theory.

      • CG, CMIP3 simulations start at 2000. You can represent the output many ways, but that is the data used. If you go into the detailed chapters of IPCC AR4 you may find more plots, and there are also sites where you can probably get the raw data which is possibly what Gavin did. If you don’t believe Gavin plotted the CMIP3 projections correctly, do it yourself. Don’t expect spoonfeeding if you don’t care enough to check it yourself, because it is now clear you won’t believe anything provided to you.

      • The “prediction” cone starts in 2000. It may have different forcing inputs from the original. I think it does. Obviously the observations have been added year by year. Is that what is stuck in his eye? If so, good. It’s utterly ridiculous.

      • Curious George

        Forecast: verb (used without object), forecast or forecasted, forecasting.
        – to conjecture beforehand; make a prediction.
        – to plan or arrange beforehand.
        I don’t have a problem with running a model in 2017 with data from 1990-1999 to see what the model would predict for years 2000-2017. But that’s not a forecast. That’s a model’s projection – the crucial “beforehand” is missing. It would have been a credible forecast when published in 2000 or maybe 2001.

        You are very well aware that the spaghetti graph of model runs only contains runs carefully hand-picked from thousands of runs not shown. What runs you select to show is the actual “forecasting”. You can select other runs not incorporated in AR4 later and get a very different “forecast”.

      • Harry Twinotter

        Jim D.

        “If you don’t believe Gavin plotted the CMIP3 projections correctly, do it yourself”

        You have lost the plot, Dude. Is this your best hobby, attempts at disrupting other people’s discussions? Go away, it is Xmas.

    • https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1-e1513668213559.png

      You may select any of these solutions. I’d pick the one closest to the mark and keep updating while no one was looking as temperature continued to diverge from hugely inadequate models and methods. Oh… wait….

      Early days but the only way it looks respectable is if they pretend that big and sudden recent blips are slow and steady global warming.

  3. Pingback: Brown and Caldeira: A closer look shows global warming will not be greater than we thought — Climate Etc. – NZ Conservative Coalition

  4. Reblogged this on Climate Collections.

  5. My simple-minded questions: If you look at Fig. 2d in Brown’s blog post, the red line (Observationally informed projections) is high compared to the yellow line (Observations). Isn’t it clear just from that bias that cooler models would be a better match to observations?

    And since temperature is more-or-less the integral of radiation flux (over time scales where the ocean stores heat), isn’t temperature a better signal to observe? Derivatives are always much noisier.

  6. Our skill in predicting the weather and by extension the climate is about on par with our skill at predicting the appearance of Katydids in Southern California.

  7. “As future patterns of SST increase are uncertain in GCMs, and may differ from those observed in the historical record, this introduces an additional uncertainty in the magnitude of global mean cloud feedback and our ability to constrain it using observations.”180,181 file:///C:/Users/rober/Downloads/Ceppi_et_al-2017-Wiley_Interdisciplinary_Reviews-_Climate_Change.pdf

    “Closed cells tend to be associated with the eastern part of the subtropical oceans, forming over cold water (upwelling areas) and within a low, stable atmospheric marine boundary layer (MBL), while open cells tend to form over warmer water with a deeper MBL.” (Ilan Koren et al, 2017, Exploring the nonlinear cloud and rain equation.) The region of the planet where sea surface temperature changes most dramatically is over a large part of the Pacific Ocean. Rayleigh-Benard convection cloud physics result in changes in planetary albedo.

    I have no doubt that models may be run with historic sea surface temperatures – but projecting these into the future remains an impossible dream. Nor do I imagine they have cloud ‘parameterisation’ correct. They are of course unable to model cloud physics as the grids are too coarse for one thing.

    But if they fail to understand the nonlinear mathematics of climate models – it is all utter nonsense at any rate.

  8. Positive is more cloud and negative less. The dip in the last couple of years is from a warm eastern Pacific.

    https://watertechbyrie.files.wordpress.com/2017/08/ceres_ebaf-toa_ed4-0_anom_toa_shortwave_flux-all-sky_march-2000tojanuary-2017-1.png

    Any model getting this right is using wrong physics.
    Until they get the basics right – I have nothing but contempt for most climate science.

    It is all nonlinear – especially models. There is great science out there but this is not it.

  9. Is Nic comparing apples with oranges when comparing 1c with 1d? Seasonal OLR might significantly reduce the spread in temperature predictions when the right weighting is given, but that reference spread is the unweighted temperature prediction with just the seasonal OLR as a predictor, right? It wouldn’t be the unweighted spread with all predictors because then 1c would be saying that any one predictor is better than them all together, which makes no sense at all. What 1c says is that seasonal OLR is the factor with the biggest improvement over the mean, but the other factors can also contribute improvements of their own.

    • Put another way, reducing the spread with one predictor doesn’t mean that you have improved the skill more than using them all. To say that seasonal OLR is the only one that matters is to ignore current-climate errors in the other eight predictors when making your final prediction. Figure 1c shows they all can add value because some models are better than others in those measures too, and why ignore them. What you are suggesting is redefining current skill with only one variable which was not the point of the paper. They wanted to use all relevant variables to define skill in choosing how to weight the models going forwards.

    • No, you have misunderstood. The reference spread (the denominator of the Spread ratio) is the standard deviation in 2090 GMST as actually simulated by the different models in the ensemble – 36 models. It is the same measure whether one predictor field is used or all of them.
      None of the panels in Figure 1 shows that some models are better predicted than others, whether by any predictor or by all of them. It is averages across the entire model-ensemble that are plotted in all cases.

      • Yes, see my later comments where I have refined my criticism. You are conflating spread with skill. Using one predictor instead of all nine implies less skill in the current climate with the other eight, where this paper was trying to show that selecting for more skill can be used to refine the prediction. You are not following what the paper was aiming at by removing all the predictors.

  10. Well, I personally don’t see how this paper really adds much. If models are tuned to certain features of the historical record, they show more warming. However, if the models themselves are more flawed than simple energy balance methods, these results mean little. As Mauritsen and Stevens point out, the evidence is that there is a correlation between sensitivity of AOGCMs and weak anthropogenic forcing due to exaggerated aerosol negative forcing. This would indicate that higher sensitivity models lack credibility.

    • According to this paper, the higher sensitivity models get the global structure and variability in radiative terms better.

      • Yes, but is the paper correct?

      • I think Nic’s interpretation of what the paper was doing was wrong. It sets out to fit all those predictors to determine what the good models do. If you only want to fit one predictor and disregard how bad the others are in the current climate, you get what Nic does, and you are not answering the same scientific question any more.

      • Jim D, but the question here is about the total energy balance of the models. If their aerosol forcing is too negative, they can’t get future temperature trends right if this forcing is corrected.

      • No, the question is why Nic Lewis wants to use only one predictor when that necessarily degrades the representation of the current climate. The idea is that those nine predictors define models that are more adequate compared to observations. Using one predictor is using a very limited skill measure going forwards, disregarding poor performance with OSR, for example, in making the projection. OSR would capture aerosol responses for example, and should not be ignored.

      • “this paper was trying to show that selecting for more skill can be used to refine the prediction.”
        To be more accurate, the abstract says “we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming.”

        I show that, using their chosen PLS regression method and their method of appraising the goodness of the relationship, a much stronger (higher skill) relationship is found between the magnitude of projected global warming and patterns of OLR seasonal cycle magnitude than with all predictor fields simultaneously.

        As I wrote in response to another comment, the fact that the prediction error was larger when all predictors were used than with just one of them, the OLR seasonal cycle, means that their method failed to achieve its objective of optimally weighting the predictors when all predictors were used. As there are in that case so many more predictors than predictand values (approaching 1000 times more), it is perhaps not surprising that the method was not able to achieve its objective in that case.

      • Don’t conflate spread with skill. It is easy to get a narrow spread with one predictor if the models are more narrowly selected for that variable. In the extreme case, one model is good for that variable and the spread is very narrow. That does not make the temperature prediction better for that model, also because you have only verified one variable against reality for that model instead of all nine.

      • “According to this paper, the higher sensitivity models get the global structure and variability in radiative terms better.”
        No, that is not a correct interpretation. Study the paper and see exactly what it did.

      • This is what their abstract says “Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general.” Distinguish that from what I said.

  11. Nic,
    Thank you for you review of this paper.
    I have tried to flesh it out a bit at ATTP’s who has a post on it as well with some points of interest for others here. One was the actual date range of the data used.
    My concern was that if one merely used the last 3 years’ observations before the paper [2014 to 2016], for instance, in a period of warming due to a prolonged El Niño, to estimate the degree of warming, it would obviously match with the higher ranges of possible warming in the models [a higher ECS estimate would be derived].

    I was informed by a commentator that,
    “The data period used in the paper is 2001-2015, basically defined by availability of complete annual CERES data at the time they were conducting the study.”
    which was better, though it would still give more positive warming trends due to the uptick at the end.

    However his next comment on timing was
    “I’m not sure this is really relevant to the study in question though since, based on some of the figures, they seem to be utilising short-term variations in spatial CERES TOA data rather than longer term absolute global averages. They found correlations between short-term flux variations and long-term feedbacks (and therefore climate sensitivity) in CMIP models, and then determined an equivalent short-term flux variation observational figure from CERES TOA data. They then infer long-term feedback strength by applying the observed figure to the correlation found in models.”

    I found this a bit difficult to process but the message sort of implies that the range of assessment may still have been from short term very warm data periods in the 2001-2015 range rather than the whole range itself.
    As you have read the paper you might be able to clear up this confusion with a few simple comments on the data range and data ranges used for the paper.
    Thanks.

    • I believe that all the recent past climate data used in the paper is taken from the full 2001-2015 period, as monthly averages. That is rather a short period for deriving the seasonal cycle, but it utilises pretty much the entire CERES record.

  12. I will go read your post as well and get back. The middle part of his comment was
    “On TOA flux measurement, as I understand things the global annual average absolute net flux from CERES is very uncertain in relation to any imbalance. Proportionally absolute flux errors are arguably fairly small, only around 1-2%, but with the global average TOA imbalance we’re trying to find something smaller than that. However, the month-to-month and year-to-year changes are considered to be highly accurate, so what’s needed is some kind of ground-truthing to define a uniform offset for the data. The published CERES-EBAF release uses Argo OHC data to do that.”

    Does anyone know if the satellites that measure the energy from the sun and the energy radiated from the earth have matched over the last 5 years? Is there an imbalance, and does anyone actually have a graph of how big the imbalance is?

    • angech,
      I believe the comment you quote about CERES flux data is valid. The absolute values of OLR and OSR are reasonably accurate, but insufficiently so to obtain a useful estimate of the absolute value of N. That is because N, the excess of incoming solar radiation over OSR plus OLR, is over two orders of magnitude smaller than (OLR + OSR). So the CERES N absolute value is adjusted to match N derived from its counterpart, the Earth’s rate of heat accumulation, as derived from Argo ocean observations, etc. But the CERES errors are thought to be quite stable, so month to month and year to year changes in CERES N should be realistic.
      Argo based estimates of N over the decade to 2016 are 0.6-0.7 W/m2. Five years is rather too short a period for accurate estimation, while before 2005 or 2006 the Argo network was not fully deployed.

  13. I wonder whether there has been any significant progress in reducing the uncertainty in the estimate of the net energy balance apart from the paper by Loeb et al around the same time as the following by Stephens et al.

    “The net energy balance is the sum of individual fluxes. The current uncertainty in this net surface energy balance is large, and amounts to approximately 17 Wm–2. This uncertainty is an order of magnitude larger than the changes to the net surface fluxes associated with increasing greenhouse gases in the atmosphere (Fig. 2b). The uncertainty is also approximately an order of magnitude larger than the current estimates of the net surface energy imbalance of 0.6 ±0.4 Wm–2 inferred from the rise in OHC13,14. The uncertainty in the TOA net energy fluxes, although smaller, is also much larger than the imbalance inferred from OHC.”

    An update on Earth’s energy balance in light of the latest global observations
    Stephens et al 2012
    https://tinyurl.com/ztc9bso

    Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. Loeb et al 2012

  14. “I think I’ve never heard so loud
    The quiet message in a cloud.
    Hear it now, what were the odds?
    The raucous laughter of the Gods.”
    Kim

    Rhyming couplets. I’m impressed.
    Thank you very much.
    Chief Hydrologist

    top comment @ https://judithcurry.com/2011/02/09/decadal-variability-of-clouds/

    But then Kim was always top comment. I laugh at the problem now. Clouds and – close behind Tomas and his spatio-temporal quasi standing wave chaotic oscillators. In climate and just about everywhere else.

    I have been communing with clouds on my daily – at a minimum – ride into town on my mag wheeled, space frame chassis, sprung and electrified wheel chair with the pilot’s seat. I went on the beach today – for the first time in fact. Anyway – I have time to commune with sky and sea. I have seen clouds like fluffy cobblestones across the sky. Clearly Rayleigh-Bénard convection cells. Closed cells over warm almost tropical waters in an emerging La Niña in the western Pacific. There is even a formula I saw today?

    But we knew that back then. That natural variability involved the El Niño – Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO). That together are closely coupled in the Interdecadal Pacific Oscillation as quasi standing waves in Earth’s spatio-temporal globally coupled chaotic flow field. And that cloud changes – observed from the surface and by satellite – changed with sea surface temperature.

    Ideas of solar UV/ozone chemistry subtly modulating atmospheric pressure at the poles – and blocking patterns in both hemispheres spinning up more or less winds and great ocean gyres – came later. This modulates upwelling from the abyss in the eastern Pacific, especially of cold and nutrient rich water. It is an aperiodic biological miracle with abundances of fish and happy seals, birds and bears.

    So solar UV triggers Hurst effects over millennia. That’s an idea from the middle of the 20th century. Hurst, after studying a millennium and a half of Nile River height data, discovered Hurst effects. Well he didn’t call them Hurst effects. I have an idea Mandelbrot had something to do with that. But you get the idea don’t you? The Nile has changing regimes in floods and drought that are recognizably chaotic. There is a mean and variance that persists for a time and then an abrupt shift to another regime. At scales of 7 years of abundance and 7 years of famine to theoretically universal scale – should the Sun survive that long. Sorry – Mandelbrot got in my head.

    I look forward to the model that can deterministically model this. I really do. But let’s be realistic – and selectively quote from Julia Slingo and Tim Palmer (2012) again:

    “Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.”

    Now that’s an idea that comes from the 1960’s and demonstrates convincingly the silliness of both the paper and – I am sorry to say Nic – Nic’s response. The butterflies wings. Each of these models is capable of families of many 1000’s of feasible solutions due to uncertainties in initial conditions and the core nonlinear equations of fluid motion. This may only be statistically – possibly – approached as a probability density function and not by picking – while no one watches – a solution that looks pretty and sending it to the IPCC.

    I am thinking of getting back to Tim Palmer – and asking if I am selectively quoting him? Really – this is not good enough people.

  15. If I understand Nic Lewis’s argument it is this:
    1) One predictor is far superior to the others.
    2) The predictand is significantly different if only the superior predictor is used.
    3) Therefore, the other predictors are outliers that should be ignored and the predictand from the superior predictor is the best we can achieve.

    Inferior predictors shouldn’t be omitted, even if they increase the uncertainty in the result; but they should have non-zero weights applied to them that are consistent with some measure of their quality. Have Brown and Caldeira applied equal weights?

    • jefftweb ,
      No, Brown and Caldeira didn’t apply equal weights.
      They used a sophisticated statistical method, PLS regression, which should optimally weight each spatial location in each predictor field used. Optimal here is in terms of making accurate predictions of future GMST warming in the model under the RCP scenario and future period involved. They measured this accuracy by the reduction in the Spread ratio, which relates to the mean prediction error.
      The fact that the prediction error was larger when all predictors were used than with just one of them, the OLR seasonal cycle, means that their method failed to achieve its objective of optimally weighting the predictors when all predictors were used. As there are in that case so many more predictors than predictand values (approaching 1000 times more), it is perhaps not surprising that the method was not able to achieve its objective in that case.

      “Inferior predictors shouldn’t be omitted, even if they increase the uncertainty in the result”
      Well, Brown and Caldeira’s chosen method is designed to minimise the prediction error, so your argument is with them, not me.

      • Nic, I’m using “predictor” to indicate one of the nine types of data that Brown and Caldeira list in the first figure of your post. That seems to be what they mean by predictor, rather than each individual datum. In your post, aren’t you saying that only the OLR seasonal cycle data should be used? And that the other eight types of data should be ignored? The PLS regression seems to eliminate data that don’t affect the result. What I object to is completely eliminating data that disagree with OLR seasonal cycle. I think my argument is with you, not them. Let me rephrase the question from my previous comment: Do Brown and Caldeira apply the same weight to all nine types of data? I ask, because I’m not sure how to tell. If their method implicitly gives more weight to better data, it may be OK; otherwise, not.

      • jefftweb,
        “The PLS regression seems to eliminate data that don’t affect the result.”
        PLS regression is intended to do so, but it is clear that it is not capable of actually doing so in this case. Otherwise the prediction error would be lower when using all predictors simultaneously.

        “Do Brown and Caldeira apply the same weight to all nine types of data?”
        They do so initially, but their PLS regression method will result in different average weights in the regression coefficients being given to the nine types of data. The data in the nine predictor fields have differing variances, which also affects how much influence they have on the prediction.

        The underlying statistical problem appears to be that the PLS method they use simply doesn’t work properly – it doesn’t fit a satisfactory statistical model – when all nine predictor fields are used simultaneously.

  16. As he writes, the central premise underlying the study is that climate models that are going to be the most skilful in their projections of future warming should also be the most skilful in other contexts like simulating the recent past.

    =============

    So does fitting a polynomial curve win me any prizes?

  17. Nic Lewis, Thank you for the essay.

  18. Would it be very difficult or expensive to rerun these models using actual greenhouse gas concentrations, incoming solar energy, and other input parameters? I understand the models were run forward in 2007, therefore they have about a decade of additional data to test their skills.

    I have only run oil field models; we check the model setup with actual data, and if necessary do a bit of arm bending to account for phenomena arising from our inability to predict small scale patterns. Feed the model the ten years’ worth of additional input data, run it, see how it matched reality. I can see El Niño making a mess out of this approach, but I would definitely try to use a 10 year reanalysis of real data to test against the model using 10 additional years of real inputs. Using a reanalysis you can see in gorgeous color where the model has performance gaps.

  19. than we thought.
    Who’s “we”?
    Not me!

  20. From Paul’s site
    “Given your results, what do they imply for the climate sensitivity of the system at the moment? Given that the widely used figure in models is around 3C and you have found them under-predicting, I assume the ECS from your work would be in the 3.5-4C range?
    Yes, our constrained central estimate of ECS is 3.7.”

    The ECS he uses is on average 3.7 instead of the climate models’ 3.0.
    Plug this into the graphs and you get a 27% higher rate of increase.

    Fine.
    You set the parameters you want, you get the result you want

  21. Kate Marvel et al.
    “An emerging consensus suggests that global mean feedbacks to increasing temperature are not constant in time. If…. feedbacks become more positive in the future, the equilibrium climate sensitivity (ECS) inferred from recent observed global energy budget constraints is likely to be biased low.”
    Stating the obvious and also implying that the current observed feedbacks are insufficient to be any cause for concern.
    Also conveniently forgetting to mention that if the feedbacks were negative [clouds] the ECS would go lower.
    Fits in with this paper picking a period of increasing temperature to imply increasing feedbacks and ignoring natural variability.

    Plain Language Summary
    “We don’t know how hot it’s going to get because we don’t know exactly how the planet will react to human-caused climate change. You might think we could estimate this from observations: But …..”

    Again observations fail her [them] and it is the observation’s fault. The models are all spot on even though they are all different.

    • Again observations fail her [them] and it is the observation’s fault. The models are all spot on even though they are all different.

        The current observations are 0.18 ℃ per decade, and they are going up. They have not failed anybody. You want the observation to censor science.

      Because of your politics.

      • True.
        Small point: the current observations are now going down and have been doing so for the past 6 months.
        I am sure current means now, not 6 months ago. The observations used finished in an El Niño year which is cherry picking as Tamino would say.

      • “You want the observation to censor science.”

        That’s a bit strong. I think Penn State should double M. Mann’s funds and let him do more public outreach while recruiting the best students.

      • JCH | December 16, 2017
        ” You want the observation to censor science. ”

        I cannot believe you wrote this, meant this or think this.
        Observations do not censor science.
        Observations have no political bias.
        Observations are data.
        Observations are facts.
        When your views deny the data, deny the importance of reality, as far as we can assess it, then politics or belief enter very strongly into comments.
        You use the observations all the time when it suits you.
        When the observations do not support your views you get cranky with the observations and the people making them.
        Fine.
        Go ahead.
        Observations do not censor science.
        People do.

      • Well, I cannot believe the ridiculous things you say.

        Again observations fail her [them] and it is the observation’s fault.

        Ridiculous.

      • The El Niño ended ~18 months ago. The 30-year trend to the end of the 2016 El Niño is less than the 30-year trend to November 2017.

        The current temperature anomalies are not the undoing of the 2016 El Niño.

        And, it’s looking like December 2017 could be as warm as November 2017, and perhaps even a bit warmer. Getting hotter into the teeth of La Niña does you guys no good at all.

      • Have a happy and warm Christmas, JCH.
        Thanks for your help at S.O.D. on GHG.
        My understanding of Ms Marvel’s comments is different to yours.
        I will await the verdict of the tapes in January for 2017.
        Happy Xmas to everyone else as well.

  22. Looking forward to the end-of-year 2017 global surface temp and the satellite results. A big fall in December anticipated or not, JCH?
    Without knowing, the silence in the warmist blogosphere suggests a big fall but that isn’t a scientific criterion, just an observation.

    • Harry Twinotter

      Who cares about a fall in one year? And a year preceded by more rises than falls.

      • A zen question from a zen master.
        Not in the mood to be difficult at the moment I am afraid.
        You win.
        Happy Christmas as well.

    • First, on satellites, who in their right mind gives a chit what those silly things do up in goose world?

      As for December, right now December is warmer than November. Your temperature collapse has not happened, and you’re quickly running out of little girl. Short, sweet, and warm.

      Hint, 2018 could easily be as warm as 2017, which will increase the 30-year trend.

      We’re having a cooling hiatus. Children in England may never know cooling again. Haha.

      • jch

        Actually, children in England have never known a warming trend. The temperature trend has been gently downwards here all this century as I posted here last year.

        Which proves something or other.

        tonyb

      • The temperature trend has been gently downwards here all this century

        https://www.metoffice.gov.uk/hadobs/hadcet/graphs/HadCET_graph_ylybars_uptodate.gif

        Need a better class of “sceptic”.

      • And, it would not mean a thing. The physical explanation would be interesting. It would not be: the enhanced greenhouse effect is not happening.

      • VTG

        Here we go. The trend since the beginning of the century in CET is downwards. The Met Office confirmed this when I spoke with them about the data, but rightly pointed out that, though interesting, it was not long enough to constitute a change in our climate.

        https://curryja.files.wordpress.com/2015/11/slide51.png

        The change in seasons is especially interesting.

        tonyb

      • It takes a very special analysis to describe the data as “gently downwards here all this century”.

      • VTG

        No, what I actually said was:

        ‘The temperature trend has been gently downwards here all this century…’

        Are you saying it isn’t?

        tonyb.

      • “Are you saying it isn’t?”

        OK, as you choose to pick a few nits.

        What you posted, in its totality:

        Actually, children in England have never known a warming trend. The temperature trend has been gently downwards here all this century as I posted here last year.

        is

        (1) untrue and
        (2) a poor way to characterise the data.

        (1) because it only applies to some children; those born in 2010, just for example, have experienced a very strong rising trend, and equally those born earlier did experience a warming trend later in their childhoods. In other words, it implies a monotonic decrease.
        (2) is blindingly obvious.

        Had you posted: “Children born in January 2000, have, over their life to date, experienced a net cooling trend”, that would have been true. But still a desperately poor way to characterise the dataset.

        Better scepticism required.

      • VTG

        Wow! That is really desperate stuff. You can do better than that. What I said was quite correct. Even the Met Office agree that there has been a gently downwards trend this century.

        I think you are over-parsing this, don’t you? It was a tongue-in-cheek reply to JCH’s tongue-in-cheek comment.

        Whether it all means anything is another matter, but nevertheless it is interesting.

        tonyb

      • Tony,

        what you wrote wasn’t that the trend over the century to date is negative.

        You said it was gently down “all this century” – which was why I had a look, because it sounded wrong. Sure enough, it was wrong. That’s what I’ve come to expect from “sceptics”.

        In reality, it’s very obviously noisy over that time, which isn’t interesting at all, merely exactly what you’d expect for a small piece of land over a short period.

      • VTG

        You are normally a sensible commenter so I will assume you have been writing this whilst at the office Christmas Party.

        The trend from 2000 to now has been gently downwards. If you do not agree with the Met Office that is your prerogative.

        tonyb

      • If you choose to misrepresent the met office, that’s your prerogative.

      • VTG

        I have a variety of emails from the met office. They accept the trend is downwards, even if you don’t seem to want to.

        As they rightly say, it is over far too short a period for it to be meaningful. They suggested to me that a trend line of 50 years, rather than 20 or so, would be needed for it to be scientifically interesting.

        As I said tongue in cheek at the outset, ‘it proves something or other.’

        tonyb

      • Tony b
        How about the chart showing temp trends in CET from 1640 to now or the new 1540 to now?

        Too short a chart shows little except “children won’t know what global heating looks like”
        Scott

      • They accept the trend is downwards, even if you don’t seem to want to.

        Tony, stop misrepresenting me. If you want to defend something, defend the actual claim you made – that the trend has been continuously down; “children in England have never known a warming trend. ”

        That was your claim, word for word. It was incorrect.

        Nowhere have I said the trend from 2000 to now was anything other than negative. Please stop claiming I have – or link to where I did say that.

      • Vtg

        I suggest you reread your comment at 11.46 where you became incredibly pedantic.

        Jch made a light hearted comment to which I replied in similar vein. Nobody is accusing anyone of anything or of deliberately misrepresenting anyone.

        The trend has been downwards this century to date. That is at odds with the CET since its inception which shows an upwards trend for some 300 years. If you want to put a different interpretation on the data or parse it in some way then that is up to you.

        As you well know, that sentence you quoted then ran on:

        ‘The temperature trend has been gently downwards here all this century”

        So in your last post you seem to agree the trend has been negative. I also say this. So what on earth is all this about?

        Tonyb

      • The children in England have known a warming trend. Some have been fooled by the pause; others have not.

      • The trend has been downwards this century to date. That is at odds with the CET since its inception which shows an upwards trend for some 300 years.

        This seems an implicit claim that the current, just-negative trend over 17 years is unique over the history of CET. I can only suggest you check it to save me the bother.

        So what on earth is all this about?

        Dunno Tony, you tell me why you’re making repeated inaccurate and misleading claims, as you’ve just done again in your latest.

      • Oh for God’s sake. The reason for the CET hiatus seems obvious.

        e.g. https://www.nature.com/articles/ncomms8535

        The suggestion is that we ain’t seen nothin’ yet.

      • The reason for the CET hiatus seems obvious.

        Poe’s Law applies to a significant fraction of posts here. This one is a great example.

      • Solar! LMAO.

      • “Solar activity during the current sunspot minimum has fallen to levels unknown since the start of the 20th century. The Maunder minimum (about 1650–1700) was a prolonged episode of low solar activity which coincided with more severe winters in the United Kingdom and continental Europe. Motivated by recent relatively cold winters in the UK, we investigate the possible connection with solar activity. We identify regionally anomalous cold winters by detrending the Central England temperature (CET) record using reconstructions of the northern hemisphere mean temperature. We show that cold winter excursions from the hemispheric trend occur more commonly in the UK during low solar activity, consistent with the solar influence on the occurrence of persistent blocking events in the eastern Atlantic.” http://iopscience.iop.org/article/10.1088/1748-9326/5/2/024001/meta

        Nuts to both of you.

      • Vtg

        Implicit claim? I said nothing of the sort. There have been many ups and downs during the long record but the overall trend has been up.

        Forgive me but I feel I have wandered into a Monty Python sketch. It has become very silly.

        I am sure we both have better things to do

        Tonyb

      • climatereason, the CET mean this century is warmer than the mean in the 20th century, which in your terms means children today don’t know the temperatures that the adults grew up in. That’s climate change properly put into the generational context.

      • “For a high-end decline in solar ultraviolet irradiance, the impact on winter northern European surface temperatures over the late twenty-first century could be a significant fraction of the difference in climate change between plausible AR5 scenarios of greenhouse gas concentrations.” https://www.nature.com/articles/ncomms8535

        Although I did link this earlier – they obviously missed it.

      • Jimd

        No, this is putting climate change into a generational context

        https://curryja.files.wordpress.com/2015/02/slide4.png

        Tonyb

      • Tony

        You ought to arrange for VTG to be your permanent debating adversary. You could tour with him to demonstrate what not to do in debates.
        He agreed with what you said and then spent the rest of the thread backpedaling, saying that he never said what it was obvious he did say.
        What’s not to like? This reminds me of another thread on the forest fires post. When confronted with data not conforming to their world view, they become contortionists trying to rearrange facts to their liking.

      • Ceresco,

        This is getting weird.

        1) Is a 7 year old a child?
        2) what is the trend since 2010?
        3) have any children in England “known a warming trend”?
        4) was Tony’s claim accurate?

        Let’s have your contortions.

      • climatereason, you said no, but ended up agreeing: it is a generational difference, or even a lifespan difference. Many children today will see 2100, and with continued unmitigated emissions that climate would be warmer by several degrees, unlike anything known before in human history.

      • “Cosmic rays, solar activity have greater impact on climate models
        1 day ago … Lead author, Henrik Svensmark, from The Technical University of Denmark has long held that climate models had greatly underestimated the impact of solar activity. He says the new research identified the feedback mechanism through which the sun’s impact on climate was varied. ”
        The sun? Surely not.

      • jimd

        if you look closely, what I was demonstrating is that we can observe a general trend upwards from around 1660 or so by looking at the temperatures a human experiences over a lifetime.

        Temperature rises in the 1730’s were greater than at any time until the 1990’s, which was remarkable enough to warrant a paper by Phil Jones. He was especially interested in the ferociously cold winter of 1740, which stopped that warming trend in its tracks.

        The Met Office, the Dutch met office and other scientists agree that CET has some utility as a NH proxy (and perhaps, but more arguably, a global one).

        So we can observe that, without additional CO2 for most of that period, temperatures were rising – sometimes substantially. In that time of course there were also some falls, as it included some especially cold periods of the LIA.

        Nearby upland Dartmoor illustrates that crops/habitation were at higher levels than is possible today during the MWP, Roman and Minoan warm periods.

        So today is unusual, but as yet, not unprecedented.

        Whether that is because of natural cycles, sunspots (I am ambivalent on those), greater levels of solar activity, or because CO2 has a limited additional effect on temperature above around 300 ppm, is as yet unproven.

        In summary, the 1880’s global temperature average should perhaps be seen as a staging post in increasing warmth, and not the starting post.

        tonyb

      • we can observe a general trend upwards from around 1660…So today is unusual, but as yet, not unprecedented

        Really? I just looked at the data

        Average temperatures:
        1658-1758 = 9.00C, rate of increase 0.6C/century
        1758-1858 = 9.1 C, rate 0.0C/century
        1858 – 1958 = 9.3C, rate 0.5C/century
        1958 – 2016 = 9.8C, rate 1.9C/century

        And of course, uncertainty in the data is much bigger as we go back, particularly before 1800.

        I’d say the data shows there is little consistent trend until the 20th century (the trend is scarcely a tenth of a degree a century from 1658 – 1900) and current temperatures are absolutely unprecedented in the record.

        You make very confident assertions which are simply falsified by the most cursory examination of the data.

        Better scepticism needed.
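
        For anyone who wants to check this sort of arithmetic themselves, here is a minimal Python sketch (assuming the HadCET annual means have been saved locally as a plain two-column year/temperature file – the file name and layout here are illustrative, not the Met Office’s):

            import numpy as np

            # Annual mean CET: two columns, year and temperature in degrees C.
            # File name and layout are assumptions; adapt to the actual download.
            data = np.loadtxt("cet_annual.txt")
            years, temps = data[:, 0], data[:, 1]

            def segment_stats(y0, y1):
                """Mean (C) and linear trend (C/century) over [y0, y1]."""
                mask = (years >= y0) & (years <= y1)
                slope = np.polyfit(years[mask], temps[mask], 1)[0]
                return temps[mask].mean(), 100.0 * slope

            for y0, y1 in [(1658, 1758), (1758, 1858), (1858, 1958), (1958, 2016)]:
                mean, trend = segment_stats(y0, y1)
                print(f"{y0}-{y1}: mean {mean:.1f} C, trend {trend:+.1f} C/century")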

      • How many Limeys live inside Central England, and how many Limeys live outside Central England?

      • tonyb, since 1980 the global warming rate has been about three times the average of the last century, so stay tuned for faster changes ahead due to this upturn.

      • Vtg

        It is not me making these bold assertions, but the likes of Phil Jones and the Met Office.

        I referenced earlier the comment from Phil Jones, who was intrigued by the warmth of the period around 1730 and noted the sharp rise in temperature from the 1690’s. Hubert Lamb, his predecessor at UEA, was also intrigued by this period, and the warming is noted in the annals of the Hudson Bay Company too. This from my recent article:

        “The annals of Dunfermline, Scotland, from 1733/4 recorded that wheat was first grown in the district in 1733. Lamb wryly observes that was not correct, as enough wheat had been grown further north in the early 1500’s to sustain an export trade (before the 1560’s downturn).

        This from a 2005 paper by Jones and Briffa [link] about the very warm period noted in old records and especially in CET:

        ” The year 1740 is all the more remarkable given the anomalous warmth of the 1730s. This decade was the warmest in three of the long temperature series (CET, De Bilt and Uppsala) until the 1990’s occurred. The mildness of the decade is confirmed by the early ice break-up dates for Lake Malaren and Tallinn Harbour. The rapid warming in the CET record from the 1690s to the 1730s and then the extreme cold year of 1740 are examples of the magnitude of natural changes which can potentially be recorded in long series. Consideration of variability in these records from the early 19th century, therefore, may underestimate the range that is possible.”

        So, the warmth was extraordinary, coming as it did hard on the heels of extreme cold. As Dr Jones observes, variability is perhaps greater than previously realised.

        All in all we can observe a warming trend that precedes this modern period by many years, but what the cause of it is, I don’t know.

        There have been warm periods and cold periods throughout our recorded history, and today barely exceeds the 1730’s, which in itself appears to have been superseded by a number of other warm periods that have been well researched. Whether in due course the current warm period will exceed those I have no way of telling.

        Btw, you cannot suddenly change periods in order to prove a point. Each century shows rises and falls, albeit with a rising trend. In taking a date from 1958 you have assumed a century’s worth of warming and extrapolated it, whereas the data is only for 58 years. I have no more idea of what the next 42 years will bring than you do. As shown with the warmer period from the 1690’s to 1739, a rising trend can suddenly be reversed.

        If you disagree with the data provided by the Met Office or De Bilt, their address is easily found. Although Dr Jones has retired he retains a keen interest in the climate, and an email sent to him at UEA will be forwarded.

        Tonyb

      • Jimd

        You have done the same as VTG in taking a short period, this time from the 1980’s onwards, and assumed it will yield a similar warming over a century as it does over 30 years or so. Looking at the record of the past, an upwards trend seems more likely to be interspersed with periods of cooling or hiatus than to be monotonic.

        Tonyb

      • Tony,

        The thing is, none of the words from the Met Office actually support your interpretation of the data.

        Which were:

        we can observe a general trend upwards from around 1660 or so

        And

        So today is unusual, but as yet, not unprecedented.

        The data doesn’t support your interpretation of it either.

        There is no general trend upwards evident from 1660.

        Both the multi decadal rise preceding and the current temperatures are entirely unprecedented in the record.

        A reasonable description would be a hockey stick, with generally flat albeit noisy data pre 1900, then a significant rise beyond that.

      • tonyb, just going by science the warming seen so far is nothing yet, even the local warm events you point to. Just giving some scientific perspective here which I know is not popular in these parts. It could be 1-4 C globally this century depending mostly on policies and partly on sensitivity. This dwarfs things you talk about that amount to only tenths globally, and rivals the kinds of changes since the Ice Age. It really is climate change of global scale and with global consequences. 2100 is within the lifetime of many children today, so it is also highly relevant. Don’t minimize this stuff.

      • Again, does the CET cover the entirety of England?

      • Jch

        My article, written a frightening 7 years ago, explains the geographical extent of CET:

        https://judithcurry.com/2011/12/01/the-long-slow-thaw/

        It was chosen using a number of rural stations in a roughly triangular area of central England selected to avoid the warming influences of the coast.

        It is susceptible to UHI, so from the 1970’s the Met Office, who maintain the data, make a small allowance. Arguably the substantial spike around 1990 to 2000 was caused by the use of Ringwood, a station quite susceptible to population influence, and by the fact it is adjacent to a large and growing airport. I have discussed this with David Parker, who created the data, and there are a number of papers on it.

        When this station was removed the temperature dropped, although whether it was due to the change of stations or natural variability is impossible to know.

        Vtg and Jimd

        You seem to be sceptical of the idea of warm periods prior to this one. The graphic in the article linked to above illustrates that the warming from around 1690 to 1739 was much more dramatic than that of the similar period centred on 2010. The current 30-year period does of course retain a temperature at a relatively high historical plateau, so any temperature reductions we can observe must carry that caveat.

        It is a very well documented era but is only one of many: for example the period from 1500 to 1560, which ended in a severe period of the LIA; a similar period centred around 1420; another around 1330; and the MWP itself, roughly 780 to 1180. Although the MWP was not continually warm, it was on the whole mild, dry and settled.

        The sea castles of Wales, such as Harlech, reflect the relatively high sea levels at that time. Even accounting for isostasy, the sea gate there remains high and dry today.

        To this day the govt-funded Dartmoor authority confirm that the Bronze Age and MWP were warmer than today in this upland area.

        Do you think the climate has been constant throughout history? Why is it so surprising that the temperature goes up and down, sometimes in quite short spells, while at other times the period of cold or warmth lasts for many years?

        Tonyb

      • Is this map approximately correct?

        https://i.imgur.com/HSra6BE.png

      • Jch

        Hope you found the map useful.

        My iPad has a mind of its own: I meant Ringway, not Ringwood. It is now the third largest airport in the UK, with some 22 million passengers a year. Mind you, much of England could be considered a heat source, as it is a small country with over 60 million inhabitants.

        Tonyb

      • Tony,

        A suggestion. Please stop trying to put words in my mouth.

        A second suggestion. Provide citations for your claims. Your assertions have proved unfounded several times in this thread.

        Amongst the many words in your latest are further unreferenced claims. I lack the time or motivation to check those out.

        Of course there has been variability in past climates.

        The current climate in the UK appears unprecedented on the timescale of the CET.

      • The current climate in the UK appears unprecedented on the timescale of the CET.

        ============

        The rate of increase is nothing out of the ordinary and the current temperatures are still below those in the MWP.

        If you look at the rate of decrease post-war over 30 years and compare it with the rate of increase, again nothing out of the ordinary.

      • Vtg

        The link I provided has some 90 references and direct links to a huge variety of papers and books. I also provided a link to the Jones paper. The article it comes from has another thirty references. I think that should be enough citations.

        I was merely demonstrating that as far as temperature rises go, the 1730 period is more remarkable than the current period in terms of temperature change.

        No one ever said the period covered by CET is the warmest period we have had. Where did I say that? It lies well outside the warmest periods, which are the MWP, Roman and Bronze Age.

        Tonyb

      • tonyb, no, we have been talking about the children, a subject you raised, and what kinds of temperature changes are relevant for the current children in particular. So far you seem entirely resistant to considering their prospects using a scientific basis.

      • Tony

        “Here’s a link with 90 citations” doesn’t wash.

        You’ve made several claims on this thread which were not actually supported by the data.

        You want a claim to be believed? Directly cite a reputable source which backs it up.

        Because every time you make a claim I’ve checked, it’s turned out false.

        In your latest post, yet again, you make a bald assertion. I don’t particularly care if it’s true or not, but if you want to convince me (it’s fine if you don’t) you’re going to have to provide a citation.

        I’m a sceptic, see?

      • Jimd

        It was not me that mentioned children; JCH did, in a jocular fashion, and I followed up in a similar vein.

        Vtg

        I have quoted numerous references and direct citations, including those from Lamb and Groves.

        The article ‘the long slow thaw’ directly makes citations adjacent to the relevant text.

        The Jones citation directly references the warming period ending in 1739.

        The various graphics directly use the Met Office CET series.

        I am not sure how many more directly relevant citations and references I can make.

        Tonyb

      • tonyb, yes,looking back now, I see it was all just silly japes and making fun of cherry-picking with JCH. You took the prize for that.

      • I guess the irony is that a cooling hiatus would be a flattish trend after a warming peak, and in a warming world it is unlikely this will ever happen again after this particular cooling hiatus goes paws up, which it will inevitably do. As for the CET map, I don’t quite see how anybody can take the CET seriously. I guess we could create a temperature record for the Confederate States of America. Why? Or the 13 Colonies. Or the Louisiana Purchase. How about the Great American Desert? And the Urban Heat Island stuff just escapes me. Warming is not caused by humans; it’s being caused by city dwellers?

      • JCH:

        “I guess we could create a temperature record for the Confederate States of America.”

        I think that’s an excellent idea. Such a record would indicate nearby temperatures while not matching them. The idea behind a single ice core is about the same. Is it everything? No, but it’s better than no ice core.

      • JCH: Warming is not caused by humans; it’s being caused by city dwellers?

        There you lost your focus. UHI is caused by cities, but not by CO2; reducing CO2 will have no effect on UHI.

      • I guess cities are caused by concrete, I-beams, shingles, etc., and I have been told that if CO2 was completely removed from the atmosphere, the cities would remain unchanged heat islands.

      • JCH: and I have been told that if CO2 was completely removed from the atmosphere, the cities would remain unchanged heat islands.

        “Unchanged” might be a stretch, but yes, the cities will remain heat islands if CO2 concentrations are reduced.

      • “It is estimated that almost half the pronounced recent trend in winter temperature since 1935 (north of 20°N) is due to atmospheric circulation changes (Hurrell 1996; Serreze et al. 2000). In addition, the upward trend in the North Atlantic Oscillation over the last 30 years accounts for a large portion of the increase in surface winter temperature in Europe and Asia (north of 40°N) (Hurrell 1996; Hurrell and van Loon 1997; Thompson et al. 2000; Hurrell et al. 2003). A mode of climate variability with extensive effects in the Northern Hemisphere, is the northern annular mode (NAM) (Thompson and Wallace 2001), which also goes by the name of the North Atlantic Oscillation (NAO) (Hurrell 1995) or the Arctic Oscillation (AO) (Thompson and Wallace 1998). Thompson and Wallace (2001) describe the northern annular mode as “a planetary-scale pattern of climate variability characterized by an out-of-phase relation or seesaw in the strength of the zonal flow along ∼55° and 35°N and accompanied by displacements of atmospheric mass between the Arctic basin and the mid-latitudes centers ∼45°N.”

        The CET is in the zone of influence of blocking patterns in the Atlantic – as is the eastern US. CET has the virtue of being a long term record in the right place. But then crazy incompetence is what we have come to expect.

      • Tony

        The various graphics directly use the Met Office CET series.

        I am not sure how many more directly relevant citations and references I can make.

        Here’s the thing Tony.

        The CET data does not support the various claims you’ve made about it on this thread.

        And you’ve responded by merely making many more tangentially related assertions about past climate.

        To which you now say “go and read my guest post and its citations”

        No.

        A guest blog at climate etc is the very antithesis of a reputable source, particularly when written by the same person making the claim!

        If you want to make further assertions about Romans, Minoans, or Vulcans indeed, feel free. But provide a cite to the specific source for the specific claim or be prepared for scepticism.

        Thus far, your assertions have not proved reliable when checked.

      • VTG

        Normally people complain that I provide far too much information and associated references. Although I have already quoted dozens of references in this thread, in context, I am pleased you want even more, this time on Dartmoor and the Bronze Age. Due to space constraints I have pared quotes to the minimum, so please read the complete item for context.

        I had assumed during our conversation that you were British, would already be familiar with much of the information, and would have visited the area, where the gov-funded Dartmoor authority make continual references to a warmer climate in the historical past in their interpretation centres and literature.

        As it seems you aren’t – and are therefore unfamiliar with the vast body of literature available on the subject – I have referenced just a few items about ancient Dartmoor, an upland area at max 2000 feet in the South West of England which, due to its height and location, is pretty marginal land and readily prone to climate changes affecting human activity. Many of the reaves, huts, circles and fields referenced can still be seen to this day.

        The detailed references at the end of this first link (most are relevant) are available in the Dartmoor research library, part of the same building where the Met Office archives are located in Exeter. I visit both establishments regularly for research purposes. To date many of the more interesting (to me) documents from both these sources have not yet been digitised due to budget constraints.

        In general, the academic studies concentrate narrowly on one aspect of the past climate, such as meres and lichen, so those relating to habitation, grazing, forests and crops need to be read also, as they show, on a human scale, the variations encountered as the climate waxed and waned.

        http://www.dartmoor.gov.uk/__data/assets/pdf_file/0005/814496/lab-archaeo.pdf

        “By the middle of the Bronze Age, most of the trees had been cleared from high Dartmoor and the land had become farmland. …. The crops would have been viable due to a warmer and drier climate than today. Dotted around in the fields are the remains of hut circles (round dwelling houses) of the prehistoric farmers. ‘

        ‘Towards the end of the Bronze Age the weather began to get colder and wetter and the soils became acid, causing grass and crops to grow less easily, and Dartmoor became a less pleasant place to live. Gradually the houses and fields on the high moor were deserted as their inhabitants moved to the lower ground, the burial places of their ancestors were abandoned.’

        Useful Reference Books
        ● Butler, J. Dartmoor Atlas of Antiquities, five volumes
        (Devon Books, 1991–97) (T)
        ● Dartmoor National Park Authority, A Guide to the
        Archaeology of Dartmoor (Devon Books, 2003) (ST)
        ● Devon Archaeological Society Devon Archaeology No 3:
        Dartmoor issue (Devon Archaeological Society, 1991) (T)
        ● Devon Archaeological Society Proceedings of the
        Dartmoor Conference 1994
        (Devon Archaeological Society, 1995–96) (T)
        ● Fleming, A. The Dartmoor Reaves (Batsford, 1988) (T)
        ● Gerrard, S. Dartmoor
        (Batsford & English Heritage, 1997) (ST)
        ● Gill, Crispin (editor) Dartmoor: A New Study
        (David & Charles, 1983) (ST)
        ● Hemery, Eric High Dartmoor (Robert Hale, 1983) (ST)
        ● Todd, Malcolm The South West to AD 1000,
        (Longman, 1987) (T)
        ● Sale, R. Dartmoor the Official National Park Guide
        (Pevensey Press, 2000) (ST)
        ● Woods, S. Dartmoor Stone (Devon Books, 1988) (ST)
        ● Worth, R.H. Worth’s Dartmoor (David & Charles,
        1971; Peninsula Press, 1994) (ST)

        see also Turney et al.: ‘Bronze Age abandonment of marginal lands in Britain’

        The stone circles and field systems, which were the form of habitation and cropping in a Dartmoor generally milder than today, are mentioned here:

        https://www.triposo.com/loc/Dartmoor/history/background

        This recent publication also mentions changing conditions. To relate it to the actual likely climatic circumstances, the references to the heights of human habitation, the heights at which different crops could be grown, forestation etc. need to be accessed from the likes of Burgess, Turney and Lamb, all linked within it.

        http://burleighbrown.weebly.com/uploads/3/0/4/2/30429262/amesbury_et_al._2008_bronze_age_upland_settlement_decline_in_southwest_england_testing_the_climate_change_hypothesis.pdf

        “Within the zone of high-resolution analysis the record fluctuates with climatic deteriorations that are replicated in both proxy records occurring at ca. 745 cal BC, 350 cal BC and cal AD 75. There is then a significant shift to a warmer and/or drier climate at ca. cal AD 330…….it is followed by the most significant shift to wetter and/or cooler conditions in the profile lasting until ca. cal AD 1625. There is a further climatic amelioration from ca. cal AD 1625 to 1840.”

        ‘The results from the climate reconstruction clearly point to a period of climatic deterioration on Dartmoor between ca.1395 and 1155 cal BC. Furthermore, this is preceded by a period of relatively mild and stable climate from ca. 2000 to 1455 cal BC and an abrupt climatic amelioration from ca.1455 to 1395 cal BC. There is some agreement between the timing of this period and the supposed construction date of the field systems.’

        Much the same ground is covered here in this British Museum publication.
        https://eprints.soton.ac.uk/64120/1/BAR1_2008_2_Brown.pdf

        (It is another Brown, not me)
        — — —-

        ‘The climate was warmer than today (probably by 2°C) and this had an impact on the agricultural land being used, as farming was able to extend into the moors and uplands of Britain. By late in the Bronze Age (around 1000 BC) the climate cooled and became wetter, so many of the farming settlements of the upland areas were abandoned.’

        http://www.ancientcraft.co.uk/Archaeology/bronze-age/bronzeage_living.html

        (James Dilley – University of Southampton specialist in prehistoric techniques: Time Team; Coast; National Geographic; The Great British Countryside; New Scientist Live)

        Dr Philip Stott, the professor emeritus of bio-geography at the University of London, told The Telegraph: “What has been forgotten in all the discussion about global warming is a proper sense of history.”

        According to Prof Stott, the evidence also undermines doom-laden predictions about the effect of higher global temperatures. “During the Medieval Warm Period, the world was warmer even than today, and history shows that it was a wonderful period of plenty for everyone.”

        http://www.telegraph.co.uk/news/uknews/1426744/Middle-Ages-were-warmer-than-today-say-scientists.html

        ‘….Dartmoor contains the largest concentration of Bronze Age remains in the United Kingdom, which suggests that this was when a larger population moved onto the hills of Dartmoor. The large systems of Bronze Age fields, divided by reaves, cover an area of over 10,000 hectares (39 sq mi) of the lower moors.[15]

        The climate at the time was warmer than today, and much of today’s moorland was covered with trees……

        After a few thousand years the mild climate deteriorated leaving these areas uninhabited and consequently relatively undisturbed to the present day…..’

        https://en.wikipedia.org/wiki/Dartmoor

        tonyb

      • Tony,

        Thank you for the references; you have gone well beyond the call of duty although I suspect you thoroughly enjoyed doing so!

        I was entirely unaware of the bronze age climate of England and regrettably have never visited Dartmoor.

        A few things spring out: it’s rather alarming to note the impact of quite minor changes in climate given what we now face. It’s interesting that according to the CET we have already experienced almost the same degree of anthropogenic warming as the cooling noted in the Dartmoor paper. It’s also rather alarming that such past changes, should they be anything beyond regional in nature, likely imply a worryingly high climate sensitivity. I never had you down as an alarmist before.

        Prof Stott and the Telegraph provide an amusing if stereotypical coda.

      • The Stott Telegraph piece was from 2003, so I wonder why that is still making the rounds. Are there really so few examples of these kinds of quotes that you have to go back 14 years to find one? Does he himself have any updated views or second thoughts since the many more recent estimates of the MWP? There was also a perfectly good rebuttal from a Met Office person in the same 2003 piece.

      • Vtg

        Thank you for your comments. Dartmoor is very well researched, starting from Victorian times, although in their enthusiasm they took it upon themselves to ‘repair’ some of the stone rows. Fortunately many more have survived their best efforts.

        The moor is an interesting ‘canary in the coal mine’ due to its history, location, height and its susceptibility to climate change. It warrants more research. The Met Office is only some 10 miles away, but their days of primary research on such subjects are, they tell me, largely in the past.

        It was mined very extensively in Roman times and it is thought the tin was one of the reasons the Romans came to Britain in the first place. I wrote a long piece on the likely sea levels at that time at nearby St Michael’s Mount, a tidal island from which tin was exported, but I won’t burden you with that.

        Its other claim to fame is as the setting for Conan Doyle’s ‘Hound of the Baskervilles’, set on Hound Tor, where there are the remains of an early medieval village. I recommend a visit to the aptly named fast food van at its foot, ‘Hound of the Basket Meals’.

        It is here that the Beatles managed to get their tour bus firmly wedged in a local bridge for most of the day during the filming of ‘Magical Mystery Tour’.

        I have offered to take a party from the Met Office on a tour of Dartmoor’s delights, ancient and modern, but sadly to date there has been no response. They are an excellent bunch of high quality scientists, although I do not always agree with them!

        If you ever make your way to Dartmoor, the Grimspound Bronze Age stone circle and the aforementioned medieval village are hugely atmospheric, especially when the mist rolls in.

        With regards

        Tonyb

      • Jim D

        You and VTG apparently are working off the outdated and discredited assumption that the MWP didn’t exist. I’ve built up a record of so many citations that it’s overwhelmed my computer. After the holidays, once I have more time, I’ll provide you with the links so you can acquaint yourself with the 21st century knowledge base. I’m always eager to help people catch up.

      • The most recent reconstructions are much longer and show the MWP as a minor blip in a general descent from a Holocene Optimum thousands of years ago and ending in the LIA. That is the big picture. The Holocene Optimum was easily warmer than the MWP. The reconstructions also show that the rise since the LIA is about 20 times faster than the descent before that. The descent also is no mystery because of the phase of the Milankovitch cycle.

      • You can save papers until the cows come home, but until there is a single study that establishes the MWP was warmer than today, the process of establishing that it was warmer than today hasn’t even started.

  23. Pingback: What if global warming ends up being greater than we thought? | …and Then There's Physics

  24. Correction: what global warming?

    The climate is now in the process of transitioning to colder conditions as we move forward in time from here.

    Overall sea surface temperatures are trending down, now at +0.22C, and this will eventually translate to lower overall global temperatures.

    Prolonged low solar activity equates to overall lower sea surface temperatures and a slightly higher albedo.

    Lower sea surface temperature due to less UV light entering the ocean.

    Slightly higher albedo due to an increase in major volcanic activity and an increase in global cloud coverage/snow coverage.

    The above are tied to an increase in galactic cosmic rays, a low solar wind / low AP index, and a more meridional atmospheric circulation pattern, all of which tie into very low prolonged solar conditions.

    This winter should feature extremes due to very low solar activity combined with a -QBO, which causes indexes such as the AO/NAO/EPO in the NH to be negative on balance. The result: extremes.

    It is very hard to forecast the weather in a given spot because it is very hard to know exactly where the troughs and ridges will set up.

    Those spots under and ahead of the troughs will be stormy and cold.

  25. Pingback: Weekly Climate and Energy News Roundup #296 | Watts Up With That?

  26. Harry Twinotter

    I don’t think BC17 “predicted” greater than expected climate sensitivity; it said the model projections that appear more likely are on the warmer side of the model median. There is still an impressive spread of model projections.

    But one should not ignore the uncertainty monster; it bites both ways. As Carl Sagan once said, think of all the money that was spent in case the Russians attacked the West. The Russians didn’t, but I have not seen many argue that it was not money well spent, and prudent too.

    • Harry Twinotter: “think of all the money that was spent in case the Russians attacked the West. The Russians didn’t, but I have not seen many argue that it was not money well spent, and prudent too.”

      Look at all the military spending of the Soviet Union and Warsaw Pact nations in case the West attacked them. Not many would argue it was money well spent or particularly prudent given that it resulted in their total collapse.

    • I sometimes describe myself as a catastrophist in the sense of Rene Thom. But the progressive response is ideologically captured and completely nuts.

      https://watertechbyrie.com/

    • Harry, you said that “model projections that appear more likely are on the warmer side of the model median.” This would seem to be contradicted by Mauritsen and Stevens, who observe a strong correlation between high GCM ECS and lower (than historical data) anthropogenic forcing. This is due to an unrealistically negative aerosol forcing. This would imply that the higher ECS models are in fact less physically realistic than those with lower ECS and less negative aerosol forcing.

      Stevens further asserts that this correlation is almost certainly not an accident, i.e., to match historical temperature trends, the higher ECS models may require an unrealistic aerosol forcing.

      • Stevens didn’t evaluate them against radiative observations, which give the opposite story: that higher sensitivities match current observations better. Maybe this study supports models with higher aerosol effects via the OSR constraints that would be sensitive to them.

      • They proposed an effect, which is not actually the same as the Lindzen Iris Effect, that could move modeled ECS closer to the observation-based estimates. Their modeled ECS central estimate, using their speculative theory (nothing wrong with that), was 2.2K to 2.5K, down from 2.8K.

        Bjorn Stevens is currently a coauthor on an unpublished paper that suggests the lower bound for ECS should be in the range 2K to 2.4K. Stress: lower bound. Up from 1.5K. This would indicate the 2.2K to 2.5K central estimate has moved back up with additional science.

  27. Furthermore, it is well known that some CMIP5 models have significantly non-zero N (and therefore also biased OLR and/or OSR) in their unforced control runs, despite exhibiting almost no trend in GMST. Since a long-term lack of trend in GMST should indicate zero TOA radiative flux imbalance, this implies the existence of energy leakages within those models.

    This is not correct in most, if not all, cases. As described in the write-up for the NorESM1-M, there is a difference between TOM (Top of model) and the reported TOA (Top of atmosphere). The TOA value is created for comparison with the location of satellites (e.g. for CERES), and is the result of additional processing performed to account for stratospheric radiation transfer occurring above the top of the model.

    For example, NorESM1-M reports a piControl net TOA imbalance of just over 2W/m2. However, as it says in that link, the actual TOM imbalance was found to average 0.086W/m2. So, no, the reported TOA control imbalances are not indicative of energy leakages in the models.
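
    For readers who want to see exactly what quantity is being reported: the net TOA imbalance is simply incoming solar minus outgoing shortwave and longwave (the standard CMIP5 variable names are rsdt, rsut and rlut), area-weighted and averaged over the run. A minimal Python sketch (the file name is a placeholder, not an actual archive path):

        import numpy as np
        import xarray as xr

        # CMIP5 TOA fluxes: rsdt = incoming SW, rsut = outgoing SW, rlut = outgoing LW.
        # The file name is a placeholder for a merged piControl output file.
        ds = xr.open_dataset("NorESM1-M_piControl_toa.nc")

        # Area weights proportional to cos(latitude) on a regular lat-lon grid.
        weights = np.cos(np.deg2rad(ds.lat))

        n = ds.rsdt - ds.rsut - ds.rlut                      # net downward TOA flux
        n_global = n.weighted(weights).mean(("lat", "lon"))  # global mean per time step
        print(float(n_global.mean("time")))                  # W/m2 averaged over the run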

  28. paulskio
    That is an interesting point; thank you for raising it. But the figures you mention do not appear to be consistent with the NorESM1-M write-up that you link to. That says that the TOA, not TOM, imbalance was 0.086 W/m2: “The global mean net radiation at the TOA averaged over the whole control integration is 0.086Wm−2”.

    Moreover, I note that the 2016 paper “An Energy Conservation Analysis of Ocean Drift in the CMIP5 Global Coupled Models”, DOI: 10.1175/JCLI-D-15-0477.1, states that “It is shown that for many of the models there is a significant disagreement between ocean heat content changes and net top-of-atmosphere radiation”, and that “the greater part of difference between netTOA and dOHC/dt is explained by energy leaks in other [non-ocean] components of the coupled models (e.g., atmosphere, land surface).”

      That says that the TOA, not TOM, imbalance was 0.086 W/m2

      Yes, the use of terminology is confusing (it’s not too surprising that modelers might think of the top of model as the top of atmosphere in general contexts), but you’ll probably be aware that the reported picontrol net TOA flux for NorESM1-M is not 0.086, but greater than 2 (as it states in the paper you linked). So they can’t be talking about the same thing. Later, when they explicitly distinguish TOA and TOM for the 1976-2005 period they refer to a TOM imbalance of 0.5 and SW and LW TOA fluxes which sum to 2.5W/m2.

      So we have four imbalance figures referring to TOA/TOM and 1976-2005/picontrol. The figures are 0.086, 0.5, 2.1, 2.5.

      We know that 0.5 refers to 1976-2005 TOM and 2.5 refers to 1976-2005 TOA. And if you download the NorESM1-M TOA flux CMIP5 files you’ll find 2.1 and 2.5 for picontrol and 1976-2005 respectively. There’s some ambiguity in the description but I think it’s clear that 0.086 can only be referring to TOM.

      The paper you link appears to assume the reported TOA figure (~2.1W/m2) as the NorESM1-M piControl model imbalance, which seems wrong for the purpose of the study. I don’t know whether that affects their results. There may be some models with energy conservation issues, but large reported TOA imbalances appear not to be a good guide to that being the case.

      • paulskio
        Yes, I agree it looks as if the NorESM1-M authors made an error when they said that the TOA imbalance was 0.086 W/m2; they seem to have been confused and should have written TOM. I don’t think it is a case of there being any ambiguity; the authors carefully distinguish the terms TOM and TOA.
        It is not clear to me how the piControl net SW+LW flux at TOA can differ from that at TOM. It seems to imply that the stratosphere above the top of model is not in equilibrium, which can’t be correct.

        Have you come across any other model description papers that make a similar point? I’ve not spotted any.

      • Oddly – piControl refers to the pre-industrial (pi – get it?) atmosphere and the 0.086 refers to this control simulation. In the historical simulation the imbalance is… wait for it… 0.5W/m2. An adjustment is made for TOA flux but this is far from the critical point.

      • niclewis,

        It is not clear to me how the piControl net SW+LW flux at TOA can differ from that at TOM.

        In terms of the internal logic of the model in action the TOM is the model’s TOA – there’s nothing beyond it, but ultimately models have to stop somewhere and that somewhere is not at an altitude/atm. pressure which relates to true TOA. The idea seems to be that obtaining a like-for-like comparison with observational data requires accounting for difference in altitude of the TOM and where satellites are because in reality there is still some radiative transfer occurring in between. So, some kind of adjustment procedure is applied in post-processing as far as I can understand it.

        It seems to imply that the stratosphere above the top of model is not in equilibrium

        In reality we perhaps don’t expect fluxes to be in perfect equilibrium at an altitude equivalent to TOM. That there is a desire for equilibrium at TOM is due to the internal logic of modelling since we don’t want the system to be losing or gaining energy.

        From what little I understand of this the logic of the whole observational comparison adjustment procedure doesn’t quite work for me. If we don’t expect equilibrium at TOM altitude in reality then simulated magnitudes of SW and LW fluxes which do cancel at TOM must be wrong, other than by chance. Therefore, we shouldn’t really expect models to match with observations in this respect. It’s possible that’s something they’ve accounted for though.

        That’s the only model write-up I’ve seen that clearly makes the distinction, though it’s not like I’ve read many of them. The available CCSM4 piControl data indicates a TOA imbalance of just over 2W/m2, whereas the write-up talks about a piControl model imbalance of around 0.1W/m2 and doesn’t make reference to any other values. I assumed it must be standard practice given how little it’s overtly referenced in relation to those models. Could be wrong about that though.

    • “In earth system models, such as NorESM, there are numerous parameters associated with physical parameterizations that can be assigned values within bounds set by empirical or physical reasoning. It is beyond the scope of this study to describe all aspects of parameter tuning in NorESM. Emphasis here will be on the approach to minimize the radiative imbalance at the top of atmosphere (TOA), due to its importance for a stable climate state in CMIP5 long-term experiments…

      “Table 1 provides selected global mean values for the years 1976–2005 of the Historical1 simulation with NorESM along with observations or reanalysis products from recent decades. The net TOA SW flux of NorESM is 234.9 W m−2 and the observations listed in the table are in the range 234.0–244.7 W m−2. It should be noted that the NorESM values are adjusted for the fact that the top of the model is slightly below the TOA seen from satellites (Collins et al., 2006). The actual net downward SW flux at the top of the model is 231.8 W m−2, while the net upward LW flux at the top of model is 231.3 W m−2. Hence, the model experiences an approximate radiative imbalance of +0.5W m−2 at its upper boundary during the years 1976–2005.”

      So they adjust to get the required imbalance. But really, it’s all nuts, as the very same model will give very different answers for very small differences in initial conditions.
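
      The arithmetic in the quoted passage is easy to check – a trivial sketch using only the numbers quoted above:

          # Numbers quoted from the NorESM write-up for 1976-2005 (W/m2).
          tom_net_sw_down = 231.8   # net downward SW at the top of the model
          tom_net_lw_up = 231.3     # net upward LW at the top of the model
          print(tom_net_sw_down - tom_net_lw_up)    # +0.5 W/m2 imbalance at TOM

          toa_net_sw_down = 234.9   # adjusted net SW at satellite-level TOA
          print(toa_net_sw_down - tom_net_sw_down)  # ~3.1 W/m2 of SW adjustment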

  29. Pingback: Computers say climate change will be worse than we thought, again (and again)

  30. Predictability is still a key to real science. With models there is irreducible imprecision arising from uncertainty in observations of current conditions and from the nonlinear set of core fluid-flow equations. Within each model there are 1000’s of equally feasible solutions within the solution space. Reconciling that with the single solution that is arbitrarily chosen and used to represent the model solution space in the CMIP ‘comparisons’ is the theoretical problem most seem content to ignore.

    “AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.” http://www.pnas.org/content/104/21/8709.full

    A posteriori solution behavior is just what it sounds like. Pick a solution from 1000’s in accordance with a posteriori solution behavior.

    Paradoxically, irreducible imprecision in a model can be reduced by better observations, better physics and massively more computing power to reduce grid size.

    Models are not within spitting distance of even theoretically being able to predict the future of climate.
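
    The sensitivity to initial conditions is trivial to demonstrate on a toy system. A minimal Python sketch with the classic Lorenz 1963 equations – two runs differing by one part in a billion in the initial state end up on completely different parts of the attractor:

        import numpy as np
        from scipy.integrate import solve_ivp

        def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t_eval = np.linspace(0.0, 40.0, 4001)
        a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], t_eval=t_eval).y
        b = solve_ivp(lorenz, (0.0, 40.0), [1.0 + 1e-9, 1.0, 1.0], t_eval=t_eval).y

        # Separation of the two trajectories: ~1e-9 at the start,
        # of order the attractor size by the end of the run.
        sep = np.linalg.norm(a - b, axis=0)
        print(sep[0], sep[-1])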

  31. nobodysknowledge

    I put the following questions to Mr Brown on his blog, but I haven’t got any answer yet. Could there be some secret about which models are skilful and which models are useless, according to Brown and Caldeira?
    From Brown presentation
    “We utilize the idea that the models that are going to be the most skillful in their projections of future warming should also be the most skillful in other contexts like simulating the recent past.”
    “Global warming is fundamentally a result of a global energy imbalance at the top of the atmosphere so we chose to assess models in their ability to simulate various aspects of the Earth’s top-of-atmosphere energy budget. We used three variables in particular: reflected solar radiation, outgoing infrared radiation, and the net energy balance.”
    ”We specifically choose to use the Earth’s net top-of-atmosphere energy budget and its two most fundamental components (its reflected shortwave radiation and outgoing longwave radiation) as predictor variables.”
    Premise 1: The most skillful models to predict surface warming for the next century are the models that get the TOA radiation most correct.
    Data: We have good measurements of TOA radiation for the years 2001 to 2015.
    Premise 2: The timespan 2001 to 2015 is representative for TOA radiation. So the models which have the best fit to TOA radiation for the years 2001 to 2015 are the best models to predict future change.
    As Nic Lewis has shown, these are the same models that are most skilful at hindcasting the seasonal OLR radiation for the same years. The authors have the following explanation: “our study indicates that models that simulate the Earth’s recent energy budget with the most fidelity also simulate a larger reduction in the cooling effect from clouds in the future and thus more future warming.”
    So my questions are: 1. Which models are we talking about?
    2. What is the change in net TOA radiation. And how are the SW and LW components changing?
    3. What is the OHC change?
    4. What is the lapse rate change?

  32. Just thought I would cross-post a comment from frank climate at Climate Audit, as it is very relevant to the use of GCMs to project future warming.

    Steve, this citation of Bjorn Stevens could be enlightening when it comes to the ability of the GCMs to replicate the aerosol (direct and indirect) forcing:
    “We are not adverse to the idea that Faer may be more negative than the lower bound of S15, possibly for reasons already stated in that paper. We are averse to the idea that climate models, which have gross and well-documented deficiencies in their representation of aerosol–cloud interactions (cf. Boucher et al. 2013), provide a meaningful quantification of forcing uncertainty. Surely after decades of satellite measurements, countless field experiments, and numerous finescale modeling studies that have repeatedly highlighted basic deficiencies in the ability of comprehensive climate models to represent processes contributing to atmospheric aerosol forcing, it is time to give up on the fantasy that somehow their output can be accepted at face value.”
    Source: http://pubman.mpdl.mpg.de/pubman/item/escidoc:2382803:9/component/escidoc:2464328/jcli-d-17-0034.1.pdf

    The point here is that GCMs are so well known to lack skill in many areas, such as regional climate, the tropical temperature profile, etc., that adjusting them to match recent data can yield uninformative results. For example, if I “adjusted” GCMs to match historical aerosol forcing data, they would show more future warming, because they rely on unrealistically high aerosol forcings to balance high ECSs in order to do a reasonable job of matching historical temperature trends.

  33. What Nic Lewis has done is equivalent to this.
    Let’s say you have fitted a line to 90 points of data and the gradient has an uncertainty. Nic has taken a subset of 10 points, fitted a line to that, found that the spread is smaller, and so, because he conflates spread with skill, he trusts the different gradient that you get by throwing away 80 of 90 data points. Interesting, but misguided, and not at all what the paper was doing. They want to use all 90 points of data.

    • The authors of the paper themselves measured skill by the reduction in magnitude of the spread. So your disagreement is with the authors of the paper, not me. But what the authors did is pretty standard, and makes good sense.
      Note that it is not the spread of predictions per se that is measured, but the RMS error in the predictions – a very logical measure of skill. Perhaps you were confused on this point.
      If including the other eight sets of points makes the average prediction error larger, as it does, that strongly suggests they are adding wrong information and/or noise, rather than useful information, to the predictions.
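
      As a concrete illustration of the metric – hold-one-model-out RMS prediction error, not the spread of the projections themselves – here is a minimal Python sketch with synthetic data (the numbers of models and predictors are illustrative only, not BC17’s actual setup):

          import numpy as np
          from sklearn.linear_model import LinearRegression
          from sklearn.model_selection import LeaveOneOut, cross_val_predict

          rng = np.random.default_rng(0)
          n_models = 36
          X = rng.normal(size=(n_models, 9))             # nine predictors per model (synthetic)
          y = X[:, 0] + 0.1 * rng.normal(size=n_models)  # "warming" driven mainly by predictor 0

          def hold_one_out_rmse(X_subset):
              # Predict each model's y from a regression fitted to all the other models.
              pred = cross_val_predict(LinearRegression(), X_subset, y, cv=LeaveOneOut())
              return np.sqrt(np.mean((pred - y) ** 2))

          # If the extra predictors add noise rather than information, the
          # hold-one-out RMS prediction error is larger when they are included.
          print("all nine predictors:", hold_one_out_rmse(X))
          print("single predictor   :", hold_one_out_rmse(X[:, :1]))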

      • You cannot compare the spread or RMS when you have taken data points away too. That is apples and oranges; besides, you lose skill in the constraints you have removed, so it is less skilful overall too. You should see that. Adding data is adding observational constraints: the more the better. You want to remove all measurements of OSR, so what you are left with would be more wrong in the OSR, but you have deliberately chosen not to care about that. Brown and Caldeira care about constraining the warming estimate with all their nine observation-based measures, and that is the key difference from you.

      • Jim D, I completely agree with your two comments here.

    • Yes, if a modeler removed observations to get a better fit to the models, the skeptics would be up in arms, but Nic Lewis is given a pass. Remarkable.

  34. I have a little non-technical article about this study that also plugs this Climate Etc. post: “Computers say climate change will be worse than we thought, again (and again).”

    Two small excerpts:
    “The alarmist science community lives on studies that claim to find that “It’s worse than we thought” and two beauties have just come out. It is all just computer games but the green press loves it.”

    “As these two studies suggest, alarmist science is models all the way down. I have a little study (https://www.cato.org/blog/climate-modeling-dominates-climate-science) showing that there is more computer modeling in climate science than in all of the rest of science put together. There is very little actual science here, just endless computer games. And it is always worse than we thought.”

    http://www.cfact.org/2017/12/18/computers-say-climate-change-will-be-worse-than-we-thought-again-and-again/

  35. These do not add up to a usable TOA radiative imbalance. The problem is calibration: there is nothing absolute to compare them to. The changes are relatively well characterized, but the absolute values are very approximate. You can see the interactive originals at CERES data products.

    https://watertechbyrie.files.wordpress.com/2017/12/ceres_ebaf-toa_ed4-0_incoming_solar_flux_march-2000tojuly-2017.png

    https://watertechbyrie.files.wordpress.com/2017/12/ceres_ebaf-toa_ed4-0_toa_longwave_flux-all-sky_march-2000tojuly-2017.png

    https://watertechbyrie.files.wordpress.com/2017/12/ceres_ebaf-toa_ed4-0_toa_shortwave_flux-all-sky_march-2000tojuly-2017.png

    The dynamic radiative imbalance is seen overwhelmingly in ocean heat.

    https://watertechbyrie.files.wordpress.com/2017/11/argo-e1512162547947.jpg

    Where the tangent to the curve (d(OHC)/dt) is zero, the radiative imbalance is zero – twice a year with the seasonal cycle, and with a local low point around 2008. A rough sketch of the conversion follows.
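
    The conversion being invoked is easy to sketch. This is a rough illustration only: it assumes essentially all of the imbalance ends up as ocean heat, uses Earth’s surface area of about 5.1e14 m2, and the OHC numbers are placeholders rather than Argo data.

      import numpy as np

      EARTH_AREA = 5.1e14        # m2, Earth's total surface area
      SECONDS_PER_YEAR = 3.156e7

      # Placeholder annual ocean heat content anomalies in joules -
      # substitute a real 0-2000 m Argo OHC series; these values are made up.
      years = np.arange(2005, 2018)
      ohc = 1e22 * np.array([0.0, 0.8, 1.2, 1.1, 1.5, 2.2, 2.9,
                             3.4, 4.1, 5.0, 6.2, 7.5, 8.3])

      # Radiative imbalance ~ d(OHC)/dt spread over the Earth's surface (W/m2),
      # assuming essentially all of the imbalance ends up as ocean heat.
      imbalance = np.gradient(ohc, years * SECONDS_PER_YEAR) / EARTH_AREA

      for yr, n in zip(years, imbalance):
          print(f"{yr}: {n:+.2f} W/m2")
      # Wherever d(OHC)/dt is zero, the implied radiative imbalance is zero.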

    “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic.” Slingo and Palmer, 2011

    So what on Earth are they doing? Jigging the TOA data to equate to a derived quantity that has large natural variability due to a number of factors, including hemispheric land/ocean asymmetry and ocean and atmospheric circulation. Recent ocean warming is largely the result of marine stratocumulus cloud variability in the eastern tropical and sub-tropical Pacific.

    They then compare this to a single realization of a climate model – or even an aggregation of single realizations; it makes little difference. As far as theoretically improbable climate science is concerned, this is in there with the worst of it.

    It was simple enough to conclude some months ago that conditions were in place for a cooler Pacific. Conditions are now in place for an intensifying La Niña over the austral summer, including more negative polar annular modes and a resultant spinning up of sub-polar winds and ocean gyres – physical mechanisms that bias the system toward more upwelling in the eastern Pacific and the resultant global feedbacks.

    The Pacific Ocean will shift state again – within a decade. An intense La Niña might signal a shift in extreme variability – dragon-kings – at a tipping point. The state of the Sun – and links between solar UV/ozone chemistry and polar surface pressure through atmospheric pathways – suggests a transition to a cooler Pacific state is in the wind. Try modelling that.

    • It already switched regimes. We experienced the worst of it. There was a mild downtrend in the GMST from 2005 to 2013, and then poof it was gone. That is the future of Pacific cooling. Ain’t gonna matter.

      • The Pacific state shifts at 20-to-30-year intervals, over at least as long as there are proxies for it. It may shift to another warm state within the decade. If this idea of blocking patterns in both hemispheres changing with solar UV intensity is correct – then that may be unlikely.

      • Start in around 1985. Count to 30. The Eastern Pacific cooled from around 1985 until 2013. So count.

  36. The Pacific state shifts at 20 to 30 year periods …

    Did you get to 28 yet?

    • That’s a singular perspective. He really is a lonesome cowboy.

    • Keep being wrong. It suits you.

    • The pacific surface cooled from 1985? The usual laughable nonsense.

    • Eastern Pacific. That’s a part of the Pacific. It’s like Australia, they have the outback and the outfront, right?

      JC’s ‘forecast’ for the next 5 years: It looks like the AMO may have peaked, and we remain in the cool phase of the PDO with a predominance of La Niña events expected (unlikely to see a return to El Niño dominance in the next decade). I predict we will see continuation of the ‘standstill’ in global average temperature for the next decade, with solar playing a role in this as well.

      See, the AMO doesn’t really do anything to the GMST; it just sort of follows it around.

      As for the ratio of El Niño events to La Niña events, as we can see, a butt kicker El Niño just blew out the entire work of all those La Niña events. Warm is on a roll. Not too late to latch your star to the winner.

  37. “Interdecadal variability of the Pacific sea surface temperatures (SSTs) has been documented in numerous studies (e.g., Mantua et al. 1997; Zhang et al. 1997; Power et al. 1999; Deser et al. 2004; Dai 2013). Various indices (Deser et al. 2004) have been developed to quantitatively describe the ENSO-like multi-decadal climate variations, often referred to as the Pacific Decadal Oscillation (PDO; Mantua et al. 1997) or the Interdecadal Pacific Oscillation (IPO; Zhang et al. 1997; Power et al. 1999; Liu 2012; Dai 2013). The PDO and IPO are essentially the same interdecadal variability (Deser et al. 2004), with the PDO traditionally defined within the North Pacific while the IPO covers the whole Pacific basin. Although the exact mechanism behind the IPO (and PDO) is not well understood, current work (Liu 2012) suggests that the IPO (and PDO) results from changes in wind-driven upper-ocean circulation forced by atmospheric stochastic forcing and its time scale is determined by oceanic Rossby wave propagation in the extratropics.” http://www.cgd.ucar.edu/cas/adai/papers/DongDai-CD2015-IPO.pdf

    The stochastic mechanism appears to be solar UV modulated polar annular modes. And the timing of phase shifts is known. Except by JCH of course.

  38. “Regional climate impacts of a possible future grand solar minimum”
    Sarah Ineson, Amanda C. Maycock, Lesley J. Gray, Adam A. Scaife, Nick J. Dunstone, Jerald W. Harder, Jeff R. Knight, Mike Lockwood, James C. Manners & Richard A. Wood
    Nature Communications 6, Article number: 7535 (2015), doi:10.1038/ncomms8535

    Abstract: “Any reduction in global mean near-surface temperature due to a future decline in solar activity is likely to be a small fraction of projected anthropogenic warming. However, variability in ultraviolet solar irradiance is linked to modulation of the Arctic and North Atlantic Oscillations, suggesting the potential for larger regional surface climate effects. Here, we explore possible impacts through two experiments designed to bracket uncertainty in ultraviolet irradiance in a scenario in which future solar activity decreases to Maunder Minimum-like conditions by 2050. Both experiments show regional structure in the wintertime response, resembling the North Atlantic Oscillation, with enhanced relative cooling over northern Eurasia and the eastern United States. For a high-end decline in solar ultraviolet irradiance, the impact on winter northern European surface temperatures over the late twenty-first century could be a significant fraction of the difference in climate change between plausible AR5 scenarios of greenhouse gas concentrations.” https://www.nature.com/articles/ncomms8535

    Equivalent to the difference between RCP4.5 and RCP6.0, they suggest. Convincing any of this sort that they are wrong seems a hopeless cause. But the experiment is underway.

    • Solar activity has already sunk to well below its long-term average in the last few decades, so arguably, just from probabilities, it has more likelihood of going up than down in the future, and more room to increase than decrease.

      • Probably not.
        The problem is what you define as the future. And as activity.
        Short term – like now, when you made your comment – the actual probability is that it will go down (days, weeks, months), precisely because of the downward trend/impetus.
        Counterbalancing this is a reversion to the mean over decades and centuries, which means that, in the overall future, solar activity is most likely pegged to a mean, with no real likelihood of becoming permanently more or less active on timescales of thousands of years.
        In that grey in-between you will be right, Jim D, just like a broken clock.
        Look at it like El Niño.
        Every displacement upwards carries a decreasing probability that it will be higher tomorrow, but a rapidly increasing probability that in the days, weeks and months afterwards it will turn lower.
        Overall, though, there is no promise that it will go one way or another on long time scales.

      • Yes, I was saying we’re at a century scale low so upwards is more likely at this point.

    • “A number of studies have recently raised the possibility of a near-future descent of solar activity into a new grand minimum state, similar to the grand Maunder minimum (Abreu et al 2008, Lockwood et al 2011, Roth and Joos 2013, Zolotova and Ponyavin 2014). The weakness of the current solar cycle number 24, and the unusually deep minimum in 2008–2009 (Janardhan et al 2011, Lockwood 2011, Nandy et al 2011) support this view.” http://iopscience.iop.org/article/10.1088/1748-9326/11/3/034015

      Anything off the top of his head will do. It is ignorance from a cultural bias. Very odd at the end of the day.

      http://lasp.colorado.edu/home/sorce/files/2011/09/TIM_TSI_Reconstruction-1.png

      • The interesting thing is that they simulate the minimum as a continuation of the current low solar activity through the rest of the century. I say it is more likely to rise, mainly because it is unusually low. They don’t treat the future low activity as something there is evidence for, just as a scenario.

      • It’s called praying. He’s praying for another LIA? Because his politics needs one.

    • “The compositing study of past variations of Φ by Lockwood [2010] found that the chance of Φ falling below Maunder minimum values is 8% for within the next 40 years, rising to 43% for within the next 100 years. The MM data shown in Figure 4 confirms that a descent in peak group sunspot number as rapid as predicted by Barnard et al. [2011] is certainly possible and has occurred in the past. The open solar flux has, by each solar minimum, migrated to the polar photosphere and is thought to act as the seed field for the solar dynamo at the tachocline [Charbonneau, 2005]. This being the case, the decay in FS heralds a continuing slowing-down of the solar dynamo. The entry into the MM shows that the weak solar cycles were inadequate to prevent a fall into a GSMi. It seems that the HCS remained sufficiently tilted [Owens et al., 2011a; Owens and Lockwood, submitted manuscript, 2011] and/or other open flux loss mechanisms were sufficient to ensure that the decay in open flux continued. The study presented here shows that RG and Φ is predictable (R2L(t) > 0.5) for at least 4 and 3 cycles (respectively) into the future and thus, because the amended group sunspot numbers of Vaquero et al. [2011] show that the decay into MM conditions took less than 3 cycles, it should be possible to predict the onset of GSMi conditions.” http://onlinelibrary.wiley.com/doi/10.1029/2011GL049811/full

      The chance of activity falling below Maunder Minimum levels within 40 years has risen to 20%. Unlike Jimmy – I don’t make it up as I go along.

      https://watertechbyrie.files.wordpress.com/2017/01/nao_fig_4.jpg

      There is a direct TSI effect on snow cover in northern Eurasia.

      https://watertechbyrie.files.wordpress.com/2017/12/ao-model.jpg

      There is also a solar UV/ozone chemistry modulation of polar surface pressure through atmospheric pathways – in studies previously cited. There are influences on both storminess and temperature in central England as well as the rest of northern Eurasia and the eastern US. Data shows associated reductions in AMOC from a more negative AO/NAO.

      https://watertechbyrie.files.wordpress.com/2014/06/smeed-fig-72-e1501370181996.png

      These result from wind-driven spinning up of gyres in both the Pacific and Atlantic Oceans. Multi-decadal variability in the Pacific is defined as the Interdecadal Pacific Oscillation (e.g. Folland et al 2002, Meinke et al 2005, Parker et al 2007, Power et al 1999) – a proliferation of oscillations, it seems. The latest Pacific Ocean climate shift in 1998/2001 is linked to increased flow in the north (Di Lorenzo et al, 2008) and the south (Roemmich et al, 2007; Qiu, Bo et al, 2006) Pacific Ocean gyres. Roemmich et al (2007) suggest that mid-latitude gyres in all of the oceans are influenced by decadal variability in the Southern and Northern Annular Modes (SAM and NAM respectively – otherwise known as the AAO and AO) as wind-driven currents in baroclinic oceans (Sverdrup, 1947), with resultant increases in cold-water upwelling, especially on the eastern margin of the Pacific.

      https://watertechbyrie.files.wordpress.com/2017/04/gyre-impact-e1492552484864.png

      Cold sea surfaces result in increases in closed cell marine stratocumulus cloud – with a cooling of the planet. “Closed cell cloud systems have high cloud fraction and are usually shallower, while open cells have low cloud fraction and form thicker clouds mostly over the convective cell walls and therefore have a smaller domain average albedo.4–6 Closed cells tend to be associated with the eastern part of the subtropical oceans, forming over cold water (upwelling areas) and within a low, stable atmospheric marine boundary layer (MBL), while open cells tend to form over warmer water with a deeper MBL.” http://aip.scitation.org/doi/10.1063/1.4973593

      Try modelling that. It is not only that they can’t – the models are chaotic, producing non-unique, divergent solutions from small uncertainties in initial conditions. This has been known for 50-plus years. Do they really not know, or is it a conspiracy of ignorance from a cultural bias? Either way it is all just appalling nonsense. I might add that far from all of science is silent on this.

      We are quite likely to lose the millennial-scale natural warming of the 20th century in the 21st. It is about 50% of the total. So as we go down the inevitable technological pathways of carbon sequestration in soils and ecosystems, and a transition within decades to 21st-century energy sources, it seems there may be little enough to see here. Unless you love the science – I suggest it is time to move on. Of course the spatio-temporal chaos of the Earth system may yet surprise – but these guys don’t have a handle on that either.

      • So the other 80%, the sun just gets more active again as I said. OK, then. I’ll take an 80% chance of that as backing up what I said.

      • It is, as usual, a completely nutty interpretation of science from a cultural bias. The question is not whether solar activity is going up or down – but how far and how fast it will decline on centennial scales. The incompetence of these people is spectacular.

      • RIE, OK, you can change the question if you want. I have said what I am going to in answer.

      • If you knew what the right question was you wouldn’t be such an irrelevant and motivated nuisance.

      • I’ve been saying the same thing consistently and you are all over the place, so I can just give the same comment on solar activity that I gave someone else – we’re at a century scale low so upwards is more likely at this point.

      • You specialize in saying the wrong thing over and over again. We are near millennial highs, notwithstanding 1930s levels at the last solar minimum. The SORCE TSI reconstruction shown above shows this.

        You can’t even get the simplest things correct and have not the slightest clue about the difficult issues.

      • It’s one example of a historical reconstruction based on someone’s guesswork, since TSI is not known before satellites (the historical part is not from the paper mentioned on the plot, by the way). When you just look at TSI peaks you miss the effect of the lengthened cycles, and especially the long minimum around 2009, which is a major part of why cycle 24 is recognized as the weakest in a century – it took 13 years.

      • “The data record, jointly developed by the University of Colorado’s Laboratory for Atmospheric and Space Physics (LASP) and the Naval Research Laboratory (NRL), is constructed from solar irradiance models that determine the changes with respect to quiet sun conditions when facular brightening and sunspot darkening features are present on the solar disk where the magnitude of the changes in irradiance are determined from the linear regression of a proxy magnesium (Mg) II index and sunspot area indices against the approximately decade-long solar irradiance measurements of the Solar Radiation and Climate Experiment (SORCE).” http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-14-00265.1

        It is also consistent with the C and Be cosmogenic isotope record. So nuts to you.

      • Your reference used the present low activity to represent a weak sun in the future, so they know; and yes, the sun is the least active in a century, as you can see from this.
        http://woodfortrees.org/plot/sidc-ssn/from:1900/mean:132/plot/sidc-ssn/from:1900/mean:12
        Meanwhile if you are interested in TSI, you can use the fact that doubling CO2 is about a 1% solar increase which would be 14 W/m2 on your scale there, if you want to compare forcings. Do you still want to dismiss CO2 doubling now?

      • The study calibrated against SORCE data – as I quoted. It is the methodology, and not someone’s guess as you said.

        The quantum – your so-called scale – is the raw measurement, assigned a value accurate to within – perhaps – ±5 W/m2. Geometrically adjusted, the solar input is 1/4 of that. 14 W/m2 seems your usual misguided nonsense. On which world do you think that makes any sense at all?

        The future is inevitably a low-emissions trajectory – on multiple fronts – and warming at some 0.2 degrees C per W/m2 of forcing. And about the same natural cooling from a millennial high. There is no pipeline, except the one you should go hide in before the error of your ways catches up.

      • It’s a model with assumptions prior to the satellite observations. No plots before about 1980 appear in that BAMS paper. They say that the TSI has changed less than 1 W/m2 since the Maunder Minimum, which may be about right, and is a lot less than the equivalent of 14 W/m2 from a CO2 doubling that you want to ignore. The CO2 forcing so far is half a doubling or 7 W/m2 on that scale, so no wonder the warming has taken off and solar effects have not really done much by comparison, but this all still surprises you.

      • If you want to scale TSI to forcing it would be 0.25 for geometry times 0.7 for the albedo effect, which is about 0.2 as a combined factor – but 1% is still 1%. (The arithmetic is sketched below.)
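
        For concreteness, that arithmetic runs as follows (a back-of-envelope check, assuming TSI of about 1361 W/m2 at 1 AU and a planetary albedo of about 0.3):

          TSI = 1361.0      # W/m2, approximate satellite-era mean at 1 AU
          ALBEDO = 0.3      # planetary albedo

          d_tsi = 0.01 * TSI                # a 1% change in TSI: ~13.6 W/m2
          factor = 0.25 * (1.0 - ALBEDO)    # geometry (1/4) x absorbed fraction: ~0.175
          forcing = d_tsi * factor          # ~2.4 W/m2 of radiative forcing
          print(f"1% of TSI = {d_tsi:.1f} W/m2 -> about {forcing:.1f} W/m2 of forcing")
          # For comparison, the conventional forcing for a CO2 doubling is ~3.7 W/m2.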

      • I think he is talking cloud feedback – 0.18 to 1.18 W/m2/K – and then multiplying it by 1/4 for some insane reason.

      • The plot I sourced from SORCE – and it is based, as they say, on this paper. But I was specifically talking about UV/ozone pathways – or have you forgotten? You know – real science? And your 14 W/m2 is literally off the planet – as you are.

      • The plot was not part of the published work, and the scale is around 1400 W/m2 for which 1% would be 14 W/m2, and that scales to 3.5 W/m2 of forcing which is similar to doubling CO2.

      • http://lasp.colorado.edu/home/sorce/files/2011/09/TIM_TSI_Reconstruction-1.png

        Is he suggesting this is not a credible source?

        It is nowhere near 1400 W/m2 – and this is a power flux at the satellite. In terms of W/m2 averaged over the planet – because the planet is vaguely round – the incident radiation is 340 W/m2.

        “Since irradiance variations are apparently minimal, changes in the Earth’s climate that seem to be associated with changes in the level of solar activity—the Maunder Minimum and the Little Ice age for example—would then seem to be due to terrestrial responses to more subtle changes in the Sun’s spectrum of radiative output. This leads naturally to a linkage with terrestrial reflectance, the second component of the net sunlight, as the carrier of the terrestrial amplification of the Sun’s varying output.” http://bbso.njit.edu/Research/EarthShine/literature/Goode_Palle_2007_JASTP.pdf

        The study I started with posits a solar UV/ozone link to polar surface pressure and the resultant variability in blocking patterns. The latter explains much of the temperature variation in the central England record and climate variability across northern Eurasia and the western US.

      • Exactly! 1400 W/m2 is the normal flux on a surface perpendicular to the sun. You got it finally.
        Also yes the sunspot cycle is seen in the global temperature record. It is one to two tenths of a degree. A big deal to some.

      • Actually it does reach 1400 at some times of the year, like now – elliptical orbit, see. Also, no, that graph is guesswork, like the other half-dozen of these that exist with a wide range of values, because those quantities can’t be measured before satellites and linking them to sunspots is very uncertain. You chose this one for some reason. Is it a consensus?

      • http://lasp.colorado.edu/data/sorce/total_solar_irradiance_plots/images/tim_level3_tsi_24hour_640x480.png

        An extra 40 W/m2? That’s nuts.

        And I chose the reconstruction from the SORCE website because it is the most credible. You, on the other hand, are the least credible.

        Others I have seen are similar and it is consistent with the isotope record.

        https://wordpress.com/post/watertechbyrie.com

        Sunspot and isotope records were calibrated against SORCE data – as they said. UV varies considerably more – some 20% from max to min in the 1980s.

      • As long as you realize it is a model, and that the sun really is at its weakest in a century – so upwards is more likely as a future trend, which is the point you are trying to deviate from.

    • I am pretty sure it is called science – something that JCH is utterly unacquainted with.

  39. nobodysknowledge

    The only take-home message I have got from the Brown/Caldeira speculations is that the climate models with the best estimate of (largest) next-century warming are the same models that have the largest reduction in the cooling effect of clouds.
    Brown: “-our study indicates that models that simulate the Earth’s recent energy budget with the most fidelity also simulate a larger reduction in the cooling effect from clouds in the future and thus more future warming.”
    So let’s just cherry-pick models with the greatest cloud warming effect and try to make some science out of it. No, it isn’t that easy. We first have to invent some methods for this cherry-picking. And then present it as science.

  40. Pingback: Greater future global warming (still) inferred from Earth’s recent energy budget | Patrick T. Brown, PhD

  41. Thanks for your response to Nic’s critique. I look forward to the comments from those more expert than me on this highly technical but interesting subject.

    tonyb

  42. Pingback: Greater future global warming (still) predicted from Earth’s recent energy budget | Climate Etc.

  43. Pingback: Reply to Patrick Brown’s response to my article commenting on his Nature paper « Climate Audit

  44. So it’s a toss-up between a simpler, higher-skill approach that predicts low warming, and a more complex, lower-skill approach that predicts very high warming.

    • When you use more observational constraints you get higher warming, but when you toss out 8 of the 9 observational constraints you get less warming. Nic Lewis still has completely the wrong end of the stick on this one. It’s about constraining the models with observational information, and removing observations never improves that.

      • … and removing observations SHOULD never improve that.

        And there’s the rub. Using the skill measure they selected, it does. Something is wrong with the design or methodology.

      • He conflates spread with skill, but that spread comparison means nothing when more observations go into one of the spreads than the other. It has more skill to use all nine sets of observations, because the result is optimized to them all, not just a subset. It is apples and oranges to use the spread as the skill measure without considering how many observations go into each. Nor is it surprising that a subset of observations can give a smaller spread: the Appendix of the Brown reply showed, using synthetic data, that some subsets give less spread even when chosen at random from the whole set – a situation that is clearly just statistical noise and not significant (mimicked in the sketch below). This part was not addressed by Lewis in his response, and really there is no response.
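
        That Appendix experiment is easy to mimic. This is a minimal sketch under my own assumptions – 36 synthetic “models”, nine pure-noise “predictors” reduced to one number each, and ordinary least squares standing in for the paper’s PLS regression:

          import numpy as np

          rng = np.random.default_rng(1)
          n_models, n_pred = 36, 9
          X = rng.normal(size=(n_models, n_pred))  # nine pure-noise "predictors"
          y = rng.normal(size=n_models)            # pure-noise "projected warming"

          def loo_spread(Xs):
              """Standard deviation of hold-one-out predictions from a linear fit."""
              preds = []
              for i in range(n_models):
                  mask = np.arange(n_models) != i
                  A = np.column_stack([Xs[mask], np.ones(mask.sum())])
                  coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
                  preds.append(np.append(Xs[i], 1.0) @ coef)
              return np.std(preds)

          spread_all = loo_spread(X)
          spread_one = [loo_spread(X[:, [j]]) for j in range(n_pred)]
          print(f"spread using all 9 noise predictors: {spread_all:.2f}")
          print(f"spreads using single predictors: {np.round(spread_one, 2)}")
          # The single-predictor spreads routinely come out smaller than the
          # nine-predictor one, even though every predictor is pure noise -
          # so a reduced spread from dropping predictors is not, by itself,
          # evidence of skill.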

  45. The La Niña wants to wish all global coolers and low climate sensitivity believers a Merry Christmas:

    https://i.imgur.com/SvdMtTX.png

    • Thank you. I woke far too early and opened my prezzies. But I have had a little nap now and I am feeling a lot better. What we are looking for in the Niño 3.4 region is sustained 3 month average anomalies less than -0.5 degrees C. What JCH looks for is something akin to divination. He remains hopeful that it is all going to go away and another El Niño will validate both his divination and global warming. It’s cold on the prairie this time of year for a lonesome cowboy and we should cut him some slack.

      Cool conditions continue to intensify in the eastern and central Pacific, driven by cold polar winds spinning up ocean gyres in the north and south in response to low solar activity and high polar surface pressure. The emerging La Niña will continue to intensify over the austral summer, with cool surface and subsurface water currently across much of the Pacific, and will likely persist through much of 2018 if not beyond.

      https://www.esrl.noaa.gov/psd/enso/mei/lanina.png

      In the long term declining solar activity leads to blocking patterns, a more negative AMO, reduced AMOC and plunging northern hemisphere temperatures.

      https://en.wikipedia.org/wiki/Central_England_temperature#/media/File:CET_1659_-_2014_using_Hadley_Centre_Data.png

      In the south the 20th century El Niño intensity and frequency peak will transition to renewed La Niña dominance.

      https://watertechbyrie.files.wordpress.com/2016/02/vance2012-antartica-law-dome-ice-core-salt-content.jpg

      Welcome to the millennia of the girl child.

      • The broken link was CET – but you all know what that is.

      • What you’re getting is a great big fizzler: the 2nd-warmest La Niña in the instrument record, close on the heels of the warmest La Niña in the record. Together, the two of them would melt children’s ice cream cones in minutes.

        This is how your money is talkin’:

        http://www.ospo.noaa.gov/data/sst/anomaly/2011/anomnight.1.3.2011.gif

        And this is how your BS is walkin’:

        http://www.ospo.noaa.gov/data/sst/anomaly/2017/anomnight.12.21.2017.gif

        Warming soon, and it’s not like it’s been cold:

        https://i.imgur.com/JHr6BYm.png

        The Pacific will be red hot by May.

      • You do realize that these models have quite a range? And you have forgotten again about the spring predictability barrier.

        This is the picture in November.

        http://www.ospo.noaa.gov/data/sst/anomaly/2017/anomnight.11.20.2017.gif

        And here is now.

        http://www.ospo.noaa.gov/data/sst/anomaly/2017/anomnight.12.21.2017.gif

        Upwelling is intensifying, and feedbacks will ensure that this continues through the austral summer. The only thing that could turn it around would be if water levels in the western Pacific were at a high geopotential. They are not, and years of recharge are required before the next El Niño.

        “Here we present a decadally resolved continuous sea surface temperature (SST) reconstruction from the IPWP that spans the past two millennia and overlaps the instrumental record, enabling both a direct comparison of proxy data to the instrumental record and an evaluation of past changes in the context of twentieth century trends. Our record from the Makassar Strait, Indonesia, exhibits trends that are similar to a recent Northern Hemisphere temperature reconstruction2. Reconstructed SST was, however, within error of modern values from about AD 1000 to AD 1250, towards the end of the Medieval Warm Period. SSTs during the Little Ice Age (approximately AD 1550–1850) were variable, and 0.5 to 1 degree C colder than modern values during the coldest intervals.” http://users.clas.ufl.edu/rrusso/gly6932/Oppo_etal_Nature09.pdf

        As the Pacific cools over this century, Rayleigh–Bénard convection physics will result in an increase in low-level marine stratocumulus cloud and a declining planetary heat content. You couldn’t expect it to happen all at once, could you? But welcome to the millennia of the girl child.

  46. Pingback: Reply to Patrick Brown’s response to comments on his Nature article | Climate Etc.

  47. Pingback: Reply to Patrick Brown’s response to my article commenting on his Nature paper | Watts Up With That?