Explaining the Discrepancies Between Hausfather et al. (2019) and Lewis&Curry (2018)

by Ross McKitrick

Challenging the claim that a large set of climate model runs published since the 1970s is consistent with observations for the right reasons.

Introduction

Zeke Hausfather et al. (2019) (herein ZH19) examined a large set of climate model runs published since the 1970s and claimed they were consistent with observations, once errors in the emission projections are considered. It is an interesting and valuable paper and has received a lot of press attention. In this post, I will explain what the authors did and then discuss a couple of issues arising, beginning with IPCC over-estimation of CO2 emissions, a literature to which Hausfather et al. make a striking contribution. I will then present a critique of some aspects of their regression analyses. I find that they have not specified their main regression correctly, and this undermines some of their conclusions. Using a more valid regression model helps explain why their findings aren’t inconsistent with Lewis and Curry (2018) which did show models to be inconsistent with observations.

Outline of the ZH19 Analysis:

A climate model projection can be factored into two parts: the implied (transient) climate sensitivity (to increased forcing) over the projection period and the projected increase in forcing. The first derives from the model’s Equilibrium Climate Sensitivity (ECS) and the ocean heat uptake rate. It will be approximately equal to the model’s transient climate response (TCR), although the discussion in ZH19 is for a shorter period than the 70 years used for TCR computation. The second comes from a submodel that takes annual GHG emissions and other anthropogenic factors as inputs, generates implied CO2 and other GHG concentrations, then converts them into forcings, expressed in Watts per square meter. The emission forecasts are based on socioeconomic projections and are therefore external to the climate model.
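This two-factor structure can be illustrated with a toy calculation (my own sketch, not ZH19's code; the 3.7 W/m2 doubling forcing is the conventional value, and the TCR and forcing numbers below are arbitrary):

```python
# Toy sketch: a projection's warming factored into a sensitivity part
# (implied TCR) and a forcing part (projected forcing increase).
F2X = 3.7  # assumed forcing from a doubling of CO2, W/m^2

def implied_warming(tcr_K, delta_forcing_Wm2):
    """Warming implied by a transient sensitivity (TCR, in K per CO2
    doubling) and a projected increase in forcing (W/m^2)."""
    return tcr_K * delta_forcing_Wm2 / F2X

# The same sensitivity (TCR = 1.8 K) implies more warming under an
# overstated forcing path (+1.5 W/m^2) than under the realized one
# (+1.1 W/m^2), which is the error ZH19 try to adjust for.
w_realized = implied_warming(1.8, 1.1)
w_overstated = implied_warming(1.8, 1.5)
```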

ZH19 ask whether climate models have overstated warming once we adjust for errors in the second factor due to faulty emission projections. So it’s essentially a study of climate model sensitivities. Their conclusion, that models by and large generate accurate forcing-adjusted forecasts, implies that models have generally had valid TCR levels. But this conflicts with other evidence (such as Lewis and Curry 2018) that CMIP5 models have overly high TCR values compared to observationally-constrained estimates. This discrepancy needs explanation.

One interesting contribution of the ZH19 paper is their tabulation of the 1970s-era climate model ECS values. The wording in the ZH19 Supplement, which presumably reflects that in the underlying papers, doesn’t distinguish between ECS and TCR in these early models. The reported early ECS values are:

  • Manabe and Wetherald (1967) / Manabe (1970) / Mitchell (1970): 2.3K
  • Benson (1970) / Sawyer (1972) / Broecker (1975): 2.4K
  • Rasool and Schneider (1971): 0.8K
  • Nordhaus (1977): 2.0K

If these really are ECS values they are pretty low by modern standards. It is widely known that the 1979 Charney Report proposed a best-estimate range for ECS of 1.5–4.5K. The follow-up National Academy report in 1983 by Nierenberg et al. noted (p. 2) “The climate record of the past hundred years and our estimates of CO2 changes over that period suggest that values in the lower half of this range are more probable.” So those numbers might be indicative of general thinking in the 1970s. Hansen’s 1981 model considered a range of possible ECS values from 1.2K to 3.5K, settling on 2.8K as the preferred estimate, thus presaging the subsequent use of generally higher ECS values.

But it is not easy to tell if these are meant to be ECS or TCR values. The latter are always lower than ECS, due to slow adjustment by the oceans. Model TCR values in the 2.0–2.4 K range would correspond to ECS values in the upper half of the Charney range.

If the models have high implied ECS values, the fact that ZH19 find they stay in the ballpark of observed surface average warming, once adjusted for forcing errors, suggests it’s a case of being right for the wrong reason. The 1970s were unusually cold, and there is evidence that multidecadal internal variability was a significant contributor to accelerated warming from the late 1970s to 2008 (see DelSole et al. 2011). If the models didn’t account for that, instead attributing everything to CO2 warming, it would require excessively high ECS to yield a match to observations.

With those preliminary points in mind, here are my comments on ZH19.

There are some math errors in the writeup.

The main text of the paper describes the methodology only in general terms. The online SI provides statistical details including some mathematical equations. Unfortunately, they are incorrect and contradictory in places. Also, the written methodology doesn’t seem to match the online Python code. I don’t think any important results hang on these problems, but it means reading and replication is unnecessarily difficult. I wrote Zeke about these issues before Christmas and he has promised to make any necessary corrections to the writeup.

One of the most remarkable findings of this study is buried in the online appendix as Figure S4, showing past projection ranges for CO2 concentrations versus observations:

Bear in mind that, since there have been few emission reduction policies in place historically (and none currently that bind at the global level), the heavy black line is effectively the Business-as-Usual sequence. Yet the IPCC repeatedly refers to its high end projections as “Business-as-Usual” and the low end as policy-constrained. The reality is the high end is fictional exaggerated nonsense.

I think this graph should have been in the main body of the paper. It shows:

  • In the 1970s, models (blue) had a wide spread but on average encompassed the observations (though they pass through the lower half of the spread);
  • In the 1980s there was still a wide spread but now the observations hug the bottom of it, except for the horizontal line which was Hansen’s 1988 Scenario C;
  • Since the 1990s the IPCC consistently overstated emission paths and, even more so, CO2 concentrations by presenting a range of future scenarios, only the minimum of which was ever realistic.

I first got interested in the problem of exaggerated IPCC emission forecasts in 2002 when the top-end of the IPCC warming projections jumped from about 3.5 degrees in the 1995 SAR to 6 degrees in the 2001 TAR. I wrote an op-ed in the National Post and the Fraser Forum (both available here) which explained that this change did not result from a change in climate model behaviour but from the use of the new high-end SRES scenarios, and that many climate modelers and economists considered them unrealistic. The particularly egregious A1FI scenario was inserted into the mix near the end of the IPCC process in response to government (not academic) reviewer demands. IPCC Vice-Chair Martin Manning distanced himself from it at the time in a widely-circulated email, stating that many of his colleagues viewed it as “unrealistically high.”

Some longstanding readers of Climate Etc. may also recall the Castles-Henderson critique which came out at this time. It focused on IPCC misuse of Purchasing Power Parity aggregation rules across countries. The effect of the error was to exaggerate the relative income differences between rich and poor countries, leading to inflated upper end growth assumptions for poor countries to converge on rich ones. Terence Corcoran of the National Post published an article on November 27 2002 quoting John Reilly, an economist at MIT, who had examined the IPCC scenario methodology and concluded it was “in my view, a kind of insult to science” and the method was “lunacy.”

Years later (2012-13) I published two academic articles (available here) in economics journals critiquing the IPCC SRES scenarios. Although global total CO2 emissions have grown quite a bit since 1970, little of this is due to increased average per capita emissions (which have only grown from about 1.0 to 1.4 tonnes C per person), instead it is mainly driven by global population growth, which is slowing down. The high-end IPCC scenarios were based on assumptions that population and per capita emissions would both grow rapidly, the latter reaching 2 tonnes per capita by 2020 and over 3 tonnes per capita by 2050. We showed that the upper half of the SRES distribution was statistically very improbable because it would require sudden and sustained increases in per capita emissions which were inconsistent with observed trends. In a follow-up article, my student Joel Wood and I showed that the high scenarios were inconsistent with the way global energy markets constrain hydrocarbon consumption growth. More recently Justin Ritchie and Hadi Dowladabadi have explored the issue from a different angle, namely the technical and geological constraints that prevent coal use from growing in the way assumed by the IPCC (see here and here).
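The decomposition underlying this argument can be checked with rough numbers (the per-capita figures are from the text; the world population figures are approximate and only illustrative):

```python
import math

# Total emissions = population * per-capita emissions. Per the text,
# per-capita emissions grew only from about 1.0 to 1.4 tC/person since
# 1970, so most emissions growth is a population effect.
pop_1970, pop_2015 = 3.7e9, 7.3e9    # approximate world population
percap_1970, percap_2015 = 1.0, 1.4  # tonnes C per person (from text)

total_1970 = pop_1970 * percap_1970 / 1e9  # GtC per year
total_2015 = pop_2015 * percap_2015 / 1e9  # GtC per year

# Log-growth decomposition: share of total emissions growth that is
# attributable to population growth rather than per-capita growth.
pop_share = math.log(pop_2015 / pop_1970) / math.log(total_2015 / total_1970)
```

With these stand-in numbers roughly two-thirds of the growth in total emissions comes from population, which is consistent with the argument that high scenarios requiring rapid per-capita growth run against the observed trend.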

IPCC reliance on exaggerated scenarios is back in the news, thanks to Roger Pielke Jr.’s recent column on the subject (along with numerous tweets from him attacking the existence and usage of RCP8.5) and another recent piece by Andrew Montford. What is especially egregious is that many authors are using the top end of the scenario range as “business-as-usual”, even after, as shown in the ZH19 graph, we have had 30 years in which business-as-usual has tracked the bottom end of the range.

In December 2019 I submitted my review comments for the IPCC AR6 WG2 chapters. Many draft passages in AR6 continue to refer to RCP8.5 as the BAU outcome. This is, as has been said before, lunacy: another “insult to science”.

Apples-to-apples trend comparisons require removal of Pinatubo and ENSO effects

The model-observational comparisons of primary interest are the relatively modern ones, namely scenarios A–C in Hansen (1988) and the central projections from various IPCC reports: FAR (1990), SAR (1995), TAR (2001), AR4 (2007) and AR5 (2013). Since the comparison uses annual averages in the out-of-sample interval, the latter two time spans are too short to yield meaningful comparisons.

Before examining the implied sensitivity scores, ZH19 present simple trend comparisons. In many cases they work with a range of temperatures and forcings but I will focus on the central (or “Best”) values to keep this discussion brief.

ZH19 find that Hansen 1988-A and 1988-B significantly overstate trends, but not the others. However, I find FAR does as well. SAR and TAR don’t but their forecast trends are very low.

The main forecast interval of interest is from 1988 to 2017. It is shorter for the later IPCC reports since the start year advances. To make trend comparisons meaningful, for the purpose of the Hansen (1988-2017) and FAR (1990-2017) interval comparisons, the 1992 (Mount Pinatubo) event needs to be removed since it depressed observed temperatures but is not simulated in climate models on a forecast basis. Likewise with El Nino events. By not removing these events the observed trend is overstated for the purpose of comparison with models.

To adjust for this I took the Cowtan-Way temperature series from the ZH19 data archive, which for simplicity I will use as the lone observational series, and filtered out volcanic and El Nino effects as follows. I took the IPCC AR5 volcanic forcing series (as updated by Nic Lewis for Lewis&Curry 2018), and the NCEP pressure-based ENSO index (from here). I regressed Cowtan-Way on these two series and obtained the residuals, which I denote as “Cowtan-Way adj” in the following Figure (note both series are shifted to begin at 0.0 in 1988):

The trends, in K/decade, are indicated in the legend. The two trend coefficients are not significantly different from each other (using the Vogelsang-Franses test). Removing the volcanic forcing and El Nino effects causes the trend to drop from 0.20 to 0.15 K/decade. The effect is minimal on intervals that start after 1995. In the SAR subsample (1995-2017) the trend remains unchanged at 0.19 K/decade and in the TAR subsample (2001-2017) the trend increases from 0.17 to 0.18 K/decade.
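The filtering step can be sketched as follows, with synthetic stand-in series in place of the actual Cowtan-Way, volcanic-forcing and ENSO data (all shapes and coefficients below are invented for illustration; this is my reconstruction of the regress-and-take-residuals procedure, not the actual analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
years = np.arange(n, dtype=float)

# Stand-in series (invented for illustration):
volc = np.zeros(n)
volc[4:7] = [-2.0, -1.5, -0.5]   # a Pinatubo-like negative forcing dip
enso = np.sin(0.9 * years)       # a stand-in ENSO pressure index
temp = 0.02 * years + 0.1 * volc + 0.05 * enso + rng.normal(0, 0.03, n)

# Regress temperature on the volcanic and ENSO series, then strip out
# the fitted volcanic/ENSO components to get the adjusted series.
X = np.column_stack([np.ones(n), volc, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
temp_adj = temp - X[:, 1:] @ beta[1:]

trend_raw = np.polyfit(years, temp, 1)[0]      # K/yr
trend_adj = np.polyfit(years, temp_adj, 1)[0]  # K/yr
```

Because the volcanic dip depresses temperatures early in the sample, removing it lowers the fitted trend, mirroring the drop from 0.20 to 0.15 K/decade reported above.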

Here is what the adjusted Cowtan-Way data looks like, compared to the Hansen 1988 series:

The linear trend in the red line (adjusted observations) is 0.15 C/decade, just a bit above H88-C (0.12 C/decade) but well below the H88-A and H88-B trends (0.30 and 0.28 C/decade respectively).

The ZH19 trend comparison methodology is an ad hoc mix of OLS and AR1 estimation. Since the methodology write-up is incoherent and their method is non-standard I won’t try to replicate their confidence intervals (my OLS trend coefficients match theirs however). Instead I’ll use the Vogelsang-Franses (VF) autocorrelation-robust trend comparison methodology from the econometrics literature. I computed trends and 95% CI’s in the two CW series, the 3 Hansen 1988 A,B,C series and the first three IPCC out-of-sample series (denoted FAR, SAR and TAR). The results are as follows:

The OLS trends (in K/decade) are in the 1st column and the lower and upper bounds on the 95% confidence intervals are in the next two columns.

The 4th and 5th columns report VF test scores, for which the 95% critical value is 41.53. In the first two rows, the diagonal entries (906.307 and 348.384) are tests on a null hypothesis of no trend; both reject at extremely small significance levels (indicating the trends are significant). The off-diagonal scores (21.056) test if the trends in the raw and adjusted series are significantly different. It does not reject at 5%.

The entries in the subsequent rows test if the trend in that row (e.g. H88-A) equals the trend in, respectively, the raw and adjusted series (i.e. obs and obs2), after adjusting the sample to have identical time spans. If the score exceeds 41.53 the test rejects, meaning the trends are significantly different.

The Hansen 1988-A trend forecast significantly exceeds that in both the raw and adjusted observed series. The Hansen 1988-B forecast trend does not significantly exceed that in the raw CW series but it does significantly exceed that in the adjusted CW (since the VF score rises to 116.944, which exceeds the 95% critical value of 41.53). The Hansen 1988-C forecast is not significantly different from either observed series. Hence, the only Hansen 1988 forecast that matches the observed trend, once the volcanic and El Nino effects are removed, is scenario C, which assumes no increase in forcing after 2000. The post-1998 slowdown in observed warming ends up matching a model scenario in which no increase in forcing occurs, but does not match either scenario in which forcing is allowed to increase, which is interesting.
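For readers who want to experiment: the VF statistic has no standard Python implementation, so as a rough stand-in (this is not the VF test, and the 41.53 critical value does not apply to it) one can compute an autocorrelation-robust trend and standard error using a Newey-West (HAC) variance:

```python
import numpy as np

def nw_trend(y, lag=3):
    """OLS trend with a Newey-West (HAC) standard error -- a rough
    autocorrelation-robust stand-in for the Vogelsang-Franses test."""
    n = len(y)
    t = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                      # OLS residuals
    xu = X * u[:, None]
    S = xu.T @ xu                         # HAC "meat" with Bartlett weights
    for k in range(1, lag + 1):
        w = 1 - k / (lag + 1)
        G = xu[k:].T @ xu[:-k]
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ S @ XtX_inv             # sandwich variance
    return beta[1], np.sqrt(V[1, 1])      # trend and HAC std. error

# Demo on a synthetic series with a 0.015 K/yr (0.15 K/decade) trend
# plus an autocorrelated oscillation:
years = np.arange(30, dtype=float)
series = 0.015 * years + 0.05 * np.sin(1.1 * years)
trend, se = nw_trend(series)
```

Comparing two trends would then use the difference of slopes with a robust variance; the critical values quoted above apply only to the VF statistic itself.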

The forecast trends in FAR and SAR are not significantly different from the raw Cowtan-Way trends but they do differ from the adjusted Cowtan-Way trends. (The FAR trend also rejects against the raw series if we use GISTEMP, HadCRUT4 or NOAA). The discrepancy between FAR and observations is due to the projected trend being too large. In the SAR case, the projected trend is smaller than the observed trend over the same interval (0.13 versus 0.19). The adjusted trend is the same as the raw trend but the series has less variance, which is why the VF score increases. In the case of CW and Berkeley it rises enough to reject the trend equivalence null; if we use GISTEMP, HadCRUT4 or NOAA neither raw nor adjusted trends reject against the SAR trend.

The TAR forecast for 2001-2017 (0.167 K/decade) never rejects against observations.

So to summarize, ZH19 go through the exercise of comparing forecast to observed trends and, for the Hansen 1988 and IPCC trends, most forecasts do not significantly differ from observations. But some of that apparent fit is due to the 1992 Mount Pinatubo eruption and the sequence of El Nino events. Removing those, the Hansen 1988-A and B projections significantly exceed observations while the Hansen 1988 C scenario does not. The IPCC FAR forecast significantly overshoots observations and the IPCC SAR significantly undershoots them.

In order to refine the model-observation comparison it is also essential to adjust for errors in forcing, which is the next task ZH19 undertake.

Implied TCR regressions: a specification challenge

ZH19 define an implied Transient Climate Response (TCR) as

TCR = F_2x × (dT/dF),

where T is temperature, F is anthropogenic forcing, F_2x is the forcing associated with a doubling of atmospheric CO2, and the derivative is computed as the least squares slope coefficient from regressing temperature on forcing over time. Suppressing the constant term, the regression for model i is simply

T_t = β_i F_t + e_t.

The TCR for model i is therefore TCR_i = 3.7 × β_i, where 3.7 (W/m2) is the assumed equilibrium CO2 doubling coefficient. They find 14 of the 17 implied TCR’s are consistent with an observational counterpart, defined as the slope coefficient from regressing temperatures on an observationally-constrained forcing series.
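A minimal numerical sketch of this calculation (synthetic, noise-free stand-in series; the 1.8 K TCR is an arbitrary assumed value):

```python
import numpy as np

# Implied TCR: the no-constant OLS slope of temperature on forcing,
# scaled by the 3.7 W/m^2 CO2-doubling coefficient.
F2X = 3.7
forcing = np.linspace(0.8, 2.6, 30)  # W/m^2, stand-in anthropogenic forcing
temp = 1.8 * forcing / F2X           # K, built from an assumed TCR of 1.8

slope = (forcing @ temp) / (forcing @ forcing)  # regression slope, no constant
implied_tcr = F2X * slope                       # recovers the assumed 1.8 here
```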

Regarding the post-1988 cohort, unfortunately ZH19 relied on an ARIMA(1,0,0) regression specification, or in other words a linear regression with AR1 errors. While the temperature series they use are mostly trend stationary (i.e. stationary after de-trending), their forcing series are not. They are what we call in econometrics integrated of order 1, or I(1), namely the first differences are trend stationary but the levels are nonstationary. I will present a very brief discussion of this but I will save the longer version for a journal article (or a formal comment on ZH19).

There is a large and growing literature in econometrics journals on this issue as it applies to climate data, with lots of competing results to wade through. On the time spans of the ZH19 data sets, the standard tests I ran (namely Augmented Dickey-Fuller) indicate temperatures are trend-stationary while forcings are nonstationary. Temperatures therefore cannot be a simple linear function of forcings, otherwise they would inherit the I(1) structure of the forcing variables. Using an I(1) variable in a linear regression without modeling the nonstationary component properly can yield spurious results. Consequently it is a misspecification to regress temperatures on forcings (see Section 4.3 in this chapter for a partial explanation of why this is so).
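To illustrate the distinction, here is a toy unit-root check in the spirit of the Dickey-Fuller test (my sketch: no lag augmentation, and the proper Dickey-Fuller critical values are not applied, so it is illustrative only). A trend-stationary series yields a strongly negative statistic, while an I(1) series does not:

```python
import numpy as np

def df_stat(y):
    """Simplified Dickey-Fuller statistic: the t-ratio on y_{t-1} in a
    regression of diff(y) on a constant, a linear trend, and y_{t-1}.
    Strongly negative values point toward trend-stationarity; values
    near zero point toward a unit root. A proper ADF test would add
    lagged differences and use Dickey-Fuller critical values."""
    dy = np.diff(y)
    ylag = y[:-1]
    t = np.arange(len(dy), dtype=float)
    X = np.column_stack([np.ones_like(t), t, ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 3)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[2] / np.sqrt(cov[2, 2])

rng = np.random.default_rng(42)
n = 200
trend_stationary = 0.02 * np.arange(n) + rng.normal(0, 0.1, n)  # like temperature
unit_root = np.cumsum(rng.normal(0.02, 0.1, n))                 # like forcing
```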

How should such a regression be done? Some time series analysts are trying to resolve this dilemma by claiming that temperatures are I(1). I can’t replicate this finding on any data set I’ve seen, but if it turns out to be true it has massive implications including rendering most forms of trend estimation and analysis hitherto meaningless.

I think it is more likely that temperatures are I(0), as are natural forcings, and anthropogenic forcings are I(1). But this creates a big problem for time series attribution modeling. It means you can’t regress temperature on forcings the way ZH19 did; in fact it’s not obvious what the correct way would be. One possible way to proceed is called the Toda-Yamamoto method, but it is only usable when the lags of the explanatory variable can be included, and in this case they can’t because they are perfectly collinear with each other. The main other option is to regress the first differences of temperatures on first differences of forcings, so I(0) variables are on both sides of the equation. This would imply an ARIMA(0,1,0) specification rather than ARIMA(1,0,0).
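The first-difference (ARIMA(0,1,0)-style) estimator can be sketched on synthetic stand-in data (the 1.8 K “true” TCR, the forcing path and the noise level are invented for illustration):

```python
import numpy as np

F2X = 3.7
rng = np.random.default_rng(1)
n = 30

# Synthetic stand-ins: an I(1) forcing path, and a temperature series
# responding with an assumed true TCR of 1.8 K per doubling.
forcing = np.cumsum(rng.normal(0.05, 0.02, n))       # W/m^2
temp = 1.8 * forcing / F2X + rng.normal(0, 0.01, n)  # K

# Regress first differences on first differences, so I(0) variables
# appear on both sides of the equation.
dT, dF = np.diff(temp), np.diff(forcing)
slope = (dF @ dT) / (dF @ dF)  # no-constant OLS in differences
implied_tcr = F2X * slope
```

Even in this clean setting the differenced estimator recovers the assumed TCR only imprecisely, which previews the wide confidence intervals discussed below.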

But this wipes out a lot of information in the data. I did this for the later models in ZH19, regressing each one’s temperature series on each one’s forcing input series, using a regression of Cowtan-Way on the IPCC total anthropogenic forcing series as an observational counterpart. Using an ARIMA(0,1,0) specification except for AR4 (for which ARIMA(1,0,0) is indicated) yields the following TCR estimates:

[Table: implied TCR point estimates with 95% confidence intervals for the Hansen 1988 scenarios and the FAR, SAR, TAR, AR4 and AR5 projections, alongside the observational (OBS) counterparts; not reproduced here.]
The comparison of interest is OBS1 and OBS2 to the H88a–c results, and for each IPCC report the OBS-(startyear) series compared to the corresponding model-based value. I used the unadjusted Cowtan-Way series as the observational counterparts for FAR and after.

In one sense I reproduce the ZH19 findings that the model TCR estimates don’t significantly differ from observed, because of the overlapping spans of the 95% confidence intervals. But that’s not very meaningful since the 95% observational CI’s also encompass 0, negative values, and implausibly high values. They also encompass the Lewis & Curry (2018) results. Essentially, what the results show is that these data series are too short and unstable to provide valid estimates of TCR. The real difference between models and observations is that the IPCC models are too stable and constrained. The Hansen 1988 results actually show a more realistic uncertainty profile, but the TCR’s differ a lot among the three of them (point estimates 1.5, 1.9 and 2.4 respectively) and for two of the three they are statistically insignificant. And of course they overshoot the observed warming.

The appearance of precise TCR estimates in ZH19 is spurious due to their use of ARIMA(1,0,0) with a nonstationary explanatory variable. A problem with my approach here is that the ARIMA(0,1,0) specification doesn’t make efficient use of information in the data about potential long run or lagged effects between forcings and temperatures, if they are present. But with such short data samples it is not possible to estimate more complex models, and the I(0)/I(1) mismatch between forcings and temperatures rules out any simple way of doing the estimation.

Conclusion

The apparent inconsistency between ZH19 and studies like Lewis & Curry 2018 that have found observationally-constrained ECS to be low compared to modeled values disappears once the regression specification issue is addressed. The ZH19 data samples are too short to provide valid TCR values and their regression model is specified in such a way that it is susceptible to spurious precision. So I don’t think their paper is informative as an exercise in climate model evaluation.

It is, however, informative with regards to past IPCC emission/concentration projections and shows that the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions.

I’m grateful to Nic Lewis for his comments on an earlier draft.

Comment from Nic Lewis

These early models only allowed for increases in forcing from CO2, not from all forcing agents. Since 1970, total forcing (per IPCC AR5 estimates) has grown more than 50% faster than CO2-only forcing, so if early model temperature trends and CO2 concentration trends over their projection periods are in line with observed warming and CO2 concentration trends, their TCR values must have been more than 50% above that implied by observations.
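This arithmetic can be made explicit (my illustration with stand-in numbers; only the 1.5 ratio matters, since warming scales as TCR times forcing growth):

```python
# If a model matches observed warming while using only CO2 forcing,
# which grew 1/1.5 as fast as total forcing, its TCR must be about
# 1.5x the value implied by observations against total forcing.
F2X = 3.7                       # W/m^2 per CO2 doubling
observed_warming = 0.9          # K, illustrative
co2_forcing_growth = 1.0        # W/m^2, illustrative
total_forcing_growth = 1.5 * co2_forcing_growth  # grew 50% faster

tcr_model = F2X * observed_warming / co2_forcing_growth    # model, CO2-only
tcr_obs = F2X * observed_warming / total_forcing_growth    # obs, total forcing
ratio = tcr_model / tcr_obs     # = 1.5 regardless of the stand-in values
```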

Moderation note:  As with all guest posts, please keep your comments civil and relevant.

249 responses to “Explaining the Discrepancies Between Hausfather et al. (2019) and Lewis&Curry (2018)”

  1. Reblogged this on Climate Collections and commented:
    Conclusion: “…I don’t think their paper [ZH19] is informative as an exercise in climate model evaluation.

    It is, however, informative with regards to past IPCC emission/concentration projections and shows that the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions.”

    • Jacques Lemiere

      Exaggerated? To exaggerate implies intent, and you can hardly prove that.
      Just say “wrong”.

  2. Ross, this post is of interest to me on several levels, but I will comment on only one point here.

    Using an ARIMA(0,1,0) model to obtain stationarity with a temperature or forcing series does not make sense from a physical point of view, as it implies a random walk. An ARFIMA model with a fractional d value would not imply a random walk, but with a d value less than 0.5 it would imply a long memory model, which might not be applicable and is difficult or impossible to test with a short series.

    What if a secular trend could be extracted from a non-linear and non-stationary series with an empirical method like complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), and the resulting temperature and forcing CEEMDAN trends then regressed on each other?

  3. Hi Ross

    An important summary. I hope you will also submit for publication.

    My comment:

    The acceptance of the surface temperature anomaly as quantitatively robust remains an issue. We have shown several issues with its accuracy on land.

    Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229. http://pielkeclimatesci.wordpress.com/files/2009/10/r-321.pdf

    The clearest issue is the use of minimum temperatures and temperatures at high latitudes in the winter when the surface layer is stably stratified. As shown in

    McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., 117, D14106, doi:10.1029/2012JD017578. Copyright (2012) American Geophysical Union. https://pielkeclimatesci.files.wordpress.com/2013/02/r-371.pdf

    where it is concluded

    “Based on these model analyses, it is likely that part of the observed long-term increase in minimum temperature is reflecting a redistribution of heat by changes in turbulence and not by an accumulation of heat in the boundary layer. Because of the sensitivity of the shelter level temperature to parameters and forcing, especially to uncertain turbulence parameterization in the SNBL, there should be caution about the use of minimum temperatures as a diagnostic global warming metric in either observations or models.”

    This will introduce a warm bias into the surface temperature record when it is used to estimate global warming.

    Best Regards Roger Sr.

  4. “…past IPCC emission/concentration projections and shows that the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions.”

    When we know the ECS and the TCR we aren’t done. Controlling CO2 emissions for one, and land use for another brings in more uncertainty. Where is the control on these two things? Governments and individuals. And governments don’t seem to be reaching a lot of agreement and most individuals in this world don’t care enough to do anything. Because of their economic situation, they can’t.

    Take Antarctic ice sheet collapse: whatever the prediction, it’s based on emissions. So when they say by the year 2050, it’s based upon what individuals and governments do. It’s science based on what people do in the next 10, 20 and 25 years. It’s science that says, if you don’t want this to happen, do this. It is prescriptive. I can’t see that someone can argue otherwise.

    • To attempt to clarify, any study that mentions a year in the future such as 2050, relies on X amount of emissions. No future year can exist without an emission assumption.

    • “It’s science that says, if you don’t want this to happen, do this. It is prescriptive. I can’t see that someone can argue otherwise.”
      An interpretation of the data allows some to predict catastrophe implying that we not only know its cause but have the means to avoid it. I believe you are correct in your interpretation that the predictions are all based on emissions. An alternate view of the data rejects that emissions control temperature or even atmospheric concentration of CO2. Same science, different assumptions, different conclusions. Temperature leads CO2 concentration on all time scales. Solid physical analysis concludes that only about 15% of the recent increase in CO2 is due to human activity. There is no correlation of emissions to temperature. I do not think reduction of fossil fuel use would make any measurable difference in any climate attributes.

  5. @Judith: Gavin Schmidt had a nice thread yesterday summarizing the key scientific evidence behind the CO2-as-main-driver hypothesis. It would be nice to read a qualified response: https://threader.app/thread/1217885474502729728

    • I’m not Judy, however: the question is NOT whether anthropogenic forcing is a main reason for the observed increase of the GMST BUT how much. In other words: what sensitivity do we observe? Following this paper https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019MS001838?af=R which made a plausibility test in fig. 14, it’s clear that every TCR estimate above 1.6 K/2*CO2 gives too much warming:

      The model mean is 1.83! Only 5 models are within a 20% margin, and their mean is 1.45 K/2*CO2, but IMO the whole approach is questionable due to too little skill.

      • So the paper you linked argues that with a TCR of 1.45 K and CO2 as the main driver you can explain all of modern warming since 1880? If so, why does the IPCC assume a much higher TCR?

      • Tobias, a last response: The IPCC doesn’t argue for a “much higher TCR” – indeed it works with an interval of 1K/2*CO2 to 2.5K/2*CO2 – but the CMIP5 climate models do. After evaluating against the observed warming retrospectively to 2018, it seems that the model mean TCR is (much) too high when following the cited paper, which excludes values above 2K/2*CO2 as “unlikely”; see the attached figure.

      • Thanks Frank, but this wasn’t my question. I asked about Gavin Schmidt and the IPCC, not the cmip5 model

    • @Frank: It’s not about “a main driver” but “the main driver”, as the IPCC suggests. If it’s the main driver, it has to fully explain modern warming. In fact it has to explain more than observed warming, as the IPCC includes cooling effects.

      • Tobias, sorry…the TCR approach I mentioned also includes “cooling effects”, e.g. from aerosols. It’s helpful to take notice of the “ERF” data, you’ll find it in AR5 WG1. Good luck for educating.

      • See my response above. Why does the IPCC assume much higher values to explain observed modern warming?

    • If you neglect solar indirect effects and multi-decadal and longer internal variation of ocean circulations, you are easily led to a conclusion of CO2 as main driver.

      • Thanks Judith. Gavin writes in his thread:

        We can also look at the testable, falsifiable, theories that were tested, and failed.
        Solar forcing? Fails the strat cooling test.❌
        Ocean circulation change? Fails the OHC increase test ❌
        Orbital forcing? Fails on multiple levels ❌

        Are these points false, or contested?

      • Surely Willie (Soon) and solar scientists are right about the primacy of the sun. Why? Because the observable real world is the final test of science. And the data – actual evidence – shows that global temperatures follow changes in solar brightness on all time-scales, from decades to millions of years. On the other hand, CO2 and temperature have generally gone their own separate ways on these time scales.

        https://wattsupwiththat.com/2018/12/02/dr-willie-soon-versus-the-climate-apocalypse/

      • We can contrast low solar intensity and Holocene maximum temperature with low solar intensity and the LIA. Both seem associated with more upwelling in the eastern Pacific Ocean – that may indeed be indirectly solar triggered through the Mansurov effect. But changes in solar intensity are minor. To explain the contrast we would need to consider ice, cloud or CO2 changes – or all three and more.

      • curryja:
        “If you neglect solar indirect effects and multi-decadal and longer internal variation of ocean circulations…”

        The AMO is an inverse response to solar wind variability. With stronger solar wind we see global cooling like in the 1970’s, and with weaker solar wind we see global warming like post 1995. That is the most important dynamic in the climate system.

      • Brava, Judith. Bravo, Ordvic

        “Changes in the Earth’s radiation budget are driven by changes in the balance between the thermal emission from the top of the atmosphere and the net sunlight absorbed. The shortwave radiation entering the climate system depends on the Sun’s irradiance and the Earth’s reflectance. Often, studies replace the net sunlight by proxy measures of solar irradiance, which is an oversimplification used in efforts to probe the Sun’s role in past climate change. With new helioseismic data and new measures of the Earth’s reflectance, we can usefully separate and constrain the relative roles of the net sunlight’s two components, while probing the degree of their linkage. First, this is possible because helioseismic data provide the most precise measure ever of the solar cycle, which ultimately yields more profound physical limits on past irradiance variations. Since irradiance variations are apparently minimal, changes in the Earth’s climate that seem to be associated with changes in the level of solar activity – the Maunder Minimum and the Little Ice Age, for example – would then seem to be due to terrestrial responses to more subtle changes in the Sun’s spectrum of radiative output. This leads naturally to a linkage with terrestrial reflectance, the second component of the net sunlight, as the carrier of the terrestrial amplification of the Sun’s varying output. Much progress has also been made in determining this difficult-to-measure, and not-so-well-known quantity. We review our understanding of these two closely linked, fundamental drivers of climate.” http://bbso.njit.edu/Research/EarthShine/literature/Goode_Palle_2007_JASTP.pdf

        The adults, though, are talking indirect effects – in this and in decadal variability.

    • Gavin plus + and minuses x
      “First off, we start with the observations:
      1) spectra from space showing absorption of upward infra-red radiation from the Earth’s surface.
      – yes +

      2) Measurements from around the world showing increases in CO2, CH4, CFCs, N2O.
      -No x
      Fails to mention the biggest GHG, atmospheric water.
      Fails to quantify the amount of increase.
      Fails to mention all possible causes of these increases, some of which are natural in a warming world.

      3) In situ & space based observations of land use change
      No x
      Fails to tie in why valid
      Fails to mention increased vegetation observed from space
      Fails to mention highest CO2 in jungle areas

      We develop theories.
      1) Radiative-transfer (e.g. Manabe and Wetherald, 1967)
      yes+
      2) Energy-balance models (Budyko 1961 and many subsequent papers)
      yes+
      3) GCMs (Phillips 1956, Hansen et al 1983, CMIP etc.)
      Yes +

      We make falsifiable predictions. Here are just a few:
      1967: increasing CO2 will cause the stratosphere to cool
      Blatant half-truth: the corollary was that there should be a detectable hot spot.
      Sorry, that part of the hypothesis was falsified.
      1981: increasing CO2 will cause warming at surface to be detectable by 1990s. No x. “Will cause warming” is correct; “detectable” warming is not. With large natural variability it is quite easy to have such a small signal hidden.
      1988: warming from increasing GHGs will lead to increases in ocean heat content. No brainer
      1991: Eruption of Pinatubo will lead to ~2-3 yrs of cooling. No brainer
      2001: Increases in GHGs will be detectable in space-based spectra
      No brainer; spectra known for centuries.
      2004: Increases in GHGs will lead to continued warming at ~0.18ºC/decade.
      No xxx
      This is so blatantly wrong by Gavin. There are so many predictions of continued warming. Most of them much higher and much scarier and most supported by him. So what does he do?
      Picks out the current warming rate and claims that was the prediction.
      Wimp

      We test the predictions:
      Stratospheric cooling? ✅
      Detectable warming? X. Not detectable CO2 warming, Gavin, no fingerprint of CO2.
      OHC increase? X. Who would know? Vast tracts of made-up data: 0.01–0.2 °C in the upper ocean over 60 years, with a higher yearly SD error multiplied ×60. Don’t claim what you cannot test precisely enough.
      Pinatubo-related cooling?✅
      Space-based changed in IR absorption? ✅
      post-2004 continued warming? Conveniently excluding his little increases in GHG? XXX There is warming, but no link to CO2 is needed for warming or cooling XXX

      With this validated physics, we can estimate contributions to the longer term trends.
      Hold on, the physics is not validated

      This too is of course falsifiable. If one could find a model system that matches all of the previously successful predictions in hindcasts, and gives a different attribution, we could test that. [Note this does not (yet) exist, but let’s keep an open mind].
      xx Like Tamino, Gavin is of course telling others not to prejudge because he has already judged it for them.

      We can also look at the testable, falsifiable, theories that were tested, and failed.
      Solar forcing? Fails the strat cooling test.❌
      Ocean circulation change? Fails the OHC increase test
      Returns to the great misinformation. When all else fails claim unprovable OHC increase.
      When the models fail blame it on Ocean circulation.
      All that missing heat has gone in currents deep under the ocean and will emerge in hundreds of years time
      When the current theory threatens the CO2 theory , flush it down the drain.
      Orbital forcing? Fails on multiple levels ❌
      Clouds? x Can’t trust those Spencer and Christy fellows, heads in the clouds you know.
      If you have a theory that you don’t think has been falsified, or you think you can falsify the mainstream conclusions, that’s great!
      Join us at modifythedata.com and we can make your theory just as good as ours.

      • Thanks angech, that’s a concise rebuttal. But do you agree with his point that “Solar forcing? Fails the strat cooling test.”, or is this also moot?

      • Tobias
        “Thanks angech, that’s a concise rebuttal.”
        Thank you. Appreciated.

        But do you agree with his point that “Solar forcing? Fails the strat cooling test.”, or is this also moot?

        I do not know.
        In default I would defer to Gavin unless Ross or others here wish to contest it.

        I would say that the sun shows a remarkably narrow range of temperature flux, which indicates an extremely well mixed substrate of homogenous material, and while some solar forcing is possible it is unlikely to be a significant cause of temperature variation.

        The reason for “natural” temperature variations over the course of hours to centuries is the incredible mixing of the gases, water, ice, land and subterranean water in the cement mixer of the rotating earth, combined with the “steam” (clouds) that come off from the earth and hide it (change the albedo constantly) from the sun.
        These changes in circulation and distribution of the heat load can cause widespread temperature changes in some of the substrates that can persist for up to hundreds of years despite the constant return of the overall heat input from the sun.

        The concept of solar forcing and providing provenance is arcane and arguable, like the hot spot. If wrong on one does that give him double credit for being right on the other?

      • angech | January 17, 2020 at 4:44 pm

        Thank you angech; that was a good post.

    • ”If so, why does the IPCC assume a much higher TCR?”
      Because if there’s a low TCR, then we can’t see the scary big rises in temperature, so no need for CO2 taxes, indulgences &, indeed, the IPCC itself. A complete collapse in the global scares market, no more Carry on Partyings, Funding for Climastrological research reduced by a logarithmic amount, mass redundancies amongst Climastrological departments at universities & producers of wind & solar subsidy farming equipment. Plenty of politicians trying to salvage their careers & a few billion people asking some very hard questions as to why they’ve paid all this money in taxes to “solve” a non-problem.

    • stevenreincarnated

      He’s being ridiculous. He knows full well that changing ocean currents can both warm the surface and increase OHC at the same time. Did the stratosphere start cooling? The last I’d heard that stopped about 1995, but that doesn’t mean it hasn’t been re-evaluated since the last time I heard.

  6. Ross,
    It’s been a few decades since I had to sign off on any design control documents, but I concur with you that, at a minimum, “this graph should have been in the main body of the paper. It shows:….”

    I would have included some comment about the graph in the Abstract as well.

  7. From Section 1 of The coming cooling: usefully accurate climate forecasting for policy makers. https://journals.sagepub.com/doi/pdf/10.1177/0958305X16686488
    and an earlier accessible blog version at http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html See also https://climatesense-norpag.blogspot.com/2018/10/the-millennial-turning-point-solar.html

    “For the atmosphere as a whole therefore cloud processes, including convection and its interaction with boundary layer and larger-scale circulation, remain major sources of uncertainty, which propagate through the coupled climate system. Various approaches to improve the precision of multi-model projections have been explored, but there is still no agreed strategy for weighting the projections from different models based on their historical performance so that there is no direct means of translating quantitative measures of past performance into confident statements about fidelity of future climate projections. The use of a multi-model ensemble in the IPCC assessment reports is an attempt to characterize the impact of parameterization uncertainty on climate change predictions. The shortcomings in the modeling methods, and in the resulting estimates of confidence levels, make no allowance for these uncertainties in the models. In fact, the average of a multi-model ensemble has no physical correlate in the real world.
    The IPCC AR4 SPM report section 8.6 deals with forcing, feedbacks and climate sensitivity. It recognizes the shortcomings of the models. Section 8.6.4 concludes in paragraph 4 (4): “Moreover it is not yet clear which tests are critical for constraining the future projections, consequently a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed”
    What could be clearer? The IPCC itself said in 2007 that it doesn’t even know what metrics to put into the models to test their reliability. That is, it doesn’t know what future temperatures will be and therefore can’t calculate the climate sensitivity to CO2. This also begs a further question of what erroneous assumptions (e.g., that CO2 is the main climate driver) went into the “plausible” models to be tested anyway. The IPCC itself has now recognized this uncertainty in estimating CS – the AR5 SPM says in Footnote 16 page 16 (5): “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” Paradoxically the claim is still made that the UNFCCC Agenda 21 actions can dial up a desired temperature by controlling CO2 levels. This is cognitive dissonance so extreme as to be irrational. There is no empirical evidence which requires that anthropogenic CO2 has any significant effect on global temperatures.
    The climate model forecasts, on which the entire Catastrophic Anthropogenic Global Warming meme rests, are structured with no regard to the natural 60+/- year and, more importantly, 1,000 year periodicities that are so obvious in the temperature record. The modelers’ approach is simply a scientific disaster and lacks even average commonsense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years beyond an inversion point. The models are generally back-tuned for less than 150 years when the relevant time scale is millennial. The radiative forcings shown in Fig. 1 reflect the past assumptions. The IPCC future temperature projections depend in addition on the Representative Concentration Pathways (RCPs) chosen for analysis. The RCPs depend on highly speculative scenarios, principally population and energy source and price forecasts, dreamt up by sundry sources. The cost/benefit analysis of actions taken to limit CO2 levels depends on the discount rate used and allowances made, if any, for the future positive economic effects of CO2 production on agriculture and of fossil fuel based energy production. The structural uncertainties inherent in this phase of the temperature projections are clearly so large, especially when added to the uncertainties of the science already discussed, that the outcomes provide no basis for action or even rational discussion by government policymakers. The IPCC range of ECS estimates reflects merely the predilections of the modellers – a classic case of “Weapons of Math Destruction” (6).
    Harrison and Stainforth 2009 say (7): “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies. The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality ……. that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.” A new forecasting paradigm is required.”

    • “The climate model forecasts, on which the entire Catastrophic Anthropogenic Global Warming meme rests, are structured with no regard to the natural 60+/- year and, more importantly, 1,000 year periodicities that are so obvious in the temperature record. The modelers approach is simply a scientific disaster and lacks even average common sense”

      Ignoring the evidence of natural variability allows them to put their brains on sterile autopilot. No mess. No questions. No complexity. It’s a great public relations/propaganda strategy, known as fast food science, just stop in for a few seconds and fill up with the latest orders on how to think. An appeal to the lowest common denominator.

      • Good posts.

        One doesn’t need to be a scientist to see that there’s a good amount of so called science that “lacks even average common sense”. It’s probably not in the peripheral vision of many serious scientists how the blame game complex seriously damages science, or even how it works; how brand marketing is sadly leveraged against the best interests of scientific truth, and how entrenched the forces that create it are. But it’s these forces that drive confirmation biases, desires for particular political outcomes, world views; all feeding the consensus “malarkey” beast (thanks Joe for the revival of malarkey, it all comes into focus now).

        Certain “peer consensus” labels add leverage to advance falsehoods, in cases where the label is abused to facilitate biased work for use in branding efforts. It enables a cloak, spearheading irreproachable science (at least where it counts, with politics). Such branded work provides just enough fuel to drive politics, but it’s much louder than real science. This is a basic methodology for how nonsense science usurps real science. These ideas are birthed to the media, who then lobby scientific falsehoods or exaggerations to the public. Some of the false science: climate change causing wildfires; more hurricanes occurring; harsher hurricanes; lower crop yields; higher rates of disease; the list goes on and on, and all of these are designs to coerce policy.

        While I believe there’s a good argument that AGW has caused some of the recent warming, there are yet too many variables that make it unconvincing relative to the advertised “degree”. Science proves CO2 is an aggravator of temperature because it’s a GHG, but I still haven’t seen anything that has ruled out CO2 being primarily a follower of temperature, as demonstrable in the historical record: http://joannenova.com.au/global-warming-2/ice-core-graph/

        Mann’s hockey stick doesn’t look like much of a stick within a 12k year temperature record; it looks like a small blip in fact, a blip that’s only demonstrable because contemporary science has the means to add granularity to data over the last 150 years. If it were possible to add this level of “noise” to an 800k year chart, overlaid with a CO2 chart, my bet is there would probably have to be a lot of “Mann-spraining” done from the massive population of “sticks” at peak temperature cycles. The paleo record is rounded off; all noisy “sticks” are absorbed in the peaks.

      • Sorry, there must have been a freudian slip, “Mann-splaining”, not “spraining”.

  8. Thank-you.

    Just a very minor point you say :”Yet the IPCC repeatedly refers to its high end projections as “Business-as-Usual” and the low end as policy-constrained.”

    It isn’t quite that bad – AR5 WG1 doesn’t use the term as far as I can see, and Riahi et al originally presented it as “RCP 8.5—A scenario of comparatively high greenhouse gas emissions” and only uses “business as usual” qualified by “relatively conservative” and “high-emission”.

    • HAS, I researched how the IPCC named RCP8.5 their “Business as Usual” (BAU) case. As you point out, they didn’t use the term in AR5; instead they began to use the term during press conferences and presentations of AR5 contents in late 2013 and early 2014. The BAU term became the standard they used in interviews and discussions, this was picked up by the media, and almost immediately we began to see papers referring to RCP8.5 as BAU.

      This extended to training material used in universities, and to position statements written by scientific organizations in numerous countries. By 2015 almost all climate change papers referred to RCP8.5 as BAU, and this was also picked up by US government agencies during the Obama administration.

      The period 2013 to 2018 saw a significant number of comments, articles and papers explaining RCP8.5 wasn’t BAU. I myself saturated the comments sections in newspapers and blogs with repetitive remarks about this error. In my case I came at it from tendencies we observed: fossil fuel resources were increasingly more difficult to extract, competing technology prices were dropping, and the assumptions in RCP8.5 didn’t make sense (I’m not going to get into it here, but do remember the RCPs were scenarios prepared to meet an arbitrary IPCC request: they wanted four cases with four forcings, and the team preparing RCP8.5 had to include absurd system behavior to reach the 8.5 watts per m2).

      We can’t blame the RCP8.5 authors because they were asked to deliver the target forcing. But I think a case can be made to accuse IPCC principals of scientific fraud for: 1. using the BAU term for RCP8.5 on a consistent basis, and 2. Failing to inform the scientific community and decision makers that RCP8.5 wasn’t really BAU.

      I think we can also consider the ongoing (but increasingly feeble) defense of RCP8.5 as BAU without a corresponding correction by the IPCC as a sign that it can be considered a political organization with clear political goals, no regard for the quality of its products or their adequate use by the scientific community, and lacking in ethics to such an extent that it deserves to be shut down and replaced by a new organization outside of the UN structure.

      The problems we see with the RCP8.5 use are a symptom of a very serious disease which has pervaded this field for decades, a disease which is now entering the realm of criminal behavior, because faked science is used to justify trillions of dollars in spending which are going to bring hefty profits to certain actors, and give geopolitical advantages to nations such as China and Russia (because they aren’t about to commit economic suicide cutting CO2 emissions to zero).

      Criminal behavior can also be seen in the use of exaggerated alarmism to scare children, and put teenagers on the street asking for political changes (which conveniently demand the end of capitalism and parrot Neomarxist lines about climate justice, the white patriarchate, and etc). Scaring and traumatizing children using false information is a criminal act, and such abuse ought to stop, but we already know that radical political movements will stop at nothing, and unfortunately the climate change problem is now a weapon used by Neomarxist radicals as a means to take over.

      And when we combine the economic harm they will cause with their repressive and social engineering methods, we may be about to see the West fall in the hands of a political faction which may eventually rival Stalin, Mao, Castro, and Chavez when it comes to its innate evil nature.

  9. John Ferguson

    Ross M: ” Using a more valid regression model helps explain why their findings aren’t inconsistent with Lewis and Curry (2018) which did show models to be inconsistent with observations.”

    I cannot understand this.

    • John Ferguson

      Never mind, I think I got it. If you use a more valid regression model, then Zeke’s findings are more parallel to L&C’s. Is that it?

  10. There are assumptions that are insupportable. That models have unique deterministic solutions (McWilliams 2007, Slingo and Palmer 2012). That “the outlines and dimension of anthropogenic climate change are understood and that incremental improvement to and application of the tools used to establish this outline are sufficient to provide society with the scientific basis for dealing with climate change” (Palmer and Stevens 2019). That surface temperature is a measure of surface energy flux (Pielke 2004). That internal variability is short term, self cancelling noise superimposed on a forced signal (Koutsoyiannis 2013).

    The behavior of long term climate series is defined by the Hurst law. A value for the Hurst exponent (H) of 0.5 is the statistical expectation of reversion to the mean. The value of 0.72 calculated by Hurst from 849 years of Nile River flow records reveals an underlying tendency for climate data to cluster around a mean and a variance for a period and then shift to another state. This is best understood in modern terms of dynamical complexity – or given the nature of the Earth system – patterns of spatio-temporal chaos in a turbulent flow field.

    R(n)/S(n) ∝ n^H

    where R(n) is referred to as the adjusted range of cumulative departures of a time series of n values (Table 1), S(n) is the standard deviation and R(n)/S(n) is the rescaled range of cumulative departures.

    e.g. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016WR020078
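The rescaled-range recipe above is easy to sketch in code (a minimal illustration only; `rescaled_range` and `hurst_exponent` are hypothetical helper names, and the dyadic block lengths are an arbitrary choice, not the method of the cited paper):

```python
import numpy as np

def rescaled_range(x):
    """R(n)/S(n): adjusted range of cumulative departures from the
    mean, divided by the sample standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())                  # cumulative departures
    return (z.max() - z.min()) / x.std(ddof=1)   # R(n) / S(n)

def hurst_exponent(x, min_n=8):
    """Estimate H as the slope of log R/S versus log n, averaging
    R/S over non-overlapping blocks of each length n."""
    x = np.asarray(x, dtype=float)
    ns, rs = [], []
    n = min_n
    while n <= len(x) // 2:
        blocks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        ns.append(n)
        rs.append(np.mean([rescaled_range(b) for b in blocks]))
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
# For Gaussian white noise the estimate sits near 0.5 (small-sample
# R/S bias pushes it slightly above); persistent series give H well above 0.5.
print(hurst_exponent(white))
```

Running the same estimator on a random walk (`np.cumsum(white)`) gives a value close to 1, illustrating the kind of departure from the 0.5 expectation that Hurst found in the Nile data.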

    It leads to a perspective on the future evolution of climate in which change and uncertainty are essential parts (Koutsoyiannis, 2013). What would be far more fruitful at this time than the simple math and simpler assumptions of TCR and ECS calculations is a creative metaphysical synthesis such as is essential to the fundamental advancement of science (Baker 2017).

    The emission pathways have – btw – been superseded by new ‘Shared Socioeconomic Pathways’. SSP5 is I suspect intended as a cautionary tale – but it seems to me to be our inevitable future. Whatever the sensitivity.

    “SSP5 Fossil-fueled Development – Taking the Highway (High challenges to mitigation, low challenges to adaptation) This world places increasing faith in competitive markets, innovation and participatory societies to produce rapid technological progress and development of human capital as the path to sustainable development. Global markets are increasingly integrated. There are also strong investments in health, education, and institutions to enhance human and social capital. At the same time, the push for economic and social development is coupled with the exploitation of abundant fossil fuel resources and the adoption of resource and energy intensive lifestyles around the world. All these factors lead to rapid growth of the global economy, while global population peaks and declines in the 21st century. Local environmental problems like air pollution are successfully managed. There is faith in the ability to effectively manage social and ecological systems, including by geo-engineering if necessary.” https://www.sciencedirect.com/science/article/pii/S0959378016300681

    I suggested the other day that we geoengineer the hell out of the place. Some of us have been doing just that with great success for forty years.

    https://judithcurry.com/2020/01/10/climate-sensitivity-in-light-of-the-latest-energy-imbalance-evidence/#comment-907471

    • Curious George

      “The behavior of long term climate series is defined by the Hurst law.” An interesting proposal – and testable. What exactly is a “climate series”? Can we describe climate by a single number? Would it be local or global?

      • Hurst studied the Nile River for sixty years and published his seminal analysis in 1951. A series is a collection of observations made over time – aka a time series. The answer is 42 and it is universal. Now – what is the question? Do you mean 0.72? It is calculated on Nile River data. The value itself is not critical – the departure from the value of 0.5 expected in Gaussian distributions is the point. The method has been applied to many series, from economics to nerve impulses. There have been many attempts at explanations.

        Here’s an explanation in terms of regimes.

        https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1538-4632.1997.tb00947.x

        Julia Slingo and Tim Palmer discuss it in terms of a ‘fractionally dimensioned state space’. It’s just words.

        “Hints that the climate system could change abruptly came unexpectedly from fields far from traditional climatology. In the late 1950s, a group in Chicago carried out tabletop “dishpan” experiments using a rotating fluid to simulate the circulation of the atmosphere. They found that a circulation pattern could flip between distinct modes.” https://history.aip.org/climate/rapid.htm

        I am old enough to remember physical models in a fluid dynamics lab. I like the physicality of it.

        “You can see spatio-temporal chaos if you look at a fast mountain river. There will be vortexes of different sizes at different places at different times. But if you observe patiently, you will notice that there are places where there almost always are vortexes and they almost always have similar sizes – these are the quasi standing waves of the spatio-temporal chaos governing the river. If you perturb the flow, many quasi standing waves may disappear. Or very few. It depends.” https://judithcurry.com/2011/02/10/spatio-temporal-chaos/

        That’s the explanation I favor.
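The regime picture described above can be made concrete with a toy simulation (an illustrative sketch only; the two state means and the 0.99 stay-probability are arbitrary choices, not fitted to any climate series):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov regime process: the series clusters around one
# mean for long stretches, then abruptly shifts to the other.
means = [-1.0, 1.0]   # per-regime means (arbitrary)
p_stay = 0.99         # probability of remaining in the current regime
state, vals = 0, []
for _ in range(5000):
    if rng.random() > p_stay:
        state = 1 - state                        # regime shift
    vals.append(means[state] + rng.standard_normal())
series = np.array(vals)

# Within each regime the noise variance is 1, but the shifting means
# inflate the long-run variance well beyond it - no single summary
# statistic captures both scales of behaviour.
print(series.var())
```

This is the “cluster around a mean and a variance for a period, then shift” behaviour: statistics computed on short windows see one regime, while long-run statistics see something much wider, which is why such series mimic strong persistence.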

      • Yes, rapid climate changes have occurred in prehistory, so without anthropogenic CO2, CH4 etc. Is present human society unable to react to rapid changes? How fast? The 2004 Indian Ocean tsunami was too fast (hours); in 2020 the number of deaths would be less because of new undersea warning systems.

      • Why did rapid climate change(s) occur before?

        What’s evidence those causes are behind modern warming?

      • Why did rapid climate change(s) occur before?
        What’s evidence those causes are behind modern warming?

        In warm times it snows more and ice piles up until it advances. When the ice advances it causes rapid cooling.
        In cold times it snows less and ice depletes until it retreats.
        When the ice retreats it allows rapid warming.

        Simple stuff, that is recorded in ice core data and history.

        We just came out of the Little Ice Age. They claim the warming caused the ice retreat; the ice retreat caused the warming.

        Your ice chest warms when the ice is depleted, climate works the same way. Then the oceans thaw and more evaporation and snowfall rebuilds the ice. If you watch the nightly news, the ice is being rebuilt as we watch. An open Arctic Ocean is necessary to keep Greenland supplied with ice. When the oceans are cold enough, the Arctic freezes and shuts down the ice machine. When the oceans are warm and thawed, the Arctic turns on the ice machine.

        Consensus science builds ice in cold times and removes ice in warm times. They never explain how they get evaporation and snowfall from frozen polar oceans.

      • I focus on change and uncertainty in the context of building prosperity and resilience along the lines of SSP5. But change can be very rapid and involve biology and hydrology.

    • Robert I Ellison: The behavior of long term climate series is defined by the Hurst law. A value for the Hurst exponent (H) of 0.5 is the statistical expectation of reversion to the mean. The value of 0.72 calculated by Hurst from 849 years of Nile River flow records reveals an underlying tendency for climate data to cluster around a mean and a variance for a period and then shift to another state.

      A process requires more than one summary statistic (or its theoretical large sample limit) to “define” its behavior. For a process that shifts from state to state, you would want at minimum the means and variances and autocorrelations within states, and the frequencies of shifting (the distributions of the times between states). With more than 2 states, you would want the state-to-state transition probability matrix as well, and you’d want to know if state-to-state transitions were Markovian.

      • You need to understand that it is about data analysis and not theory or modelling.
        The Hurst exponent is a measure of the departure of the series from an idealized random Gaussian process.

        “We are then in one of those situations, so salutary for theoreticians, in which empirical discoveries stubbornly refuse to accord with theory. All of the researches described above lead to the conclusion that in the long run, E(Rn) should increase like n^0.5 , whereas Hurst’s extraordinarily well documented empirical law shows an increase like n^H where H is about 0.7. We are forced to the conclusion that either the theorists interpretation of their own work is inadequate or their theories are falsely based: possibly both conclusions apply.” Lloyd 1967


        We know what the data did in the past. That’s a given. This is the spatial pattern of a relatively new temperature based Interdecadal Pacific Oscillation index.


        Henley et al. (2015), Climate Dynamics: Henley_ClimDyn_2015.pdf

        It can be plotted.

        It can be mapped.

        What it can’t be is predicted or modelled. There is, as I have said to you, a difference between a hydrologist and a statistician.

        “It also is only natural and perfectly in order for the mathematician to concentrate on the formal geometric structure of a hydrologic series, to treat it as a series of numbers and view it in the context of a set of mathematically consistent assumptions unbiased by the peculiarities of physical causation. It is, however, disturbing if the hydrologist adopts the mathematician’s attitude and fails to see that his mission is to view the series in its physical context, to seek explanations of its peculiarities in the underlying physical mechanism rather than to postulate the physical mechanism from a mathematical description of these peculiarities.” Klemes 1974

      • This is the spatial basis of the TPI IPO index. Shifts in Pacific Ocean circulation cause megadroughts and megafloods across the planet and modulate the global energy budget.

      • groan… drought in Australia…

      • Robert I Ellison: You need to understand that it is about data analysis and not theory or modelling.
        The Hurst exponent is a measure of the departure of the series from an idealized random Gaussian process.

        Either way, in modeling or in data series, one statistic is not sufficient to define a series.

        You need to understand that you wrote a false statement.

      • “By ‘Noah Effect’ we designate the observation that extreme precipitation can be very extreme indeed, and by ‘Joseph Effect’ the finding that a long period of unusual (high or low) precipitation can be extremely long. Current models of statistical hydrology cannot account for either effect and must be superseded.” Benoit B. Mandelbrot James R. Wallis, 1968, Noah, Joseph, and Operational Hydrology

        You need to understand that there is a big picture that shadowing me with trivial nitpicking doesn’t begin to encompass.

      • Robert I Ellison: You need to understand that there is a big picture that shadowing me with trivial nitpicking doesn’t begin to encompass.

It is not trivial to point out that the following sentence is false: The behavior of long term climate series is defined by the Hurst law.

There might be an underlying tendency for climate data to cluster around a mean and a variance for a period and then shift to another state, but such a system cannot be defined by a Hurst coefficient of 0.72.

        It is, however, disturbing if the hydrologist adopts the mathematician’s attitude and fails to see that his mission is to view the series in its physical context, to seek explanations of its peculiarities in the underlying physical mechanism rather than to postulate the physical mechanism from a mathematical description of these peculiarities

        You need to understand that there is a big picture that shadowing me with trivial nitpicking doesn’t begin to encompass.

        There is no good justification for writing and then defending actual mistakes in the mathematical presentations. The Hurst coefficient discredits one particularly simple and naive mathematical model of a time series, and that is all it does. The Hurst exponent is a measure of the departure of the series from an idealized random Gaussian process. Full Stop.

      • The Hurst law was a 1951 revolution in the understanding of geophysical time series. It has been utilized across many fields of study. Implicit in the rescaling method is standard deviation across the sample and variance of a sub-sample. As usual you miss the point in favor of generic – and petty – complaints. You shadow me with nonsense like this until you get especially rude and your comments disappear. Surely a pointless exercise.


        https://www.tandfonline.com/doi/full/10.1080/02626667.2015.1125998

        And I have cited that and other reference above.

      • Robert I Ellison: And I have cited that and other reference above.

Perhaps you are merely having trouble with the concept of “define”. Once you infer from the Hurst coefficient that a complex model is needed, the Hurst coefficient provides little information toward defining a model. That might require, among other things, computing the spectral density function and the partial autocorrelation function.

      • The underlying ‘structured random’ nature of geophysical time series is revealed – for which there is still yet no statistical ‘model’. We have instead data analysis in the natural sciences.

      • Robert I Ellison: The underlying ‘structured random’ nature of geophysical time series is revealed

        Your earlier comment is close to the mark: The Hurst exponent is a measure of the departure of the series from an idealized random Gaussian process. That is a tiny amount of “The underlying ‘structured random’ nature of geophysical time series.” Certainly not a “definition”.

• “The empirical investigation of several geophysical time series indicates that they are composed of segments representing different natural regimes, or periods when events are strongly autocorrelated. Using a data transformation method developed by Hurst, these regimes are differentiated by rescaling the time series and examining the resulting transformed trace for inflections. As regime signals are not completely mixed and have rather long run lengths, Hurst rescaling produces a clustering of extremes of the same sign and elevates the Hurst exponent to values greater than 0.5. These regimes have a characteristic distribution, as defined by the mean and standard deviation, which differ from the statistical characteristics of the complete record.” https://wattsupwiththat.files.wordpress.com/2012/07/sio_hurstrescale-1.pdf

        Yeah right.

      • Robert I Ellison: The behavior of long term climate series is defined by the Hurst law.

        The Hurst “law” has morphed into a procedure:

Robert I Ellison: Using a data transformation method developed by Hurst, these regimes are differentiated by rescaling the time series and examining the resulting transformed trace for inflections. As regime signals are not completely mixed and have rather long run lengths, Hurst rescaling produces a clustering of extremes of the same sign and elevates the Hurst exponent to values greater than 0.5. These regimes have a characteristic distribution, as defined by the mean and standard deviation, which differ from the statistical characteristics of the complete record.”

        Or has it?

        Now the regimes (plural) have “a” characteristic distribution as “defined by” “the mean” and standard deviation. What, no autocorrelation?

        Robert I Ellison: Yeah right.

        In my youth this was called a “snow job”: following up a false statement with a bunch of thematically (associatively?) related stuff that does not show the false statement to have in fact been a true statement.

      • “For some 900 annual time series comprising streamflow and precipitation records, stream and lake levels, Hurst established the following relationship, referred to as Hurst’s Law:

Rn/Sn = (n/2)^H (1)

        where Rn is referred to as the adjusted range of cumulative departures of a time series of n values (Table 1), Sn is the standard deviation, Rn/Sn is referred to as the rescaled range of cumulative departures, and H is a parameter, henceforth referred to as the Hurst coefficient.”

        I have dealt with that. And what I call this is perpetual dissembling.
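The quantities in the quoted definition are straightforward to compute. A minimal sketch (my own illustration, not from any cited paper) of the single-record estimate, solving Rn/Sn = (n/2)^H for H:

```python
import numpy as np

def hurst_single(x):
    """Solve Rn/Sn = (n/2)^H for one whole record:
    H = log(Rn/Sn) / log(n/2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dev = np.cumsum(x - x.mean())   # cumulative departures from the mean
    rn = dev.max() - dev.min()      # adjusted range Rn
    sn = x.std()                    # standard deviation Sn
    return np.log(rn / sn) / np.log(n / 2)

# On pure white noise the estimate sits near 0.5 (with a known upward
# small-sample bias); Hurst's Nile record famously gave about 0.72.
rng = np.random.default_rng(0)
h = hurst_single(rng.standard_normal(4096))
```

This single-record form is only a sketch of the idea; the hydrological literature estimates H across many records and record lengths rather than from one series alone.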

      • Robert I Ellison: “For some 900 annual time series comprising streamflow and precipitation records, stream and lake levels, Hurst

        henceforth referred to as the Hurst coefficient.”

        So you can compute a statistic from time series data. I’m happy for you; I have never disputed that. That statistic does not define the time series.

        The value of 0.72 calculated by Hurst from 849 years of Nile River flow records reveal an underlying tendency for climate data to cluster around a mean and a variance for a period and then shift to another state.

        [ Revealing] that underlying tendency required more than the computation of the Hurst coefficient, namely Hurst’s other data analytic procedures that you referred to.

        You object when I point out your misuse of words, but you thought it was a big deal when I misspelled Ghil as Gihl. We all make mistakes.

      • Still rattling on about what you imagine was a misuse of the word define. Give it a rest.

        “What are the main characteristics and implications of the Hurst-Kolmogorov stochastic dynamics (also known as the Hurst phenomenon, Joseph effect, scaling behaviour, long-term persistence, multi-scale fluctuation, long-range dependence, long memory)?”

        It revealed a big picture you don’t seem to get.

• This was an extremely interesting and informative discussion. RE (Robert Ellison), what you posted was extremely interesting and the broad thrust of it isn’t being challenged. But I agree with MM (Matthew Marler) that precision in how it is articulated IS important. It’s not that MM doesn’t understand the main point and is “dissembling” or “nitpicking”; it’s that someone like me, lurking and absorbing, may take away imprecise wording as a fact, like Chinese whispers.

        It’s how conceptually we approach certain words – it means one thing to a person from one discipline and something different to someone from another, hydrologist versus statistician. Words really really matter. Also, when RE was challenged on the point, the subsequent posts were really interesting (particularly the graphic), giving further detail that illuminated the original post. Minus the barbs – which weren’t necessary.

This looks like a communication problem – we often argue more about the meanings of words than about the ideas we are trying to convey with them.

        Anyway, I thank you both for an interesting discussion.

• “Overall, the “new” HK approach presented herein is as old as Kolmogorov’s (1940) and Hurst’s (1951) expositions. It is stationary (not nonstationary) and demonstrates how stationarity can coexist with change at all time scales. It is linear (not nonlinear) thus emphasizing the fact that stochastic dynamics need not be nonlinear to produce realistic trajectories (while, in contrast, trajectories from linear deterministic dynamics are not representative of the evolution of complex natural systems). The HK approach is simple, parsimonious, and inexpensive (not complicated, inflationary and expensive) and is transparent (not misleading) because it does not hide uncertainty and it does not pretend to predict the distant future deterministically.” https://www.itia.ntua.gr/en/getfile/1001/1/documents/2010JAWRAHurstKolmogorovDynamicsPP.pdf

        I don’t resile at all from describing as defining the pioneering works of Hurst on natural processes and of Kolmogorov on turbulence. These defined departures of natural systems and turbulent flow from expectations of random Gaussian noise – an idea of noise that is still promulgated – but that are best understood in modern terms of dynamical complexity. How would other than a pettifogging statistician express it?

      • Robert I Ellison: It revealed a big picture you don’t seem to get.

LOL! The Hurst coefficient reveals as much of the big picture as a tree does about the forest it dwells in. For the big picture you have to step back from a single statistic.

Robert I Ellison: I don’t resile at all from describing as defining the pioneering works of Hurst on natural processes and of Kolmogorov on turbulence.

        You meant to write something along the lines of: “The HK approach can, with much work and attention to detail and many statistics, be used to characterize the climate time series.” Instead you made a false claim about a single statistic.

        “Overall, the “new” HK approach presented herein is as old as Kolmogorov’s (1940) and Hurst’s (1951) expositions. It is stationary (not nonstationary) and demonstrates how stationarity can coexist with change at all time scales. It is linear (not nonlinear) thus emphasizing the fact that stochastic dynamics need not be nonlinear to produce realistic trajectories (while, in contrast, trajectories from linear deterministic dynamics are not representative of the evolution of complex natural systems).

        I am glad to see that you are back to stochastic dynamics, away from asserting that all climate processes are deterministic. How to tell whether data should be represented by a stationary or nonstationary stochastic process is another problem. What process in climate science has been shown to be stationary? Even in the presence of “abrupt” climate changes?

      • Koutsoyiannis was exploring ideas of stationarity and nonstationarity with reference to geophysical time series. An equivalent idea when climate series are viewed with a God’s eye may be the ergodic theory of dynamical systems. Within which are seen shifts and Hurst-Kolmogorov regimes. Koutsoyiannis has as well – as a practical hydrologist – defined deterministic as predictable and random as not. But everything in the world obeys the laws of classical physics – thus everything is deterministic in principle if not soluble in practice. Tim Palmer and Julia Slingo put it in terms of Lorenzian dynamical complexity.

        “The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.”

        Edward Lorenz in 1969 expressed the problem in terms of the immense computational expense of solving the laws of motion – embodied in the Navier-Stokes equation – in atmosphere and oceans.

        “Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist.”

        So who to believe? Koutsoyiannis with his sophisticated but practical approach to geophysical time series. Where words matter less than the ability to design and fill dams. Or Matthew whose major occupation at CE is shadowing me with pettifogging criticism.

      • You meant to write something along the lines of: “The HK approach can, with much work and attention to detail and many statistics, be used to characterize the climate time series.” Instead you made a false claim about a single statistic.

The method used to empirically derive Hurst’s law, I believe he means.

      • Robert I Ellison: Or Matthew whose major occupation at CE is shadowing me with pettifogging criticism.

        for completeness, i.e. more “nitpicking”, here is the Wikipedia entry for the Hurst Coefficient:

        https://en.wikipedia.org/wiki/Hurst_exponent

The Hurst exponent, H, is defined in terms of the asymptotic behaviour of the rescaled range as a function of the time span of a time series as follows:[6][7]

E[R(n)/S(n)] = C n^H as n → ∞,

where E is the expected value, R(n) is the range of the first n cumulative deviations from the mean, S(n) is their standard deviation, and C is a constant.

        As I wrote it is the limit of the time series E[R(n)/S(n)]. I did not mention the E, which denotes the expected value with respect to a hypothetical distribution. It’s estimated via the mean of the series, not simply the last value.

        Sadly, Wikipedia doesn’t have an entry for “Hurst Law”. Perhaps this is it:
The Hurst exponent is referred to as the “index of dependence” or “index of long-range dependence”. It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction.[5] A value H in the range 0.5–1 indicates a time series with long-term positive autocorrelation, meaning both that a high value in the series will probably be followed by another high value and that the values a long time into the future will also tend to be high. A value in the range 0–0.5 indicates a time series with long-term switching between high and low values in adjacent pairs, meaning that a single high value will probably be followed by a low value and that the value after that will tend to be high, with this tendency to switch between high and low values lasting a long time into the future. A value of H=0.5 can indicate a completely uncorrelated series, but in fact it is the value applicable to series for which the autocorrelations at small time lags can be positive or negative but where the absolute values of the autocorrelations decay exponentially quickly to zero. This is in contrast to the typically power law decay for the 0.5 < H < 1 and 0 < H < 0.5 cases.
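To make that interpretation concrete, here is a rough sketch (my own, with arbitrary parameters and seeds) of the regression-based R/S estimate: the log-log slope of mean rescaled range against window size. Notably, even a short-memory AR(1) process elevates the finite-sample slope above that of white noise, which illustrates why a single H estimate cannot by itself “define” a series:

```python
import numpy as np

def rs_slope(x, min_win=16):
    """Log-log slope of mean rescaled range vs window size
    (a basic R/S estimate of H, no small-sample correction)."""
    pts = []
    n = min_win
    while n <= len(x) // 2:
        rs = []
        for i in range(0, len(x) - n + 1, n):
            c = x[i:i + n]
            if c.std() > 0:
                d = np.cumsum(c - c.mean())
                rs.append((d.max() - d.min()) / c.std())
        pts.append((np.log(n), np.log(np.mean(rs))))
        n *= 2
    xs, ys = zip(*pts)
    return np.polyfit(xs, ys, 1)[0]   # slope of the log-log fit

rng = np.random.default_rng(1)
white = rng.standard_normal(8192)     # no memory: slope near 0.5

ar = np.empty(8192)                   # persistent AR(1), phi = 0.9
ar[0] = 0.0
eps = rng.standard_normal(8192)
for t in range(1, 8192):
    ar[t] = 0.9 * ar[t - 1] + eps[t]

h_white, h_ar = rs_slope(white), rs_slope(ar)   # h_ar exceeds h_white
```

Since AR(1) is short-memory (its autocorrelations decay exponentially, the H=0.5 case in the quoted text), the elevated finite-sample slope shows the estimator alone cannot separate true long-range dependence from ordinary persistence at the scales sampled.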

      • For completeness – Wikipedia is not a source to be relied on. Several claims on this seem inconsistent with the academic literature. But wading in detail through Wikipedia misconceptions is beyond my level of interest.

        More importantly – it says nothing about just what was revealed – the fundamental point – about the behavior of these hydroclimatic series. Mandelbrot called it the Joseph effect and the Noah effect. There are periods with a cluster of similar sized flows – high or low – and the highs and lows can be very high or very low.

        “”We are then in one of those situations, so salutary for theoreticians, in which empirical discoveries stubbornly refuse to accord with theory. All of the researches described above lead to the conclusion that in the long run, E(Rn) should increase like n^0.5, whereas Hurst’s extraordinarily well documented empirical law shows an increase like n^K where K is about 0.7. We are forced to the conclusion that either the theorists interpretation of their own work is inadequate or their theories are falsely based: possibly both conclusions apply.” Lloyd 1967

        It may be called autocorrelation – but that begs the question of physical causality. For that we need to look at spatio-temporal patterns of sea surface temperature.

        “The so-called Hurst phenomenon, detected in many geophysical processes, has been regarded by many as a puzzle. The “infinite memory”, often associated with it, has been regarded as a counterintuitive and paradoxical property. However, it may be easier to perceive the Hurst-Kolmogorov behaviour if one detaches the “memory” interpretation and associates it with the rich patterns apparent in real world phenomena, which are absent in purely random processes. Furthermore, as our senses are more familiar with spatial objects rather than time series, understanding the Hurst-Kolmogorov behaviour becomes more direct and natural when the domain, in which we study a geophysical process, is no longer the time but the 2D space.” https://www.researchgate.net/publication/251473280_Two-dimensional_Hurst-Kolmogorov_process_and_its_application_to_rainfall_fields

        It is a space in which the hydroclimatic time series of Hurst meets the spatial turbulence patterns of Kolmogorov.

        “We are living in a world driven out of equilibrium. Energy is constantly delivered from the sun to the earth. Some of the energy is converted chemically, while most of it is radiated back into space, or drives complex dissipative structures, with our weather being the best known example. We also find regular structures on much smaller scales, like the ripples in the windblown sand, the intricate structure of animal coats, the beautiful pattern of mollusks or even in the propagation of electrical signals in the heart muscle. It is the goal of pattern formation to understand nonequilibrium systems in which the nonlinearities conspire to generate spatio-temporal structures or pattern. Many of these systems can be described by coupled nonlinear partial differential equations, and one could argue that it is the field of pattern formation (that) is trying to find unifying concepts underlying these equations.” http://www.ds.mpg.de/LFPB/chaos

        Statistical niceties sans any attempt at physical interpretations – and that requires exposure to the natural sciences – are pointless.

        “It also is only natural and perfectly in order for the mathematician to concentrate on the formal geometric structure of a hydrologic series, to treat it as a series of numbers and view it in the context of a set of mathematically consistent assumptions unbiased by the peculiarities of physical causation. It is, however, disturbing if the hydrologist adopts the mathematician’s attitude and fails to see that his mission is to view the series in its physical context, to seek explanations of its peculiarities in the underlying physical mechanism rather than to postulate the physical mechanism from a mathematical description of these peculiarities.” Klemeš (1974)

        It puts into perspective the current craze for econometric analysis of geophysical time series.

      • Robert I Ellison: For completeness – Wikipedia is not a source to be relied on. Several claims on this seem inconsistent with the academic literature. But wading in detail through Wikipedia misconceptions is beyond my level of interest.

        It worked for the Helmholtz decomposition, didn’t it?

        And it gave the correct definition of the Hurst coefficient, which you misstated.

        As to Mandelbrot and climate science, that will have to wait until later.

• I did not rely on Wikipedia for Helmholtz decomposition as used in Forget and Ferreira 2019 – Global ocean heat transport dominated by heat export from the tropical Pacific. Though this fiasco is beginning to resemble that insistence that the paper was not about ocean heat transport. Nor do the eminent hydrologists I cite misstate Hurst’s law.

        “By ‘Noah Effect’ we designate the observation that extreme precipitation can be very extreme indeed, and by ‘Joseph Effect’ the finding that a long period of unusual (high or low) precipitation can be extremely long. Current models of statistical hydrology cannot account for either effect and must be superseded. As a replacement, ‘self‐similar’ models appear very promising. They account particularly well for the remarkable empirical observations of Harold Edwin Hurst. The present paper introduces and summarizes a series of investigations on self‐similar operational hydrology.” https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/WR004i005p00909

As for Mandelbrot – what could you possibly say about the behavior of hydroclimatic series in the real world?

      • Robert I Ellison: Nor do the eminent hydrologists I cite misstate Hurst’s law.

        LOL!

        The mistakes were all yours.

  11. Alberto Zaragoza Comendador

    “Bear in mind that, since there have been few emission reduction policies in place historically (and none currently that bind at the global level), the heavy black line is effectively the Business-as-Usual sequence.”

    Even more to the point, one can just look at the evolution of emissions versus economic growth over time, and see if there was a change after the 1990s or 2000s when mitigation policies began to be implemented. The answer is no, there was no change; many countries didn’t bother mitigating emissions, and among the ones who did most implemented ineffective or even counterproductive policies. So emissions over the last 20 years or so have been pretty much as one would have expected if no policies had been implemented.

    (I and most authors talk about “policies” and “governments”, but the situation with corporations is funny too. Everybody and their dog in the corporate world claims to have gone carbon-neutral, and yet the emissions stats somehow don’t notice).

Though the authors don’t push this line very hard in the paper, elsewhere they are now claiming that if the very-high scenarios can now be discarded it’s thanks to emissions mitigation. Which is dead wrong; the fact that the emission forecasts turned out to be too high has nothing to do with emissions mitigation. I explain a bit more here.

Besides, it’s not clear what “airborne fraction” the old climate models worked with, but it’s also established that modern climate models overstate it, so for a given level of CO2 emissions they project too high a concentration. That’s why the model-based concentration forecasts have turned out to be even more wrong than the emissions forecasts, as this article has pointed out. The error in the airborne fraction forecast compounds the error in the emissions forecast.

    And that’s before talking about climate sensitivity itself.

  12. A question to Ross: does it really make sense and is it useful to regard the anthropogenic forcing as a random process instead of a given function of time (possibly with some errors in its estimates)?

  13. ProgramThyself

    The IPCC FAR makes temperature projections from “Scenario A” where emission rates of various substances are tabulated. The values seem pretty close to reality. But ZH19 starts with observed CO2 concentration. So isn’t the paper evaluating just part of the climate model? Shouldn’t the connection between GtC emitted and CO2 also be considered “model physics”?

    • “FAR” = 4th, right?
      If so, it’s over 10 years out of date.

      • ProgramThyself

        I meant First Assessment Report. Sorry if that was the wrong acronym. “Out of date” is needed, because I wanted to understand for myself the performance of longer standing predictions. It’s frustrating that every aspect of climate is controversial.

      • First?? Thanks.

        First is long, long ago. Citing it is pointless. You might as well be citing Bohr’s model of the atom.

      • It’s frustrating that every aspect of climate is controversial.

No, consensus climate science is very wrong and it is encouraging that it is considered controversial. Everyone should be skeptical – trust but verify – but that they cannot do.

        They say warmer is taking away sequestered ice from Greenland and Antarctica. Ice Core Data shows that most of the sequestered ice was deposited during the warmest times. Warm times with thawed oceans are absolutely necessary to increase sequestered ice. There is no ocean or lake effect snowfall when the water is covered with ice.
Cold places are cold enough to keep snowfall frozen. Warm and thawed oceans are necessary to provide the evaporation that causes the snowfall.

        Warm times promote thawed oceans and evaporation and snowfall and increasing sequestered ice.

        Cold times cover oceans with ice and prevent evaporation and snowfall and sequestered ice depletes.

This causes self-correcting climate cycles. Roman warm period followed by a colder period, followed by the Medieval warm period, followed by the Little Ice Age, followed by this modern warm period that is not yet as warm as past warm periods.
        This is recorded in ice core records, other records and history.
        If warm periods could take away all the ice, past warm periods would have already done that.

      • ProgramThyself

        ZH19 also use FAR to mean First Assessment Report. If citing the FAR is pointless, then why did Hausfather et al. do so?

  14. Surely, there are far more incisive methods for analyzing relationships between time-series than regression analysis. Reliance upon the latter is what keeps much of “climate science” from rising above the sandlot level.

  15. Dr David Mannock

As a retired biophysicist armed with a certain physical understanding, I can only say that while I can follow all but the higher-level math, it is no wonder that the climate alarmists say the science is solved. They do not understand this physical level of complexity in the system. Can someone here present a simple point-by-point summary for these people? Discussing this subject at this level is for academic specialists, not for the general public. You have to help yourselves in refuting existing models, the data they use & the numbers the models generate!

    • Why do you think the explanation should be “simple?”

Ultimately it comes down to the absorption spectra of the GHGs, including water vapor. These aren’t simple — there are hundreds of thousands of them in the emission/absorption window. But one has to do the numerical integration nonetheless.

• The notion that the climate system “ultimately…comes down to the absorption spectra of the GHGs” is beyond simple. It’s hopelessly simplistic in its lack of any accounting of moist convection in regulating surface temperatures.

• Just as there are Newtonian and quantum mechanics, so one can study the higher-level climate relationships found in the globe and models (as is being done in this post), or you can build things up from molecular behavior as David Appell suggests.

      The latter is obviously very complex, just like trying to understand human scale mechanics using quantum mechanics. On the other hand the attribution debate, for example, comes down to some relatively simple ideas, of which estimating climate sensitivity is one. This in turn leads to a consideration of the accuracy etc of GCMs, which is another.

    • Discussing this subject at this level is for academic specialists, not for the general public.

      Climate change is really simple. In warmer times, polar oceans thaw and it snows on cold places where sequestered ice grows until it flows faster and causes ice extent increase and cooling.
      In colder times, polar oceans are covered with ice and evaporation and snowfall is reduced and that causes the sequestered ice to deplete as it flows and thins and thaws and keeps land and oceans colder until it runs out of ice. Then the ice retreats and the next warm cycle rebuilds it.

      This is a robust self correcting process.

      Consensus theory promotes a diverging climate. We have history and data that proves it does not work that way.

    • There are simple rules at the heart of climate’s complexity.

      Models have no unique deterministic solution – solutions diverge exponentially from feasible differences in initial conditions. And they miss internal variability – because some things are not able to be expressed in equations that can be programmed into a computer.

      There are a new class of far more interesting models in development.

      e.g. https://www.gfdl.noaa.gov/earth-system-science/

  16. Obvious problems with this blog analysis. For example:

    1) IPCC 1990 FAR included at least 4 emissions scenarios, not just 1. McKitrick cherry-picks the highest emissions scenario, while ignoring the other 3 lower emissions scenarios. That’s what he needs to do to get to his blogpost conclusion that [a conclusion ZH19 do not make, by the way]:
    “It is, however, informative with regards to past IPCC emission/concentration projections and shows that the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions.”

Yet it’s quite easy to check FAR and see those other scenarios. It then becomes clear that we largely followed the GHG-induced radiative forcing from FAR’s scenario B, and ended up with scenario B’s warming trend. That makes sense since, for instance, the IPCC explicitly notes that scenario A [BaU or business-as-usual], the one McKitrick selects, does not include the full projected impact of GHG-limiting policies like the Montreal Protocol. McKitrick makes no mention of that, of course.

    FAR’s projected scenarios:
https://pbs.twimg.com/media/EOgPyxLWAAAgXix?format=png&name=900x900
    [see pages 333 – 337 of FAR: https://www.ipcc.ch/site/assets/uploads/2018/03/ipcc_far_wg_I_full_report.pdf ]

    Observed GHG-induced radiative forcing:
    https://www.esrl.noaa.gov/gmd/aggi/aggi.html [update to: 10.1111/j.1600-0889.2006.00201.x ]
    10.5194/gmd-10-2057-2017 (supplemental figures include additional greenhouse gas concentrations)

2) McKitrick selectively adjusts for volcanic and ENSO effects in the observational analyses, but not Hansen et al. 1988’s (H88’s) model-based projections. His adjustment leaves out, for instance, the cooling effect of drops in total solar irradiance. It also constitutes an apples-to-oranges comparison of the observational analyses vs. model-based projections, especially since McKitrick knows full well that H88’s scenarios include volcanic eruptions:

    “Hansen added in an Agung-strength volcanic event in Scenarios B and C in 2015, which caused the temperatures to drop well below trend, with the effect persisting into 2017. This was not a forecast, it was just an arbitrary guess, and no such volcano occurred.
    Thus, to make an apples-to-apples comparison, we should remove the 2015 volcanic cooling from Scenarios B and C and add the 2015/16 El Nino warming to all three Scenarios.”
    https://judithcurry.com/2018/07/03/the-hansen-forecasts-30-years-later/

    McKitrick also conveniently fails to mention that H88 scenarios B and C include a volcanic eruption in 1995, not just 2015, as clearly stated in section 4.2 on page 9345 of the paper:

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.405.9&rep=rep1&type=pdf

    3) As per the blogpost linked above, McKitrick still avoids adequately addressing the fact that observed forcings fell between H88’s scenarios B and C, predominately because non-CO2 greenhouse gas levels did not increase as much as in scenario B [ex: Montreal Protocol limited CFC levels]. McKitrick previously obscured this point by:

    – [falsely] claiming H88’s scenarios B and C left out non-CO2 GHGs,
    – then [falsely] claiming scenario B includes CFCs and methane, but not the other non-CO2 GHGs, despite it clearly including N2O,
    – and finally [falsely] claiming the issue was how models converted GHG increases to radiative forcing, when the issue was instead that the observed non-CO2 GHG increases were less than scenario B.

    See below for context on McKitrick doing this:
    https://judithcurry.com/2018/07/03/the-hansen-forecasts-30-years-later/#comment-876139

    McKitrick has done no better in this new post. If H88 are right, one would expect warming to fall between scenarios B and C, since that’s where observed radiative forcing fell. And that’s just what happened, regardless of whether McKitrick chooses to admit this. That’s clearly shown in supplemental figure S4 (and figure 1) from ZH19, the supplemental figure right before the one McKitrick chose to show in his blogpost. So McKitrick is clearly aware of this, but remains conveniently silent on its implications.

    There are other obvious problems with this blogpost. But the list above should be enough to show why this blogpost’s conclusion should not be relied upon.

    • Elaboration on some additional problems:

      1) On pages 331 and 337, IPCC 1990 FAR cites their following document on emissions scenarios:

      “Climate change: The IPCC response strategies”

      Click to access ipcc_far_wg_III_full_report.pdf

      Pages xxiii – xxiv of that document explain how FAR’s scenario A (i.e. their BaU, business-as-usual scenario) doesn’t include the full effects of the Montreal Protocol, which limited CFC emissions. That’s important because those CFCs were powerful greenhouse gases FAR included in their warming projections. There’s a well-established literature on the role of the Montreal Protocol in limiting global warming and emissions away from business-as-usual scenarios. For instance:
      10.1038/NGEO1999 ; 10.1002/2017GL074388 ; 10.1029/2008GL034590 ; 10.1073/pnas.0610328104 ; page 27.44 of 10.1175/AMSMONOGRAPHS-D-19-0003.1

      Moreover, the collapse of the Soviet Union not only mitigated methane emissions, but changed land use practices in a way that increased land uptake of CO2, further mitigating net anthropogenic CO2 increases. The CO2 point is covered in sources such as:
      10.1016/j.catena.2015.06.002 ; 10.1088/1748-9326/6/4/045202 ; 10.1111/gcb.12379 ; 10.1002/2013GB004654

      Thus, a combination of policies and non-policy effects mitigated GHG emissions, pushing observed reality away from a BaU scenario. Therefore McKitrick is being misleading when he writes:

      “Bear in mind that, since there have been few emission reduction policies in place historically (and none currently that bind at the global level), the heavy black line is effectively the Business-as-Usual sequence.”

      2) McKitrick side-steps the fact that ZH19 already addressed the issue of some of the earlier models that had lower ECS values. For example, ZH19 note that these models lacked non-CO2 forcings and had the atmosphere too rapidly equilibrate to external forcing:

      “This is likely due to their assumption that the atmosphere equilibrates instantly with external forcing, which omits the role of transient ocean heat uptake (Hansen et al. 1985). However, despite this high implied TCR, a number of the models (e.g. Ma70, Mi70, B70, B75) still end up providing temperature projections in-line with observations as their forcings were on the lower-end of observations due to the absence of any non-CO2 forcing agents in their projections.”

      Click to access inp_Hausfather_ha08910q.pdf

      By avoiding addressing this point, McKitrick can insinuate that the actual explanation is “multidecadal internal variability”, an account that’s already been debunked by much of the published literature. For instance:
      10.1175/JCLI-D-18-0555.1 ; 10.1175/JCLI-D-16-0803.1 ; 10.1126/sciadv.aao5297 ; 10.1007/s00382-016-3179-3 ; 10.1038/s41467-019-13823-w

      3) McKitrick voices surprise at the lower ECS values for some of the earlier model-based projections. But that is not surprising since many of these use energy budget models [EBMs], and EBM-based approaches are already known to often under-estimate longer-term climate sensitivity, even if they are closer to getting shorter-term sensitivity right than they are to getting longer-term sensitivity right. In effect, many EBM-based approaches miss out how much effective climate sensitivity increases from the early phase of warming to the middle phase of warming, consistent with the higher ECS values from paleoclimate data that covers longer periods of warming. These issues are covered in sources such as:
      10.1146/annurev-earth-100815-024150 ; 10.1038/s41561-018-0146-0 ; 10.1029/2018EF000889 ; 10.1038/NCLIMATE3278; 10.1007/s00382-019-04825-x ; 10.1175/JCLI-D-12-00544.1 ; 10.1126/sciadv.1602821 ; 10.5194/acp-18-5147-2018 ; 10.1002/2017GL076468

      So, again, McKitrick cannot use the ECS from these models to leap to his pre-determined, debunked conclusion of “multidecadal internal variability” causing over-estimation of ECS.

      • Alberto Zaragoza Comendador

        “McKitrick voices surprise at the lower ECS values for some of the earlier model-based projections. But that is not surprising since many of these use energy budget models [EBMs], and EBM-based approaches are already known to often under-estimate longer-term climate sensitivity, even if they are closer to getting shorter-term sensitivity right than they are to getting longer-term sensitivity right.”

        Values of climate sensitivity for climate models have been calculated in different manners, but it’s easy (in fact easiest) to find results that are equivalent to the ECS estimates of energy-balance models. These model results average 3º C for a doubling of CO2. To be clear, I mean these estimates make the assumption that climate sensitivity does not vary over time; the papers you cite discuss precisely that – whether sensitivity varies over time or not.

        In climate models sensitivity does seem to increase over time, so that true long-term ECS is above 3ºC. But model ECS as calculated assuming constant sensitivity (also called ECS_hist, ECS_infer or ICS) is still around 3ºC. So it’s well above the values for the old climate models.

        Among the papers you cite:
        -Several have nothing to do with the estimation of ECS_hist for climate models.
        -The Proistosescu and Huybers paper claims an ECS_hist for CMIP5 models of 2.5ºC; there’s a rebuttal here that finds about 3ºC https://climateaudit.org/2017/07/08/does-a-new-paper-really-reconcile-instrumental-and-model-based-climate-sensitivity-estimates/
        -Armour’s 2017 paper likewise finds that, among the 21 models it uses, average ECS_infer (Armour’s term for ECS_hist) is 2.87 ºC.
        -Dessler’s paper isn’t very representative because it only deals with one climate model. But for what it’s worth, in that one case ECS_hist is about 2.8ºC.

        Also: I don’t see in what sentence of the article McKitrick “use[s] the ECS from these models to leap to” any conclusion. This section of McKitrick’s article could be written more clearly, because in the old models TCR = ECS, so there’s no need to speculate about what their ECS ‘would be’ if the figures quoted in Hausfather’s paper represented TCR. But the fact that these models have low ECS values does not imply anything about internal or multidecadal variability, nor does McKitrick make that argument in the article.

      • Again, thanks for the response, Alberto.

        Re: “Values of climate sensitivity for climate models have been calculated in different manners, but it’s easy (in fact easiest) to find results that are equivalent to the ECS estimates of energy-balance models. These model results average 3º C for a doubling of CO2. To be clear, I mean these estimates make the assumption that climate sensitivity does not vary over time; the papers you cite discuss precisely that – whether sensitivity varies over time or not.
        In climate models sensitivity does seem to increase over time, so that true long-term ECS is above 3ºC. But model ECS as calculated assuming constant sensitivity (also called ECS_hist, ECS_infer or ICS) is still around 3ºC.”

        First, I’m going to need you to cite evidence for your claim that “model ECS as calculated assuming constant sensitivity (also called ECS_hist, ECS_infer or ICS) is still around 3ºC.” As I said elsewhere [ https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907997 ], I expect cited evidence for the claims people make on scientific points.

        Second, see the EBM-based estimates of Lewis+Curry, for examples of EBM-based ECS estimates that are biased low: 10.1007/s00382-014-2342-y ; 10.1175/JCLI-D-17-0667.1

        Re: “Among the papers you cite:
        -Several have nothing to do with the estimation of ECS_hist for climate models.”

        Because that was not the sole point being made. As I wrote:

        “In effect, many EBM-based approaches miss out how much effective climate sensitivity increases from the early phase of warming to the middle phase of warming, consistent with the higher ECS values from paleoclimate data that covers longer periods of warming.”

        Thus, for example, I made a point about “ECS values from paleoclimate data that covers longer periods of warming”. Hence the first two papers I cited covering higher ECS estimates on paleoclimate time-scales, in comparison to models.

        Re: “there’s a rebuttal here that finds about 3ºC”
        https://climateaudit.org/2017/07/08/does-a-new-paper-really-reconcile-instrumental-and-model-based-climate-sensitivity-estimates/

        I don’t place much stock in non-peer-reviewed responses on contrarian blogs, for obvious reasons:

        “Like the vast range of other non-peer-reviewed material produced by the denial community, book authors can make whatever claims they wish, no matter how scientifically unfounded.”
        https://journals.sagepub.com/doi/pdf/10.1177/0002764213477096

        Re: “Also: I don’t see in what sentence of the article McKitrick “use[s] the ECS from these models to leap to” any conclusion.”

        Right here:

        “If the models have high interval ECS values, the fact that ZH19 find they stay in the ballpark of observed surface average warming, once adjusted for forcing errors, suggests it’s a case of being right for the wrong reason. The 1970s were unusually cold, and there is evidence that multidecadal internal variability was a significant contributor to accelerated warming from the late 1970s to the 2008 (see DelSole et al. 2011). If the models didn’t account for that, instead attributing everything to CO2 warming, it would require excessively high ECS to yield a match to observations.”

        So, as I said, McKitrick uses the ECS values to help insinuate his pre-determined, debunked conclusion on “multidecadal internal variability”.

        Re: “This section of McKtrick’s article could be written more clearly, because in the old models TCR = ECS, so there’s no need to speculate about what their ECS ‘would be’ if the figures quoted in Hausfather’s paper represented TCR.”

        You’ve cited no evidence that “in the old models TCR = ECS”.

      • Alberto Zaragoza Comendador

        In old models TCR = ECS because the old models don’t have an ocean module: the response of atmospheric temperatures to forcing is instantaneous. In fact this is mentioned in the text from Hausfather’s paper that you quoted above. As one of the paper’s co-authors says:
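        The instantaneous-equilibration point can be made concrete with a minimal energy-budget sketch (my own illustration, not code from ZH19 or H88): writing lam = F_2x / ECS for the feedback parameter and kappa for the ocean heat uptake efficiency, the implied transient response at CO2 doubling is TCR = F_2x / (lam + kappa), which collapses to ECS when kappa = 0, i.e. when there is no ocean module.

```python
def implied_tcr(ecs, kappa, f2x=3.7):
    """Implied transient response at CO2 doubling (K) from the
    standard energy-budget relation TCR = F_2x / (lam + kappa),
    where lam = F_2x / ECS is the feedback parameter (W m^-2 K^-1)
    and kappa is the ocean heat uptake efficiency (W m^-2 K^-1).
    F_2x = 3.7 W m^-2 is the canonical forcing for doubled CO2."""
    lam = f2x / ecs
    return f2x / (lam + kappa)

# kappa ~ 0.7 W m^-2 K^-1 is an illustrative modern-model-scale
# value (an assumption, not a number from either paper):
print(round(implied_tcr(3.0, 0.7), 2))   # TCR well below ECS
# kappa = 0: no ocean heat uptake, instantaneous equilibration,
# so the transient response equals the equilibrium response:
print(round(implied_tcr(3.0, 0.0), 2))   # TCR = ECS = 3.0
```

        With kappa = 0 the model warms to its full equilibrium response as soon as the forcing is applied, which is why the sensitivity figures quoted for the oldest models can be read as either TCR or ECS.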

        Maybe you’re reading the post on a device that has problems displaying tweets. The IPCC’s statement on CO2 emissions from land use is also a tweet, linked to in a comment further below.

      • Re: “In old models TCR = ECS because the old models don’t have an ocean module: the response of atmospheric temperatures to forcing is instantaneous. In fact this is mentioned in the text from Hausfather’s paper that you quoted above.”

        Fair enough. But the other issues remain.

        Re: “The IPCC’s statement on CO2 emissions from land use is also a tweet, linked to in a comment further below.”

        I don’t consider unsourced tweets to be evidence. If someone says the IPCC claims something, then they need to show where the IPCC says it. And there’s no way in the world I consider a Pielke Jr. tweet to be a credible source.

        For your tweet above from Drake, I checked the implied TCR values from the paper and they match the ECS values from the supplementary material. Hence me accepting Drake’s tweeted claims.

        Anyway, I’m off again for awhile. Thanks for the responses.

    • Ross McKitrick

      Atomsk:

      1) ZH19 data includes the following forcings for the FAR projections: FAR_EBM_High_F, FAR_EBM_Low_F, and FAR_EBM_Best_F. Like I said in my post, to keep the discussion brief I used the ‘Best’ one. If you think it is cherry-picked to make the IPCC look bad, that’s a criticism you should direct to the ZH19 authors. Also, Figure S4 is from ZH19 and if you think it doesn’t do justice to the IPCC scenario ranges, again, direct your comments to the authors.

      2 & 3) That part of the discussion is simply a trends comparison. Normalizing for having gotten forcings wrong comes later. But since the models don’t include El Nino and Pinatubo events, I find it interesting to see if removing them from the observational series makes a difference in the trend comparison. If I also remove from Hansen B&C the non-existent volcanic cooling in 2015 it would make the model overshoot even worse.

      3) I showed S4, you’re referring to S3, which shows CO2 followed the H88 A&B projections closely, N2O was close to B but below it, and CH4/CFC11/CFC12 tracked close to Scenario C. It would not be obvious from that chart why one would expect observed temperatures to track closest to C.

      To those who object to me writing a blog post about a published article, presumably you don’t object to people discussing published articles on weblogs generally, but if you do, your remedy is not to read the blog post. I haven’t ruled out writing a formal comment on the article, but as I mentioned at the end of my post, the regression problem is the different order of integration between temperatures and anthropogenic forcings, and that’s a big topic that will need extended discussion.

      At a certain point the radians issue gets a bit stale.

      • Ross

        Some of the criticism of your paper (which I am not competent to comment on technically) seems to be that it is published on a blog. To me that means any discrepancies will be ruthlessly exposed by commentators during a very short and intense period. These are in public view.

        Some here, notably Atomsk, infer that publication in a ‘peer reviewed’ publication automatically makes it superior.

        Am I wrong in thinking that the publication of Hausfather et al.’s article was carried out following the payment of a fee to the journal, with additional responsibilities towards the journal, inasmuch as authors have to agree in turn to examine and peer review other articles?

        The peer reviewing of the Hausfather article is consequently not carried out in the public domain and this, together with any payment and on-going obligations by the authors, does not seem to me to be a process that is scientifically transparent.

        As I say, I have no comment on the technical competence of the original article, just its process, which is not carried out in the blinding public spotlight that blog articles are.

        tonyb

      • Ross,

        Atomski has a long track record of denying the obvious and well documented problems with the peer review system. It’s quite stark actually how prevalent positive bias is, and how completely the science system has failed to control it. That’s why proof-text quote mining from the literature is so misleading.

        I have a list of over 40 papers in top journals on this crisis. Here’s a striking one that should be shocking.

        https://www.nature.com/news/registered-clinical-trials-make-positive-findings-vanish-1.18181

      • Re: “If you think it is cherry-picked to make the IPCC look bad, that’s a criticism you should direct to the ZH19 authors.”

        False, as I already explained elsewhere:

        “McKitrick differed from Hausfather et al. in another way: McKitrick, but not Hausfather et al., claimed that the paper shows the IPCC relied on scenarios that over-estimated GHGs (“the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions”). But that conclusion of McKitrick’s fails, since he conveniently ignored the other 3 lower emissions scenarios the IPCC included. So no, McKitrick did not do the same thing Hausfather et al.; he cherry-picked relative to the conclusion he made, and which Hausfather et al. didn’t. Referencing what Hausfather et al. did is thus no defense of what McKitrick did.”
        https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907973

        “1) [McKitrick] grossly expands the error bars for the observational analyses’ TCR in comparison to FAR’s TCR to avoid confirming FAR’s TCR accuracy, even though Hausfather et al.’s analyses does otherwise and the comparison to FAR scenario B [which I showed before] confirms Hausfather’s et al.’s result without McKitrick’s exaggerated uncertainty”
        https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907997

        Re: “But since the models don’t include El Nino and Pinatubo events, I find it interesting to see if removing them from the observational series makes a difference in the trend comparison. If I also remove from Hansen B&C the non-existent volcanic cooling in 2015 it would make the model overshoot even worse.”

        You’re again conveniently leaving out the fact that Hansen et al. 1988’s (H88’s) model-based projections include a volcanic eruption in 1995, not just 2015. I literally showed this in the comment you’re responding to, and Nick Stokes again pointed it out later as well:

        “In fact, Hansen’s scenarios B and C did include a major eruption in 1995, very closely matching the effect of Pinatubo. It’s effect is evident in his results.”
        https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907928

        So you’re still doing an apples-to-oranges comparison where you remove volcanic effects from the observational analyses, but not from H88’s model-based projections. On top of that, as I said before, you’re unjustifiably selective in what effects you remove, such as not removing the effect of changes in total solar irradiance. Moreover, you also claim “the models don’t include El Nino and Pinatubo events”. But H88’s model-based temperature projections clearly include some level of internal variability, since the relative temperature lines in H88 figure 3a are not as smooth as the forcings temperature curve shown in figure 2:

        http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.405.9&rep=rep1&type=pdf

        That makes your comparison even more of an apples-to-oranges comparison, since you’re removing internal variability from the observational analyses, but not from H88’s model-based projections.

        Re: “3) I showed S4, you’re referring to S3, which shows CO2 followed the H88 A&B projections closely, N2O was close to B but below it, and CH4/CFC11/CFC12 tracked close to Scenario C. It would not be obvious from that chart why one would expect observed temperatures to track closest to C.”

        First, thank you for finally admitting what the GHG concentrations show, something both you and Dr. Christy should have done in your post from over a year ago, instead of making false claims about the observed GHG concentrations vs. H88 scenarios’ projected concentrations. It’s also a walk-back from your initial claim that observed warming should fall between H88’s scenarios A and B:

        “Note that Scenarios A and B also represent upper and lower bounds for non-CO2 forcing as well, since Scenario A contains all trace gas effects and Scenario B contains none. So, we can treat these two scenarios as representing upper and lower bounds on a warming forecast range that contains within it the observed post-1980 increases in greenhouse gases”
        https://judithcurry.com/2018/07/03/the-hansen-forecasts-30-years-later/#comment-876139

        Second, we know what the effect of that GHG concentration difference will be, since we can calculate the radiative forcing from those GHG increases. H88 even gives us equations for how they did that calculation on page 9360 of their paper. ZH19 did that calculation. And since people are apparently OK with non-peer-reviewed sources now, if it means accepting your non-peer-reviewed blogpost, Gavin Schmidt (one of the co-authors of ZH19) did a version of it on RealClimate as well, as have others:


        [from: http://www.realclimate.org/index.php/archives/2018/06/30-years-after-hansens-testimony/ ]

        https://i.guim.co.uk/img/media/c298f0ada7e4991d6e817a4fba72a31f8017503b/0_0_1406_956/master/1406.png?width=620&quality=45&auto=format&fit=max&dpr=2&s=636997b4f9963d7821d85b49e1884e70
        [from: https://www.theguardian.com/environment/climate-consensus-97-per-cent/2018/jun/25/30-years-later-deniers-are-still-lying-about-hansens-amazing-global-warming-prediction ]
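        The CO2 part of that forcing calculation is easy to reproduce in a few lines, using the widely cited simplified expression ΔF = 5.35 ln(C/C0) from Myhre et al. (1998) rather than H88’s own page-9360 fits; the concentration values below are illustrative, not data from any of the linked analyses:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W m^-2) from raising CO2 from c0_ppm to
    c_ppm, via the simplified logarithmic fit dF = 5.35 * ln(C/C0)
    (Myhre et al. 1998). A later approximation, not H88's own
    radiative transfer equations."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Canonical forcing for a doubling of CO2:
print(round(co2_forcing(560.0), 2))   # ~3.71 W m^-2
# Gap between two hypothetical concentration paths, e.g. an
# observed 410 ppm vs. a projected 430 ppm:
print(round(co2_forcing(430.0) - co2_forcing(410.0), 2))   # ~0.25 W m^-2
```

        Analogous logarithmic or square-root fits exist for the other well-mixed GHGs, which is how scenario-vs.-observed concentration differences get turned into the forcing differences shown in the linked figures.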

        Warming is where we expect it to be, given the forcings that occurred (including less observed GHG-induced forcing than in scenario B, due to smaller non-CO2 GHG increases than in scenario B). Thus H88 accurately represented warming per unit of forcing; i.e. they accurately represented shorter-term climate sensitivity.

        Please feel free to inform Dr. Christy of that, as it will join the list of many things Dr. Hansen was right about, including going back to when Dr. Christy used his since-rebutted UAH satellite-based analysis to (falsely) claim the bulk troposphere wasn’t warming:

        “But Hansen believes— based on projections from current radiosonde readings— that the troposphere will continue to warm. The discrepancy that Christy found, he says, will disappear as climate models and measurements improve.
        Christy thinks it equally likely that the Earth’s surface will cool.”

        https://www.discovermagazine.com/environment/the-gospel-according-to-john#.UP3fMX3LdRw

      • Ross, Atomski has a track record of denial of the serious problems in peer review and science generally that make his proof text mining approach quite invalid.

        I have a list of over 40 articles in top-flight journals documenting how serious positive and deletion bias are in the literature. Here’s a striking one.

        https://www.nature.com/news/registered-clinical-trials-make-positive-findings-vanish-1.18181

      • Oh, and for if+when my reply to you makes it through moderation, I’ll add this quickly before I go for awhile:

        In my reply I said H88’s model-based projections include internal variability, since the relative temperature lines in H88 figure 3a are not as smooth as the forcings temperature curve shown in H88 figure 2. Hansen, in his 2006 follow-up to H88 (H06), confirms this by noting that there’s internal variability in H88’s model-based temperature trend projections:

        10.1073/pnas.0606291103 :
        “Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world.”

        Click to access 14288.full.pdf

        So yes, you are engaged in an apples-to-oranges comparison when you use an ENSO adjustment to remove some internal variability from the observational analyses, but not from H88’s model-based projections. Over the longer-term, the effects of the shorter-term internal variability should even out as it becomes overcome by the longer-term anthropogenic forcing, allowing one to leave the internal variability in for both the model-based projections and the observational analyses. Thus leaving internal variability in should be a better comparison for ZH19’s time-period of 1988 – 2017, but not as much for H06’s time-period of 1988 – 2005.

      • Re: “Some here, notably Atomsk, infer that publication in a ‘peer reviewed’ publication automatically makes it superior.”

        Competent peer review means that experts have gone over the paper, though they don’t have to agree with what the paper says. On a contrarian blog like this one, McKitrick can post whatever he wants, even if competent experts would easily correct him on it. For example, I already went over the false claim he initially posted about how scenario B for Hansen et al. 1988 did not include non-CO2 trace gases. Such a statement would likely not have made it through competent peer review, since it’s so easily falsified; all one would need to do to falsify it is read Hansen et al. 1988.

        So as noted elsewhere:

        “Like the vast range of other non-peer-reviewed material produced by the denial community, book authors can make whatever claims they wish, no matter how scientifically unfounded.”
        https://journals.sagepub.com/doi/pdf/10.1177/0002764213477096

        Re: “Am I wrong in thinking that the publication of Hausfather et als article was carried out following the payment of a fee to the journal, with additional responsibilities towards the journal, in as much authors have to agree in turn to examine and peer review other articles?”

        Do you have a shred of evidence to actually support that claim? Because so far, it looks like you’re abusing implication by question, which is explained at a layman’s level here:

        “One form of misleading discourse involves presupposing and implying something without stating it explicitly, by phrasing it as a question.”
        https://en.wikipedia.org/wiki/Complex_question

        In my experience, it would be extremely unusual for a journal to require that authors who submit papers also review for the journal. In fact, I haven’t seen this happen. After all, for instance, many papers have numerous authors, and it would be quite difficult to get all of them to serve as reviewers. And even if one focused only on the lead author / corresponding author, many of them just don’t want to be reviewers. A competent journal won’t want to miss out on well-done research, just because the lead author is too busy (or unwilling) to review papers on its behalf.

        You’re also likely misunderstanding the incentive structure and business model here. Geophysical Research Letters (GRL), the journal ZH19 is published in, is a very high-tier, reputable journal, as reflected in its impact factor and history of publications. Of course, it costs money to run the journal. But a large chunk of the money doesn’t come from charging authors who submit papers, but instead charging libraries, universities, researchers, etc., who pay for subscriptions to the journal.

        And here’s the crucial part: many of those people want to subscribe to journals that produce meaningful research, not trash. For instance, if I’m a cancer researcher, then I’m not going to pay to access a trash, predatory journal that’s known for posting dubious nonsense on cancer. I’m going to pay to access the better journals, since they are more likely to give me better information on advancements in my field. Parallel point for a cancer institute paying for access to a journal, or a medical library paying, or… Thus this business model incentivizes journals to publish competent research and maintain competent peer review, so that people are willing to pay for the journal.

        That contrasts with a lot of predatory open-access journals, which don’t ask for paid subscriptions, but instead charge very high costs to authors who submit papers. Such predatory journals don’t care about paper quality; they want a large quantity of papers, so they can make more money off the authors submitting papers. So many open-access journals will charge a fee much higher than the $500 fee GRL charges; GRL’s small fee can usually be easily covered by the funding that led to the research anyway.

        Are there downsides to this system, and could it be better? Sure. For instance, it makes it harder for people to access papers. But with the advent of stuff like Sci-hub, one can access almost any paper one wants anyway. That allows everyday people to access the papers, while keeping the incentive for large institutions to pay for subscriptions, since those institutions technically shouldn’t be using sources like Sci-hub. Furthermore, GRL allows authors to make their papers publicly available, if the authors pay $2500 (to offset the money GRL loses from not being able to charge for access to the paper).

        And anyway, this peer review system works well enough and is reliable enough. It’s the peer review system that undergirds many of the medical, engineering, etc. technologies you rely on today. So one doesn’t get to claim that system is unreliable when one has continued to rely on it working. To put it another way: David Young (dpy6629) complains that the tree limb that has supported him from birth isn’t reliable enough to support him.

        Re: “The peer reviewing of the Hausfather article is consequently not carried out in the public domain and this, together with any payment and on-going obligations by the authors, does not seem to me to be a process that is scientifically transparent.
        As I say, I have no comment on the technical competence of the original article, just its process, which is not carried out in the blinding public spotlight that blog articles are”

        First, you provided no evidence of “on-going obligations”.

        Second, private, anonymous peer review is a feature, not a bug. One idea behind it is to allow reviewers to speak their minds freely about the paper, without fear of reprisal. Some journals are switching to a more open peer review system where they publicly reveal an under-review manuscript, for everyone to look at. That system has flaws as well, especially if there’s a lack of anonymity for reviewers and no assurance that at least some competent experts reviewed the paper. Just having a bunch of non-experts publicly look at a manuscript isn’t helpful for catching technical issues, as your own comment shows. Such open peer review also increases the risk of the authors being scooped by other researchers, if the research is rejected for publication at that journal after an extended process.

        Third, peer-reviewed papers undergo post-publication review, which often includes attempts at replication by other researchers, comment papers, criticisms in the peer-reviewed literature, etc. Hence the peer-reviewed work of Lindzen, Spencer, Christy, McKitrick, Curry, etc. being repeatedly debunked in the peer-reviewed literature. So no, the ‘public, post-publication review’ going on in this blog is not some advantage it holds over the peer-review system. If anything, the current peer-review system is better, since it ensures that at least some of the post-publication review is done by competent experts, not politically-motivated, non-expert contrarians online who will assent to almost any nonsense if they think it will help them object to mainstream science on AGW.

      • atomsk

        Thanks for your response

        I did provide a link regarding author fees etc. two days ago, but repeat it here as a direct link for your convenience, as no doubt it got lost in the flurry of comments:

        https://www.agu.org/Publish-with-AGU/Publish/Author-Resources/Publication-fees

        I have no comment on the technical competence or merits of the original article, just its process, which is not carried out in the blinding public spotlight that blog articles are.

        In that respect I think blog articles are very useful and can quickly gain very informed criticism very rapidly. However if the article is ‘important’ and moves on knowledge, I do agree that it should then go forward for peer review and subsequent publication.

        Hence my question as to whether or not a fee has been paid by the authors to the magazine. Surely if payments are made in order to facilitate publication its scientific integrity is compromised?

        Perhaps you can confirm that no payments were made by the authors?

        tonyb

      • atomsk

        I think your response at 7.33 has actually covered my comments. A fee is required, albeit modest. The system militates against the article being widely available to those who might have useful knowledge of the subject but are not practising scientists with ready access to the publication through a subscription by the organisation they work for.

        In that respect few authors are likely to agree to pay the $2500 for public access. Consequently a blog article does bring important arguments into the wider public domain and facilitates rapid input by those who might have something worthwhile to say.

        I do reaffirm that, where appropriate, blog articles should go forward for peer review and publication

        tonyb

      • Their website indicates they charge all authors a fee. It’s an open-access journal. Unless there actually are pennies from heaven, they have to recoup the revenue lost from losing the fees for access to articles.

      • JCH

        I understand your comment, but surely a fee is a fee is a fee? If someone pays a fee so their article can appear in a publication, does that ever impact the journal’s agreement to print that article, especially as some journals are less worthy than others?

        It is not as transparent a process as a good blog article, which can be rapidly criticised online, though a blog article is in turn no substitute for proceeding to an actual peer-reviewed publication.

        tonyb

      • I think they say the fee is not due until after peer review.

        So send in one of your blog articles and photocopies of your accounts. It’s AGU, so I doubt they will print it.

      • jch

        I shall send them one of my blog articles plus a copy of YOUR accounts in the hope that will impress them more

        tonyb

      • It is hard to imagine a more unenlightening comment about peer review than Atomsk’s comment here. Of course many journals do require publication charges to be paid, usually by the author’s institution or from grant funds.

        Peer review means little, as anyone who has published a lot knows full well. Reviews are usually superficial, and there is no time or resources to check much of anything. You are lucky if the reviewer actually read your paper; often they don’t. Reviewers are uncompensated, and reviews have little benefit for the reviewer’s career. Further, negative reviews are often ignored. The whole system depends on a steady stream of papers, which are often very incremental, contain nothing new, or are wrong.

        Selection bias infects perhaps 75% of the papers in my field, for example. This is also true in the literature on climate models, which is biased. The problem here is that you run and adjust your model until you get a convincing result. You then publish that result, ignoring all the previous less convincing results. This gives a vastly too positive impression of model skill. You can fool yourself, of course, into believing that all those previous results were due to “obviously poor choices.” Classic rationalization.

        There are huge problems with the scientific literature generally. Above I gave a link to a paper documenting how pervasive positive-results bias is. There are 40 more that I have compiled. Here is just one of them:
        https://royalsocietypublishing.org/doi/full/10.1098/rsos.160384

        One example that is truly scandalous is the sad story of the medical consensus that dietary fat was causing heart disease. This idea was the basis of government policy for at least 50 years and harmed the health of tens of millions. In fact, a balanced diet is the healthiest. Reduced-fat products generated by the food industry based on government policy usually contain added sugars and carbohydrates to make the food palatable. These added carbohydrates often lead to obesity, metabolic syndrome, and early death.

        One would be well advised to do one’s own research on medical issues. Often effect sizes are small and statistical significance low. Even widely accepted therapies such as statin therapy are not hugely effective and have side effects that are often downplayed in the literature. Statins significantly increase HbA1c and diagnoses of diabetes.

        Atomsk seems to be a classic denier of the replication crisis and an acolyte of science and the scientific literature. Since he is anonymous, we have no way to judge his contributions or his proof-text citations. Such citations can be very misleading when the person presenting them is biased. Atomsk is biased in favor of the scientific establishment and the scientific literature (or at least those parts of it whose conclusions he likes). That’s why his proof-text approach seems a lot like medieval theological argumentation.

      • dpy

        “Reviews are usually superficial and there is no time or resources to check much of anything”

        I’ve often wondered about that. There are many papers I’ve read with hundreds of assumptions, estimates, calculations and so-called “novel approaches”, and it seemed that in order to make an adequate review someone would have had to invest a prohibitive number of hours themselves.

        Thanks for confirming my suspicions.

      • People who want reviewers to be scientific cops, or scientific auditors, are asking for a scientific dark age. And that is what they want. A world where science cannot endanger their religious/political beliefs.

        But reviewers should read the paper. The reason John Ioannidis is generally complimentary of climate science is most climate science reviewers read the paper.

      • JCH, the real problem in this thread is the scientism of people like Sanakan, who worship at the temple of peer review and the shibboleths of “modern science.” Science has been pressed into the cultural and political wars as a weapon by the political left, despite the fact that the track record of “science” in public policy is not too sterling. In fact the track record is bad, because popularizations of science often tend to be garbage, such as social Darwinism.

        Science is in trouble and needs to take measures to improve. The paper on preregistration of trials is a good start. Modeling fields are particularly in need of reforms. Journals should start requiring sensitivity studies of all the parameters of the models. That would keep many climate modeling papers from being published just by itself. Science should go back to basics and simpler problems where it is actually feasible to do valid science.

      • Re: “3) I showed S4, you’re referring to S3, which shows CO2 followed the H88A&B projections closely, N2O was close to B but below it, and CH4/CFC11/CFC12 tracked close to Scenario C. It would not be obvious from that chart why one would expect observed temperatures to track closest to C.”

        In my previous reply I explained how one can use radiative forcings to show that the observed warming trend is where one would expect it to be based on H88, since observed radiative forcing was between the projected forcings for scenarios B and C. I also wanted to point out that 2019 relative temperature ended up closer to scenario B than scenario C, and that the observed warming trend was about half-way between scenario B and scenario C, based on analyses with more global coverage:


        “Original discussion (2007), Last discussion (2018). Scenarios from Hansen et al. (1988). Observations are the GISTEMP LOTI annual figures. Trends from 1984: GISTEMP: 0.21°C/dec, Scenarios A, B, C: 0.33, 0.28, 0.15°C respectively (all 95% CI ~+/-0.03). Last updated: 26 Jan 2020.”
        https://web.archive.org/save/http://www.realclimate.org/index.php/climate-model-projections-compared-to-observations/

        Also see:
        “Climate models vs. real world”

        Click to access 20200203_ModelsVsWorld.pdf

        The trends begin in 1984 since, although the H88 paper was published in 1988, its model-based projection wasn’t initiated in 1988 but earlier. Other 1984 – 2019 warming trends can be checked at sources such as:

        http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
        https://www.esrl.noaa.gov/psd/cgi-bin/data/testdap/timeseries.pl
        https://climexp.knmi.nl/selectindex.cgi?id=someone@somewhere
        http://climexp.knmi.nl/select.cgi?id=someone@somewhere&field=cmst

        Examples of some trends in °C/decade:

        Cowtan+Way with HadSST4 : 0.21
        NOAA [1] : 0.19
        HadCRUT4 [2] : 0.19
        CMA (until end of 2018) : 0.18
        JMA [3] : 0.15
        ERA5 : 0.21
        JRA-55 : 0.19
        NCEP-2 : 0.22
        MERRA-2 [4] : 0.16

        Notes:
        1) NOAA : recently improved coverage; DOIs: 10.1175/JCLI-D-19-0395.1 , 10.5194/essd-11-1629-2019
        2) HadCRUT4 : poorer coverage; DOIs: 10.1002/qj.2949 , 10.1175/JCLI-D-14-00250.1 , 10.1002/qj.2297
        3) JMA : poorer coverage; DOIs: 10.1002/jgrd.50239 , 10.1029/2011JD017187 , 10.1175/JCLI-D-16-0628.1
        4) MERRA-2 : under-estimated Arctic warming: 10.1002/qj.2949 , 10.1007/s00704-019-02952-3 , 10.1175/JCLI-D-16-0758.1
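
        None of the trend figures listed above require anything beyond an ordinary least-squares fit of annual anomalies against year, scaled to °C/decade. A minimal sketch of that computation, using a synthetic series rather than any of the named datasets:

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS slope of annual anomalies vs. year, scaled to degrees C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return 10.0 * slope

# Synthetic 1984-2019 series warming at exactly 0.02 C/yr,
# which the fit recovers as 0.20 C/decade.
years = np.arange(1984, 2020)
anoms = 0.02 * (years - 1984)
print(round(decadal_trend(years, anoms), 2))  # 0.2
```

        The differences among the datasets in the list come from coverage and input data, not from the trend estimator itself.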

      • And in case people missed the obvious from my previous reply:

        GISTEMP, Berkeley Earth, and Cowtan+Way each have the same 1984 – 2019 warming trend of 0.21 °C/decade. In the graph I posted, GISTEMP’s end-value looks higher than the other two analyses due to the time-period used for the baseline. Hence GISTEMP’s relative temperature is slightly higher than the other two analyses (and scenario B) for the year 1984, just as GISTEMP is slightly higher for the year 2019. Removing that offset would have all three analyses end at about the same point in 2019, closer to scenario B’s 2019 value than to scenario C’s, and with a warming trend about halfway between scenarios B and C. James Hansen’s article (which Roger Pielke Sr. called “[i]nformative”, and which Judith Curry re-tweeted) that I linked in my previous reply has a pretty good depiction of that for GISTEMP:

        http://archive.is/OXTTQ/899d3ff38d62313f6d9fff483f5fd9f704679c39.png
        “Climate models vs. real world”

        Click to access 20200203_ModelsVsWorld.pdf

        http://archive.is/is9hC

        I also didn’t bother including CFSR since its long-term post-2010 results are known to be wrong, due to a shift in the underlying model used for data assimilation in this re-analysis. Even contrarians like Ryan Maue note that:

        Click to access tin10-55cfs_aad.pdf


        page 204: https://books.google.com/books?id=6r1TDwAAQBAJ&pg=PA204&lpg=PA204&dq=CFSR+data+discontinuity&source=bl&ots=tK6wralVAU&sig=ACfU3U0qfS0la6ZYh6I9E_KY70IkYOkvvg&hl=en&ppis=_c&sa=X&ved=2ahUKEwin_JWjjNzlAhUvjK0KHdyVBPMQ6AEwB3oECAkQAQ#v=onepage&q=%22CFSR%20has%20a%20discontinuity%20in%202010%22&f=false
        http://archive.is/m5ZyH#selection-558.0-727.4

    • Above I said this:
      “2) McKitrick side-steps the fact that ZH19 already addressed the issue of some of the earlier models that had lower ECS values. For example, ZH19 note that these models lacked non-CO2 forcings and had the atmosphere too rapidly equilibrate to external forcing:”

      I recently found out that the lead author of ZH19 made much the same point in response to McKitrick’s blogpost, even before me:

      So for those who are still buying into McKitrick’s blogpost: actually read ZH19, the paper McKitrick is discussing. This is looking more and more like the previous time McKitrick made false claims about Hansen et al. 1988, and the only people who caught him were those who bothered to read Hansen et al.’s paper:

      https://judithcurry.com/2018/07/03/the-hansen-forecasts-30-years-later/#comment-876139

  17. So Zeke H et al published a peer reviewed paper in a major journal, GRL.

    McKitrick published a…non-peer reviewed comment in…a blog.

    Sorry, but these are in no way equivalent.

    Ross, publish your comments in a decent place, and get back to us.

  18. Judith, why do I have to log out every time I come here, and then log in under a different identity before I can post a comment?

    • Curious George

      To refresh your memory:
      “Sent this to the chair of Judith’s department, cc’ing the two associate chairs. Will send to her Dean, and University president, if necessary.

      Dr. Huey,
      As Chair of Georgia Tech’s Dept of Earth and Atmospheric Sciences, are you aware that one of your faculty members, Judith Curry, has made an amusement of sexual assault?

      Atop her blog of 10/14/15, she included this quotation:

      “Some new batch of hacked emails is saying Trump groped Bob Dylan according to Putin.” – David Phinney
      …..
      David Appell”

      https://judithcurry.com/2016/10/14/week-in-review-politics-edition-13/#comment-817514

      I wonder why you are even allowed to log in.

      • Isn’t that funny?
        Not to defend Appell, although there’s nothing wrong with that…
        But still… How much of a sense of humor do we need?

    • David Appell: “…. why do I have to log out every time I come here, and then log in under a different identity before I can post a comment?”

      I’m using Mozilla Firefox on a PC. I log in using my WordPress access account, not from Twitter or from Facebook.

      I find that once I have logged in, I have to refresh my connection one time using the Firefox refresh button before I am allowed to comment.

      Once that simple step is done, I can post any number of comments before I end my session.

      I don’t use a mobile device, nor do I log in from Twitter or from Facebook, so this advice may not apply for that situation.

      I also save the contents of every comment to a Notepad file just before I post it, just in case the upload process fails for some reason.

  19. McKitrick wrote:
    The reported early ECS values are:
    Manabe and Wetherald (1967) / Manabe (1970) / Mitchell (1970): 2.3K
    Benson (1970) / Sawyer (1972) / Broecker (1975): 2.4K
    Rasool and Schneider (1971): 0.8K
    Nordhaus (1977): 2.0K

    It’s funny that in a non peer-reviewed comment about ECS, McKitrick cites several peer reviewed papers.

    Peer review matters. Non peer reviewed blog comments aren’t even close to real science.

    • I tend to find I’m as good at assessing what I read as many peers are. If, however, you find critical analysis difficult, you should indeed wait for the peer-reviewed paper.

      • “Peer” means expert. Are you an expert?

      • Peer means a member of a group that has accepted a consensus. That group will kick out anyone who disagrees with the consensus. When you read a peer-reviewed paper, consider the agreed-upon consensus of that group.

        Peer reviewed science means consensus science.
        Science must always be skeptical, consensus science is not any kind of proper science. Proper science has come out against consensus, time and time again. Consensus is used to destroy any who disagree. New correct different science always must fight against established consensus.

        Climate changes in natural cycles and one more molecule in ten thousand is not the control knob of modern warming.

        We are warming into a natural, necessary, and unstoppable warm period that is not yet as warm as most of the warm times in the past ten thousand years. Whatever caused warming in past natural cycles is still working. We cannot stop natural climate cycles by destroying modern civilization.

      • David Appell, as I said, if you can’t keep up, wait and get some else to do the heavy lifting for you.

      • Well, HAS, we’re all expert in what we’ve experienced. The first derivative of that is knowledge. The second derivative is wisdom. And then there’s reading and evaluating the experiences of others.

    • David, learn from the history: https://judithcurry.com/2018/11/06/a-major-problem-with-the-resplandy-et-al-ocean-heat-uptake-paper/ .
      Resplandy et al. (2018) was published in a leading science journal and was in fact NOT sufficiently reviewed. The mistakes were clear. And after the findings were published on a blog (this one), the paper was improved (the uncertainty and the mean were corrected). This is the way science works.
      To come back to the core message: in fig. 2 (bottom) of Hausfather et al. (2019) they calculated the TCR of every early estimate. From another peer-reviewed paper we know the observed TCR when a sufficient time window is taken into account.

      I’m not quite clear why the reviewers of H19 didn’t demand a discussion of the sources of the deviations between the findings of the draft and the state of knowledge. IMO that’s all there is to say about H19.

    • The AGU has a formal process for technical comments. But informal discussion from someone as experienced as Ross sometimes helps to clarify and communicate. Unless David disagrees of course. Then it’s not science.

    • David Appell, do you have any idea how ridiculous this sounds? It’s being peer reviewed right here, right now. Zeke H will respond and so will others, and it will be worked out. If Ross is right, eventually Zeke will issue a correction a couple of years down the line – but by then you will have moved on to running interference for someone else. If Zeke is right he’ll post a response and we’ll know that too.
      In the meantime, if you have nothing to contribute to the actual discussion, leave it to those who do. This is science, not sports, and cheerleaders are not needed.

  20. “To make trend comparisons meaningful, for the purpose of the Hansen (1988-2017) and FAR (1990-2017) interval comparisons, the 1992 (Mount Pinatubo) event needs to be removed since it depressed observed temperatures but is not simulated in climate models on a forecast basis.”
    In fact, Hansen’s scenarios B and C did include a major eruption in 1995, very closely matching the effect of Pinatubo. Its effect is evident in his results.

    “Likewise with El Nino events.”
    And La Nina too?

    • Nick
      Regarding the effect of ENSO on climate, it matters what kind of El Niño it is. They are not all equal. Generally there are two types: the classic Bjerknes type and the Modoki type. The classic type, like 1997–1999, engages the Bjerknes feedback; trade winds and Peruvian upwelling are briefly fully interrupted (trades can even reverse), and there is a warm tongue off the Peruvian coast and the eastern equatorial Pacific. This is followed by a strong La Niña, where upwelling becomes strong and a large pulse of warmed water is “pumped” toward the pole. Such classical El Niño–La Niña cycles have a big warming effect on (eventually) global climate.

      By contrast, the long-term effect on climate of Modoki (“different yet the same”) El Niño events is weak or nonexistent. The Bjerknes feedback is never engaged. Neither trade winds nor Peruvian upwelling are fully interrupted. The equatorial Pacific surface warming is in the mid-Pacific, not the eastern Pacific. The 2016 “El Niño” was of this type. No step up of warming followed. If anything, a step down seems more likely.

    • Ross McKitrick

      If I filter Hansen88B&C in the same way as I do Cowtan&Way, by regressing it on IPCC volcanic forcing + ENSO and taking the residuals, I get the same qualitative trend comparisons, except this time Hansen88-C is significantly below the unadjusted observational series (though not obs2). The H88-B trend drops to 0.245 and H88-C drops to 0.101. The VF score comparing H88-B to obs2 is 45.622. Smaller but still significant.
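
        The filtering described here (regress each series on volcanic forcing and an ENSO index, then keep the residuals) is ordinary least-squares partialling-out. A minimal sketch of the mechanics, using made-up series rather than the actual IPCC forcing or ENSO data:

```python
import numpy as np

def remove_influences(y, volcanic, enso):
    """Regress y on volcanic forcing and an ENSO index (plus an intercept)
    and return the residuals: y with those two influences regressed out."""
    X = np.column_stack([np.ones_like(y), volcanic, enso])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(0)
n = 34                             # e.g. annual values 1984-2017
volcanic = rng.normal(size=n)      # stand-in for a volcanic forcing series
enso = rng.normal(size=n)          # stand-in for an ENSO index
y = 0.02 * np.arange(n, dtype=float) + 0.5 * enso - 0.3 * volcanic
resid = remove_influences(y, volcanic, enso)
# resid keeps whatever the regressors don't explain (here, mostly the trend)
# and is orthogonal to both volcanic and enso by construction.
```

        Trend comparisons are then run on the residual series rather than the raw one; this sketch only illustrates the mechanics, not the actual data used in the post.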

    • Note that Hansen 1988 had a very high ECS, something like 4.2 C/doubling, a point he commented on in a 1998 (?) review of the original.

  21. Pingback: Explaining the Discrepancies Between Hausfather et al. (2019) and Lewis&Curry (2018) – All My Daily News

  22. Alarmists whose story has already seated them on a mountain of wealth and power, are asked to check their own story to see if it is true. And what do they find? Any prizes for guessing?

    Meanwhile back in a pointlessly impoverished real world, real data tells a different story:

    https://notrickszone.com/2020/01/14/comprehensive-data-recent-studies-in-top-journals-antarctica-stable-temps-falling-ice-mass-growing/

  23. archibaldtuttle

    Ross:

    “Since the 1990s the IPCC constantly overstated emission paths and, even more so, CO2 concentrations by presenting a range of future scenarios, only the minimum of which was ever realistic.”

    I’m trying to relate this comment to the one model in the graph, attributed to the first decade of the 21st century, which does deviate upward from observations but at a much less exaggerated rate, both in trend and in sum at the end of the series. Does that model perhaps represent a minimum scenario, or a choice amongst RCPs, whereas the two from 1990 were averaged or likely concentrations not separated with the RCP framing?

    Do you find, behind the paywall, any list in the paper itself of which models are treated by which lines? From the appendix or supplementary materials, which anyone can read, I would infer that both the TAR and AR4 are included, but those are both in the 2000s, yet there is only one red line in the graph. Maybe the concentration assumptions of both of those overlap, although the origin for divergence from the baseline should have been later in time for AR4 (2001 vs. 2007, I think, give or take), or maybe it is assumed that the papers and models in the TAR are actually 1990s. But then there are the First and Second Assessments and the Manabe paper listed for the 1990s, with only two orange lines.

    And my recollection is that both of the latter IPCC assessments use the RCP format, so they wouldn’t have identified one concentration regime. Does the paper indicate which RCP was modeled in that red line? It just seems relevant to me because the choices made there may or may not imply a conscious or subconscious attempt to make a graph that makes earlier models look well wide of the mark, even though, by the turn of the century, models were more focused on a likely range of CO2 concentration.

    I am not at all suggesting sloppiness or conspiracy, but perhaps some confusion on the part of the reader. It does seem to me that Hausfather has generally been less apocalyptic and more responsive to criticisms of modeling with a ‘skeptical’ pedigree, rather than sniffily dismissive. But this does seem to be a bit of a case of burying the lede. Maybe he’s watching and will respond, as he could answer these questions much more easily than you could.

  24. Pingback: Explaining the Discrepancies Between Hausfather et al. (2019) and Lewis&Curry (2018) – Daily News

  25. “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161


    “Example of the partitioning of uncertainty in projections of southeast England rainfall for the (a) 2020s and (b) 2080s from UKCP09.”


    “Schematic of ensemble prediction system on seasonal to decadal time scales based on figure 1, showing (a) the impact of model biases and (b) a changing climate. The uncertainty in the model forecasts arises from both initial condition uncertainty and model uncertainty.”

    Feasible solution trajectories in comprehensive models diverge until saturating at an ‘irreducible imprecision’ intrinsic to the structure of the model. And as models miss internal variability the planet continues on its shifting and unpredictable spatio-temporal chaotic path between ‘fractionally dimensioned state space’.

    The known unknown of the planetary energy dynamic is the state of the Pacific – which varies on all scales – fractal according to Tim Palmer in the lecture video I linked above – as upwelling waxes and wanes.


    https://www.mdpi.com/2225-1154/6/3/62/htm

    “As our nonlinear world moves into uncharted territory, we should expect surprises. Some of these may take the form of natural hazards, the scale and nature of which are beyond our present comprehension. The sooner we depart from the present strategy, which overstates an ability to both extract useful information from and incrementally improve a class of models that are structurally ill suited to the challenge, the sooner we will be on the way to anticipating surprises, quantifying risks, and addressing the very real challenge that climate change poses for science. Unless we step up our game, something that begins with critical self-reflection, climate science risks failing to communicate and hence realize its relevance for societies grappling to respond to global warming.” https://www.pnas.org/content/116/49/24390

    There are technical and social reasons why comprehensive modelling evolved as it did. But the ability of these models to provide useful information on the future evolution of climate has been wildly and irresponsibly overstated.

    • Robert I Ellison: These are technical and social reasons why comprehensive modelling evolved as it did. But the ability of these models to provide useful information on the future evolution of climate has been wildly and irresponsibly overstated.

      Good to remember.

  26. Alberto Zaragoza Comendador

    Atomsk commented:
    “IPCC 1990 FAR included at least 4 emissions scenarios, not just 1. McKitrick cherry-picks the highest emissions scenario, while ignoring the other 3 lower emissions scenarios.”
    Er, it’s Hausfather who uses only Scenario A from FAR. Should McKitrick dispute something Hausfather didn’t say?

    “Yes, it’s quite easy to check FAR and see those other scenarios. It then becomes clear that we largely followed the GHG-induced radiative forcing from FAR’s scenario B, and ended up with scenario B’s warming trend. That makes sense since, for instance, the IPCC explicitly notes that the scenario A [BaU or business-as-usual] scenario McKitrick selects does not include the full projected impact of GHG-limiting policies like the Montreal Protocol. McKitrick makes no mention of that, of course.”

    GHG-limiting policies have had no discernible effect on emissions, other than for CFCs. We *are* living in a business-as-usual world, except for CFCs.

    Anyway, let’s check the possible forcing effect of CFCs:
    https://www.esrl.noaa.gov/gmd/aggi/aggi.html
    The combined forcing of CFC11 and CFC12 increased from 0.132w/m2 in 1979 (first year for which NOAA has data) to 0.212w/m2 in 1989 (when forcing growth started to slow down). That’s 0.008w/m2 per year; over the 1990-2017 period that would be an additional 0.216w/m2. Coincidentally, 2017 forcing for CFC11 + CFC12 was 0.219 w/m2 – essentially unchanged from the 1989 value.
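
    That back-of-envelope extrapolation is easy to check directly; a quick sketch using the NOAA figures quoted above (all in W/m2):

```python
# NOAA AGGI combined CFC-11 + CFC-12 forcing, W/m^2, as quoted above.
f_1979, f_1989, f_2017 = 0.132, 0.212, 0.219

rate_1980s = (f_1989 - f_1979) / 10     # growth per year over the 1980s
counterfactual = rate_1980s * 27        # 1990-2017 growth at the 1980s rate
actual = f_2017 - f_1989                # what actually happened: near zero
print(round(counterfactual, 3), round(actual, 3))  # 0.216 0.007
```

    So continuing the 1980s growth rate gives the 0.216 W/m2 figure, while the observed 2017 value is essentially unchanged from 1989.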

    I have to mention that, per LC18, in the 1980s stratospheric ozone depletion was causing a negative forcing of 0.02w/m2/year. This would mean the net warming effect of CFCs would be about one-fourth smaller than the paragraph above estimated.

    So in the absence of mitigation CFC forcing might have added 0.216w/m2; essentially, CFC forcing might have doubled instead of remaining at the same level as before. It may have even more than doubled, if forcing growth had kept accelerating. But that is *not* what FAR’s Scenario A predicted; here’s the description from FAR itself:
    “For CFCs the Montreal Protocol is implemented albeit with only partial participation.”
    What this meant in practice can be seen in Figure 5 of FAR’s Summary for Policymakers, which depicts forecasted concentrations of CFC-11. Rather than accelerating growth, as had been seen in the 1980s, FAR expected concentration growth to slow down sharply. I haven’t digitized the chart, but it seems to show CFC-11 concentrations of 270 pptv in 1990 and 430 pptv in 2017. That’s growth of 56% – far from a doubling.

    Now, this is admittedly a crude calculation; FAR’s Figure 5 does not show forecasted CFC-12 concentrations, and I’m not sure if CFC forcing increases proportional to concentrations or in some other manner. But the point is that the mismatch between FAR’s expected CFC forcing and the reality was small. If CFC forcing between 1990 and 2017 had grown by 56%, instead of remaining stable as it did, then the additional forcing would have been about 0.12w/m2. From that we’d have to subtract the negative forcing from stratospheric ozone depletion.

    And by how much did FAR’s Scenario A over-estimate future forcing? A lot. Going by Figure 1 in Hausfather’s paper, FAR expected forcing growth of 0.61w/m2/decade, versus the 0.39w/m2/decade that happened in reality (at another point in the paper they mention that FAR over-forecasted forcing by 55%). Over 27 years that’s a difference of 0.57w/m2.

    So McKitrick didn’t “cherry-pick” a scenario. He chose the scenario that Hausfather used, and Hausfather (I presume) used it because the IPCC repeatedly refers to this scenario as business-as-usual; the scenario over-forecasted actual forcings dramatically, and only about 1/6 of the over-forecast can be blamed on CFC mitigation. It’s unclear how much of the rest of the mismatch corresponds to CO2 versus other GHGs, but there was definitely a CO2 mismatch too. Figure 5 of FAR’s SPM seems to show concentrations of 420–425 ppm by 2017, about 15 ppm higher than in reality. Concentration was already at 355 ppm by 1990, so the increase in concentration was about 30% higher in FAR’s Scenario A than in reality (about 65 ppm vs. 50 ppm).

    You also say:
    “Moreover, the collapse of the Soviet Union not only mitigated methane emissions, but changed land use practices in a way that increased land uptake of CO2, further mitigating net anthropogenic CO2 increases.”

    The fall of the Soviet Union is almost imperceptible in historical statistics of fossil fuel combustion in absolute terms; global emissions essentially stayed the same in 1991 as in 1990. In statistical terms, it’s noise; of course historical emissions would have been higher if the USSR had survived, but this is just one of many factors affecting emissions. China’s coal boom was likewise unpredictable; nobody in 1999 was forecasting Chinese emissions would increase as quickly as they did over the following decade.

    So it would be shocking if one thing we can measure accurately (CO2 release from combustion) shows that the fall of the USSR was noise in global terms, while a thing we cannot measure well (CO2 release from land use) does show a significant effect. For what it’s worth, the IPCC does not think anything special happened to global land-use CO2 emissions after 1990; there may have been a decline in such emissions in former Soviet countries, but if it happened it was just noise at the global level.

    • Thanks for the response.

      Re: “GHG-limiting policies have had no discernible effect on emissions, other than for CFCs. We *are* living in a business-as-usual world, except for CFCs.”

      No, we’re not, for the reasons already explained (ex: fall of the Soviet Union affecting CH4 emissions and CO2 uptake).

      Re: “Now, this is admittedly a crude calculation; FAR’s Figure 5 does not show forecasted CFC-12 concentrations”

      Why are you referencing figure 5, which is on page xix? It only includes 3 greenhouse gases: CO2, CH4, and CFC-11. FAR’s projections are for 6 greenhouse gases, not just 3. And I told you the relevant pages for finding those:

      “[see pages 333 – 337 of FAR: https://www.ipcc.ch/site/assets/uploads/2018/03/ipcc_far_wg_I_full_report.pdf ]”
      https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907893

      And as per what I told you, figure A.3 on page 333 includes projections for all 6 GHGs for the four scenarios. They are: CO2, CH4, N2O, CFC-11, CFC-12, and HCFC-22.

      Re: “So McKitrick didn’t try to “cherrypick” a scenario. He chose the scenario that Hausfather used, and Hausfather (I presume) used it because the IPCC repeatedly refers to this scenario as Business-as-usual; the scenario over-forecasted actual forcings dramatically, and only about 1/6 of the over-forecast can be blamed on CFC mitigation.”

      It’s cherry-picking. Hausfather et al. actually bothered to competently account for the difference in IPCC BaU’s (scenario A’s) projected forcing vs. observed forcing. McKitrick didn’t, instead obscuring the over-estimated forcing while he claimed IPCC 1990 FAR over-estimated warming (without admitting this over-estimated warming was due to over-estimated forcing due to over-estimated GHG increases, not over-estimating shorter-term climate sensitivity).

      Moreover, as I explained in my initial comment, McKitrick differed from Hausfather et al. in another way: McKitrick, but not Hausfather et al., claimed that the paper shows the IPCC relied on scenarios that over-estimated GHGs (“the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions”). But that conclusion of McKitrick’s fails, since he conveniently ignored the other 3 lower emissions scenarios the IPCC included. So no, McKitrick did not do the same thing as Hausfather et al. did; he cherry-picked relative to the conclusion he made, and which Hausfather et al. didn’t. Referencing what Hausfather et al. did is thus no defense of what McKitrick did.

      Re: “And by how much did FAR’s Scenario A over-estimate future forcing?”

      Again, you seem not to have read the pages of FAR I pointed you towards. Look at figure A.6 on page 335 of FAR; it shows the projected GHG-induced forcing for the four scenarios. Based on the AGGI link you posted from me, observed post-1990 GHG-induced net forcing increase followed scenario B (you could make a case for scenarios C and D as well, since they follow scenario B so closely up to 2020). And anyone can check observational analyses (ex: GISTEMP, ERA5, Cowtan+Way, etc) to see that the observed warming trend basically followed scenario B in figure A.9 on page 336, as I said in my original comment:

      http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
      https://www.esrl.noaa.gov/psd/cgi-bin/data/testdap/timeseries.pl
      https://climexp.knmi.nl/selectindex.cgi?id=someone@somewhere

      That shows the IPCC 1990 FAR accurately represented warming per unit of radiative forcing; i.e. it accurately represented shorter-term climate sensitivity, just as Hausfather et al. said. And that extends to scenario A (BaU) as well, since it uses the same shorter-term climate sensitivity as scenario B. So using scenario B provides a further line of confirmation for Hausfather et al.’s conclusion.

      Re: “The fall of the Soviet Union is almost imperceptible in historical statistics of fossil fuel combustion in absolute terms; global emissions essentially stayed the same in 1991 as in 1990. […] For what it’s worth, the IPCC does not think anything special happened to global land-use CO2 emissions after 1990; there may have been a decline in such emissions in former Soviet countries, but if it happened it was just noise at the global level.”

      The fall of the Soviet Union is not necessarily going to all manifest as an effect in one year. And as much as you or others might respect your personal opinion on what the effect was, I prefer peer-reviewed published sources. In my initial comment, I already cited some sources on how the fall of the Soviet Union affected land use practices, with respect to uptake of CO2. Here are some on how the fall impacted CH4 emissions; I’ll even cite the IPCC for you, since you mentioned them without actually citing them saying what you claim:

      10.1098/rsta.2010.0341 ; 10.1088/1748-9326/ab1cf1 ; 10.1029/2003GL018126 ; 10.1029/2008JD011239

      “The growth rate of CH4 has declined since the mid-1980s, and a near zero growth rate (quasi-stable concentrations) was observed during 1999–2006, suggesting an approach to steady state where the sum of emissions are in balance with the sum of sinks […]. The reasons for this growth rate decline after the mid-1980s are still debated, and results from various studies provide possible scenarios: (1) a reduction of anthropogenic emitting activities such as coal mining, gas industry and/or animal husbandry, especially in the countries of the former Soviet Union […], (2) […] [page 506]”

      Click to access WG1AR5_Chapter06_FINAL.pdf

      • Alberto Zaragoza Comendador

        1) The IPCC does show what I claim they show about CO2 emissions from land use. I didn’t discuss methane in that regard. It says such emissions didn’t decline appreciably after the collapse of the Soviet Union; in fact it says they increased. I personally believe estimates of such emissions are almost worthless, but hey, I didn’t make any claim regarding their increase or decrease.

        2) You make this claim:
        “Hausfather et al. actually bothered to competently account for the difference in IPCC BaU’s (scenario A’s) projected forcing vs. observed forcing. McKitrick didn’t, instead obscuring the over-estimated forcing while he claimed IPCC 1990 FAR over-estimated warming (without admitting this over-estimated warming was due to over-estimated forcing due to over-estimated GHG increases, not over-estimating shorter-term climate sensitivity).”
        Well, McKitrick’s chart is admittedly blurry, but it seems to show an implied TCR of about 1.5ºC for FAR; which is to say around the same as Hausfather!

        So McKitrick did account for the difference in historical forcings versus the forcings that were forecasted by FAR; that’s the whole point of calculating an “implied TCR”. If he hadn’t accounted for this, FAR’s “implied TCR” would be well above 2ºC.

        3) You then say:
        “Moreover, as I explained in my initial comment, McKitrick differed from Hausfather et al. in another way: McKitrick, but not Hausfather et al., claimed that the paper shows the IPCC relied on scenarios that over-estimated GHGs (“the IPCC has for a long time been relying on exaggerated forecasts of global greenhouse gas emissions”). But that conclusion of McKitrick’s fails, since he conveniently ignored the other 3 lower emissions scenarios the IPCC included.”

        McKitrick’s article focuses on CO2 concentrations and shows that in the real world they have been consistently lower than forecasted by the IPCC’s business-as-usual scenarios. This isn’t specific to FAR, it’s been an issue pretty much with all reports.
        The other 3 scenarios are ignored for a good reason: they assume effective mitigation, and in the real world CO2 mitigation policies have had negligible impact. So the over-forecasting of CO2 concentrations by the IPCC cannot be due to mitigation policies.

        4) I already read the table with the six GHGs, as explained in my comment below. The table changes almost nothing; mitigation of CFCs accounted for about 0.1w/m2 of the forecast error in FAR, and mitigation of other HCFCs (minus the ‘unmitigation’ caused by the spread of HFCs) accounted for perhaps another 0.05w/m2. The bulk of the forecast error remains (0.4 or 0.45w/m2).

        One last thing. The IPCC may have gotten unlucky with regards to methane, because looking at the rate of concentration increase in the 1980s most people would have expected said increase to continue or accelerate (as happened with CO2). As of the 2013 report, the IPCC still had pretty much no idea why concentrations were growing so slowly; in fact the very next sentence to the one you quote says that the relatively stable methane concentrations may be due to “a compensation between *increasing* anthropogenic emissions and decreasing wetland emissions”. The section mentions almost every imaginable explanation.

        So it’s not clear why FAR’s methane forecast turned out to be wrong. But it was wrong, and one can discard the idea that intentional mitigation played a significant role. Climate policies didn’t affect the emissions we can measure precisely (combustion), so they couldn’t have affected emissions that are difficult to measure even now.

      • Again, thanks for your response. This will be my last for a while, since I have other things to attend to.

        Re: “1) The IPCC does show what I claim […]”

        Then cite them saying it. Because when I claim a source says something, I cite the document where they say it.

        Re: “So McKitrick did account for the difference in historical forcings versus the forcings that were forecasted by FAR; that’s the whole point of calculating an “implied TCR”.”

        Notice I said “competently account”. I justifiably don’t rely on McKitrick to do it competently for a number of reasons, including:

        1) he grossly expands the error bars for the observational analyses’ TCR in comparison to FAR’s TCR to avoid confirming FAR’s TCR accuracy, even though Hausfather et al.’s analysis does otherwise and the comparison to FAR scenario B [which I showed before] confirms Hausfather et al.’s result without McKitrick’s exaggerated uncertainty;
        2) McKitrick previously and repeatedly made easily falsified claims about model-based temperature trend projections to obscure their accuracy and their relation to over-estimated greenhouse gas levels; in my first comment, I went over examples of him doing that for Hansen et al. 1988:
        https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907893 ;
        3) this is a non-peer-reviewed blog article on a contrarian website, and so needs multiple competent experts to review it, not just Nic Lewis;
        4) even if one says looking at a non-peer-reviewed blog article is OK, then McKitrick still has a history of debunked analyses made for undermining mainstream conclusions in climate science; one example is the ‘radians vs. degrees’ debacle, among others:
        pages 25 to 30: http://www.scottchurchdirect.com/docs/MSU-Troposphere-Skeptic01.pdf
        10.1002/joc.1831
        https://tamino.wordpress.com/2019/09/29/climate-deniers-long-term-annoyance/
        “Are temperature trends affected by economic activity? Comment on McKitrick & Michaels (2004)”

        So basically, I’m not going to rely on McKitrick’s blogpost to do this correctly.

        Re: “McKitrick’s article focuses on CO2 concentrations and shows that in the real world they have been consistently lower than forecasted by the IPCC’s business-as-usual scenarios. This isn’t specific to FAR, it’s been an issue pretty much with all reports.
        The other 3 scenarios are ignored for a good reason: they assume effective mitigation, and in the real world CO2 mitigation policies have had negligible impact. So the over-forecasting of CO2 concentrations by the IPCC cannot be due to mitigation policies.”

        But you’re engaging in the same flawed reasoning as McKitrick. For example, you’re simply assuming, without providing any evidence, that real-world mitigation policies had a negligible effect. Sorry, but I’m going to need cited evidence for that claim. Because when I claimed the Montreal Protocol had an effect, I cited published evidence showing that. So I expect you to do the same for your claims on mitigation policies having a “negligible impact”.

        Moreover, “business-as-usual” is not just with respect to mitigation; it can include non-policy effects as well. For instance, I already gave an example of the collapse of the Soviet Union, which is not a mitigation policy, but which would still represent a deviation from a business-as-usual scenario made in the 1980s. So the claim below from McKitrick is a non sequitur:

        “[…] since there have been few emission reduction policies in place historically (and none currently that bind at the global level), the heavy black line is effectively the Business-as-Usual sequence.”

  27. “I think it is more likely that temperatures are I(0)”

    This claim is completely ridiculous.
    1. It’s completely unphysical! Obviously, it is desirable to adopt econometric methods to make better inferences of climate data, but that doesn’t mean we should completely forgo physical information.
    2. It is in complete contradiction with this paper entitled “A multicointegration model of global climate change”:
    https://www.sciencedirect.com/science/article/pii/S0304407619301137
    Temperatures are I(1) or I(2), not I(0).

  28. Alberto Zaragoza Comendador

    Okay, a tiny correction to the comment of mine above. Table 2.7 of FAR includes projections for HCFC-22; in Scenario A, forcing is 0.04w/m2 from the start of history until year 2000 and then 0.13w/m2 between 2000 and 2025. That’s 0.0052w/m2/year for the second period, which would mean 0.0884w/m2 over a 17-year period.

    Let’s round it down to 0.08w/m2 for forcing between 2000 and 2017, since the forcing was accelerating and was therefore lower over the beginning of the period. To this we would have to add some part of the forcing that happened between 1990 and 2000, let’s say 0.03w/m2, because there was almost no HCFC-22 forcing prior to 2000, see Figure 2 in https://www.esrl.noaa.gov/gmd/aggi/aggi.html. Thus we have 0.08 + 0.03 = an estimated 0.11w/m2 of HCFC-22 forcing between 1990 and 2017; remember that we’re talking about FAR’s Scenario A forecast.

    How does that compare with reality? According to NOAA, minor GHGs (basically HFCs and HCFCs) increased their forcing by 0.059w/m2 between 1990 and 2017, or 0.05w/m2 less than per FAR’s Scenario A. Now, you might argue that NOAA’s forcing figure includes things other than HCFC-22, but that’s the point: FAR’s Scenario A didn’t just overestimate future forcing by HCFCs. It also underestimated forcing of gases that were not in widespread use at the time (i.e. HFCs).

    In short, mitigation of HCFCs might account for another 0.05w/m2 of FAR’s forcing forecast error. That’s 0.15w/m2 for CFCs + HCFCs. I previously said the total error in forecast was 0.57w/m2, but that’s a mistyping; actually the over-forecast seems to be 0.59w/m2 (0.61 – 0.39 w/m2/decade, multiplied by 2.7 decades).

    So even accounting for mitigation of minor gases, and the use of other minor gases to replace the mitigated ones, FAR over-forecasted actual 1990-2017 forcing by 0.4 or 0.45w/m2. Which is to say by about 40%.
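    The back-of-envelope sums in this comment can be reproduced in a few lines. A sketch, with every input being the commenter’s own estimate (forcing trends read off Hausfather’s Figure 1, 2.7 decades, and roughly 0.15w/m2 of CFC + HCFC mitigation), not an official figure:

```python
# Reproduce the commenter's back-of-envelope forcing arithmetic.
# All inputs are estimates quoted in the comment above, not official data.

far_trend = 0.61   # FAR Scenario A forcing growth, W/m2/decade
obs_trend = 0.39   # observed forcing growth, W/m2/decade
decades   = 2.7    # 1990-2017

total_error = (far_trend - obs_trend) * decades  # total over-forecast, W/m2
cfc_hcfc    = 0.10 + 0.05                        # part attributable to CFC + HCFC mitigation

residual = total_error - cfc_hcfc                # error not explained by mitigation
pct_of_observed = residual / (obs_trend * decades)

print(round(total_error, 2))      # 0.59 W/m2
print(round(residual, 2))         # 0.44 W/m2
print(round(pct_of_observed, 2))  # 0.42, i.e. "about 40%"
```

    The residual of roughly 0.44w/m2 against an observed increase of about 1.05w/m2 is where the “about 40%” figure comes from.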

    • Alberto Zaragoza Comendador

      “Now, you might argue that NOAA’s forcing figure *includes* things other *than* HCFC-22”

  29. January 18, 2020 JCH wrote:
    The only way to deny the imbalance is to create energy out.
    I said that the Earth has an imbalance in energy radiated out relative to radiant heat retained from the sun.
    The radiant heat out is relatively constant. The radiant heat in striking the earth is relatively constant. The difference is in the radiant heat reflected to the black sky. Right now, we are 18,000 years into the new Ice Age. We are now reflecting more heat to the black sky than we retain.
    Mother Nature has been moving water vapor to the poles and dropping it at the poles. It is doing this to maintain a relatively constant surface temperature.

  30. With root zone water storage – the change is the gains and losses above and deep percolation loss below. The root zone should be full of fungi, worms, nematodes, etc. That facilitates breakdown of substrate to release nutrients – and intercepts, infiltrates and stores rainfall. Good for food security, flood and drought resilient agriculture, dry weather flow in streams, aquifer recharge and reducing sea level rise – as much as that is. And good to sequester 80 GtC (Rattan Lal). Even more with reclaiming desert and restoring rangeland and woodland.

    dS/dt = inflows – outflows

    where S is stock in storage.

    “What are the main characteristics and implications of the Hurst-Kolmogorov stochastic dynamics (also known as the Hurst phenomenon, Joseph effect, scaling behaviour, long-term persistence, multi-scale fluctuation, long-range dependence, long memory)?” Demetris Koutsoyiannis

    There is a great void between the behavior of geophysical series – and an energy dynamic that varies over days to millennia – and ideas of ERF and sensitivity.


    Still – there’s always Shared Socioeconomic Pathway No. 5 and geoengineering the hell out of the place.

    “SSP5 Fossil-fueled Development – Taking the Highway (High challenges to mitigation, low challenges to adaptation) This world places increasing faith in competitive markets, innovation and participatory societies to produce rapid technological progress and development of human capital as the path to sustainable development. Global markets are increasingly integrated. There are also strong investments in health, education, and institutions to enhance human and social capital. At the same time, the push for economic and social development is coupled with the exploitation of abundant fossil fuel resources and the adoption of resource and energy intensive lifestyles around the world. All these factors lead to rapid growth of the global economy, while global population peaks and declines in the 21st century. Local environmental problems like air pollution are successfully managed. There is faith in the ability to effectively manage social and ecological systems, including by geo-engineering if necessary.” https://www.sciencedirect.com/science/article/pii/S0959378016300681

  31. Ireneusz Palmowski

    During the solar minimum, the rate of mixing layers in the equatorial Pacific is low.
    http://www.bom.gov.au/cgi-bin/oceanography/wrap_ocean_analysis.pl?id=IDYOC007&year=2020&month=01

  32. Ross McKitrick, thank you for this essay, and for replying to criticisms such as those by Atomsk’s Sanakan. I wish you success in getting your manuscript published.

  33. Lindzen & Spencer, accounting for all feedbacks, find Climate Sensitivity less than 1.5, probably about 0.3 and perhaps even negative. http://co2coalition.org/wp-content/uploads/2019/12/Lindzen_On-Climate-Sensitivity.pdf

    Water vapor has been increasing faster than POSSIBLE from water vapor feedback. WV feedback is due to average global temperature increase. This indicates that CS is not significantly different from zero. https://watervaporandwarming.blogspot.com

  34. Pingback: Explaining the Discrepancies Between Hausfather et al. (2019) and Lewis&Curry (2018) – Weather Brat Weather around the world plus

  35. Ross, I commented extensively on the Real Climate post on this Hausfather paper. Basically I think this whole exercise really proves little and is a classical example of selection bias. The output selected is global average temperature anomaly. If the models conserve energy, are tuned to TOA radiation balance, and have realistic ocean heat uptake, they will get the temperature anomaly roughly right. Even the simplest energy balance methods tell us this. So then why do we have GCM’s? To tell us patterns and details. The only problem is GCM’s are well known to be quite poor at things like patterns of SST warming, cloud distributions, convective aggregation, and the list goes on at great length. Given the gross lack of resolution and the magnitude of the tuning problem this is not surprising to anyone with experience in numerical PDE’s.

    Their paper really has little scientific value and instead just opts for a narrative about climate model “accuracy.” This is scientifically of little value of course but does explain why it appeals to the peanut gallery on line. A much more valuable contribution would be the recent paper by Palmer and Stevens calling for a new paradigm in GCM modeling in which they at least admit these well known problems with current models albeit with a few bows to the narrative needs of alarmists. I’m not sure their path forward is the best one but at least a real conversation has been started. BTW, Schmidt so far as I know has not commented in detail about this. He did not respond to my comments, allowing the anonymous mud slingers to do the work of attempting to muddy the waters.

    • dpy6629

      Some here argue that publishing in a peer reviewed publication is much preferable to publishing on a blog and that Ross’s article here should therefore be disregarded.

      Am I wrong in thinking that the publication of Hausfather et al.’s article was carried out following the payment of a fee to the journal, with additional responsibilities towards the journal, inasmuch as authors have to agree in turn to examine and peer review other articles?

      https://www.agu.org/Publish-with-AGU/Publish

      Surely if payments are made in order to facilitate publication its scientific integrity is compromised?

      I have no comment on the technical competence or merits of the original article, just its process, which is not carried out in the blinding public spotlight that blog articles are.

      tonyb

      • Tonyb, surely we should evaluate the article on its data, logic, and conclusions, rather than its provenance. If the citations are proper, and match the statements, we really should not care about its site. Should we?
        Today, we see too much: where did you get that… where did you go to school…what degrees do you have…what have you published… where did you publish…do you satisfy the definition of [specialist]

      • jimmww

        I entirely agree. The quality of the article should matter much more than where it is published or by whom. However, as you can see, some people pounce when they see a certain author’s name or that it has not been published in an approved medium.

        Blog publishing provides a very useful and quick first stage function in putting useful information in front of people who might be unfamiliar with certain aspects of the subject.

        tonyb

      • “Tonyb, surely we should evaluate the article on its data, logic, and conclusions, rather than its provenance.”

        “The quality of the article should matter much more than where it is published or by whom. ”

        Well said. Blogs are able to efficiently cut through the politics and biases of the science; whereas publications are often walled gardens for consensus. But to be sure, yes, blogs too are loaded with politics and biases (except for those that are also tightly censored by the gatekeeper consensus crowd). But a well managed blog cuts relatively efficiently through politics and biases, allowing the cream to rise to the top on its own merits.

        Great thinking requires unconscious competence, a little space to be different, to express creativity and to have the freedom to break rules as the first step to prove worth. Unconscious competence with handrails by default is dumbed down conscious competence—nothing original can come from conscious competence, which is what consensus is always bound by. Those who only speak for consensus are forever bound to a legacy of mediocrity at best.

        There is great value in a well managed blog.

  36. Ross: I like your inclusion of the effects of Pinatubo and El Nino, but that also creates problems. Let’s consider a simple one-compartment model for the planet with an atmosphere and mixed layer 50 m deep, an ECS of 1.8 K per doubling and F_2x of 3.6 W/m2/doubling. The equilibrium response to a +1 W/m2 forcing would be a steady state warming of 0.5 K and an increase in OLR+reflected SWR of 1 W/m2. Feedback would be -2 W/m2/K. Now suppose a sudden +1 W/m2 forcing were imposed on steady state. It is simple to calculate that the temperature would initially begin to rise at a rate of 0.2 K/yr. Approximating the negative exponential approach to steady state, after 1 year and 0.2 K of warming, the radiative imbalance would be down to 0.6 W/m2 and the warming rate would be down to 0.12 K/yr. After 2 years and 0.32 K of warming, the imbalance would be down to 0.36 W/m2 and the warming rate would be reduced to 0.072 K/year. In other words, this model is roughly 40% of the way to steady state in one year, 64% after two years, and 87% after four years.

    If you repeat the same calculation for a forcing change of 0.1 W/m2, steady state is approached at the same rate. If ECS is 3.6 K per doubling, then steady state is approached half as fast. So the temperature response to a forcing change in this simple one-compartment model for likely values of ECS is only 20-40% complete in one year. 60-80% of the imbalance remains and the forcing change the next year is added to this imbalance. Steady-state temperature lags five to ten years behind a forcing change – even when no heat is being transported from the mixed layer to the deeper ocean. Temperature change in any one year is not a function of the forcing change in that year; it is a function of the forcing change over the past five to ten years.

    This lag appears to make it impossible for you to properly correct for the large forcing changes associated with volcanos.
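    The year-by-year figures in the calculation above can be reproduced with annual Euler steps of the one-compartment model. This is a minimal sketch: the effective heat capacity (5 W·yr/m2/K, broadly consistent with a ~50 m mixed layer) is an assumption chosen to match the 0.2 K/yr initial warming rate quoted in the comment.

```python
# One-compartment energy balance model: C * dT/dt = F + lam * T
# Parameters match the comment above: ECS 1.8 K per doubling and
# F_2x = 3.6 W/m2 give feedback lam = -3.6/1.8 = -2 W/m2/K.

lam = -2.0       # feedback parameter, W/m2/K
F = 1.0          # step forcing imposed at t = 0, W/m2
C = 5.0          # effective heat capacity, W*yr/m2/K (~50 m mixed layer)
T_eq = -F / lam  # steady-state warming = 0.5 K

T = 0.0
for year in range(1, 5):
    imbalance = F + lam * T  # remaining TOA imbalance, W/m2
    T += imbalance / C       # Euler step of one year
    print(year, round(T, 3), f"{T / T_eq:.0%} of steady state")
# year 1: T = 0.2 K   (40% of steady state; imbalance now 0.6 W/m2)
# year 2: T = 0.32 K  (64%)
# year 4: T = 0.435 K (87%)
```

    Doubling either C or the magnitude of ECS doubles the e-folding time, which is the sense in which the response is “approached half as fast” for ECS of 3.6 K per doubling.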
    ___________________

    Second point: As best I can tell, it is unreasonable to apply a random walk hypothesis to the noise in temperature data. If random walk noise drifted +/-0.2 K after a 10 year period, that drift would grow to +/-2 K after a 1000 year period and +/-20 K after 100,000 years, and +/-200 K after 10 million years. Impossible. Feedback on our planet is negative; deviations from steady-state produce radiative imbalances that restore steady-state. Your model for noise must be driftless.

  37. Franktoo: “Feedback on our planet is negative; deviations from steady-state produce radiative imbalances that restore steady-state”.

    Agreed!!!
    Earth system has only negative feedbacks.
    http://www.cristos-vournas.com

    • Therefore noise in temperature data shouldn’t be modeled as a random walk: 1) Flip a coin. 2) Take one step warmer if heads and one step cooler if tails. 3) Repeat. Such noise will eventually take temperature infinitely far from the starting temperature. After N steps, the average distance from the starting position is SQRT(N). If you have taken one or more steps in the warmer direction from a steady state, the odds of getting a head need to be less than 50% – so that the drift from noise doesn’t increase indefinitely.
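      A short simulation illustrates the contrast. The RMS distance of the coin-flip walk grows like SQRT(N), while adding mean reversion (an AR(1) stand-in for negative feedback; the 0.9 coefficient is purely illustrative, not an estimate for the climate system) keeps it bounded:

```python
import random

random.seed(0)

def endpoint(n, phi):
    """Walk of n unit coin-flip steps. phi = 1.0 is a pure random walk;
    phi < 1.0 adds mean reversion (a crude stand-in for negative feedback)."""
    x = 0.0
    for _ in range(n):
        x = phi * x + random.choice((-1.0, 1.0))
    return x

def rms(n, phi, trials=1000):
    """Root-mean-square distance from the start after n steps."""
    return (sum(endpoint(n, phi) ** 2 for _ in range(trials)) / trials) ** 0.5

# Pure random walk: RMS distance ~ sqrt(n); quadrupling the steps
# roughly doubles the drift -- unbounded in the long run.
print(rms(100, 1.0), rms(400, 1.0))  # roughly 10 and 20

# Mean-reverting case: RMS stays near sqrt(1/(1 - phi**2)) ~ 2.3
# no matter how many steps are taken.
print(rms(100, 0.9), rms(400, 0.9))
```

      The bounded case is exactly the “odds of a head need to be less than 50% when warm” intuition: a displacement shrinks back toward the starting state instead of accumulating.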

  38. Last year Mother Nature broke off a 600 square mile iceberg from the Antarctic ice shelf. To keep the surface temperature constant she must remove the heat used to melt that from the ocean, plus the heat we are gaining from the sun daily, and send it to the black sky. We are getting that much closer to the time we run out of ice to break off and the oceans begin to fall again.

  39. Ireneusz Palmowski

    At night, severe frost in the eastern US.

  40. Ireneusz Palmowski

    The stratospheric intrusion will reach the southeastern states.

  41. An interesting article in the Quora Digest by Mr. Noel about the Orange Beach theory. It shows we are probably very close to the oceans beginning to retreat because the cold line in the northern hemisphere is moving south.

  42. JCH, that Arctic ice is nearly worth making a number of comments on but I dare not. One spark of interest and it always stops growing

    • I grew up in the Dakotas. I don’t bank on ice, but given the complete failure of multiple La Niña events to get you any traction at all, I am amazed you still have hope. So keep praying. They say it works.

  43. ENSO forecast for 2020?

    • Well the BOM SOI is up to minus 3 and NINO3 is 0.0C for the first time in a long time. A couple of early cyclones and rain. A very large cyclonic system for 2 weeks east of New Zealand put a lot of heat back to space. Equatorial cloudiness suggests El Nino slightly more likely.
      BOM says conditions are neutral, and likely to remain so at least until the end of the southern hemisphere autumn.
      They rarely get anything right, and since the indicators are more El Nino and it always moves opposite to what science expects, I will go opposite with a cooling forecast. What else?
      Remember a guy saying years ago that if the first week and month are hot there is a much, much likelier prospect of a warm year ahead, so that is not good looking at the Moyhu blog.

      • Did this last year as well and then just plain ran out of puff so not making a song and dance about it.
        ” 13,382,493 km2, an increase of 49,860 km2.
        2020 is now 13th lowest on record.”

  44. “What defines a climate change as abrupt? Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.” NAS 2002

    This is the dynamical systems explanation for Hurst-Kolmogorov stochastic climate dynamics. Such behavior manifests at all scales from micro-eddies to atmospheric rivers. It is fractal.

    “Fluctuations at multiple temporal or spatial scales, which may suggest HK stochastic dynamics, are common in Nature, as seen for example in turbulent flows, in large scale meteorological systems, and even in human-related processes. We owe the most characteristic example of a large spatial-scale phenomenon that exhibits HK temporal dynamics to the Nilometer time series, the longest available instrumental record. Fig. 8 shows the record of the Nile minimum water level from the 7th to the 13th century AD (663 observations, published by Beran, 1994 and available online from http://lib.stat.cmu.edu/S/beran, here converted into meters). Comparing this Nilometer time series with synthetically generated white noise, also shown in Fig. 8 (lower panel), we clearly see a big difference on the 30-year scale. The fluctuations in the real-world process are much more intense and frequent than the stable curve of the 30-year average in the white noise process.” https://www.itia.ntua.gr/en/getfile/1001/1/documents/2010JAWRAHurstKolmogorovDynamicsPP.pdf

    The most recent globally coupled Pacific Ocean climate HK regimes – that have a bearing on atmospheric and oceanic temperature, TOA energy dynamics and Nile River rainfall – involve variance around a warmer mean after 1976 and a shift to a more neutral mean after 1998. The more neutral state of the Pacific may be ongoing.

    The latest Pacific Ocean climate shift post 1998 is linked to increased flow in the north (Di Lorenzo et al, 2008) and the south (Roemmich et al, 2007; Qiu, Bo et al 2006) Pacific Ocean gyres. Roemmich et al (2007) suggest that mid-latitude gyres in all of the oceans are influenced by decadal variability in the Southern and Northern Annular Modes (SAM and NAM respectively) as wind driven currents in baroclinic oceans (Sverdrup, 1947). The spinning up of oceanic gyres drives cold water north along the Peruvian and south along the Californian coast. Planetary spin curls the currents westward – that and Bjerknes feedback shoal the thermocline and enhance deep water upwelling nearer the equator.

    In the short term SAM may favor La Niña as the ITCZ moves north and the ENSO transition season approaches. As there is no geopotential energy in the west to drive an El Niño – the alternative is a continuation of the neutral Pacific state we have seen for a couple of years now.

    SAM is modulated by solar activity – although with internal feedbacks that include atmospheric warming. In the long term the Pacific Ocean may transition to cooler states with a decline in solar activity from the modern grand maximum.


    Click to access Abram_et_al_SAM_NCC_2014.pdf

    Or it may not.

    • Low solar = warmer ocean phases, increased El Nino conditions, and a warm AMO.

    • Negative SAM spins up the South Pacific Gyre shoaling the thermocline in the region of the Humboldt Current and resulting in increased upwelling – the origin of La Nina. Solar activity, SAM and ENSO are correlated. Low solar activity is associated with negative SAM and La Nina and vice versa.

    • Noticed this in quote above: ” — Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.”
      Those ‘chaotic processes’ during the Holocene max that changed more than the climate, were big Dragon Ks with big teeth and sharp claws. And very obvious, once we dump the Uniformitarian concept that events take millennia to bring change. ( they still predate bishop Ussher’s 4004 bc creation date by several millennia).

    • Low solar is associated with negative NAM and slower trade winds.

    • Robert I Ellison: “What defines a climate change as abrupt? Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.” NAS 2002

      I am coming late to this instance, but I find it always a pleasure to read and reflect on this definition of “abrupt” climate change. The details! The abrupt change is forced. The rate of change in the climate is faster than the rate of change in the forcing (implying that both rates are known and commensurable). There is some threshold; and the system has been forced beyond the threshold (the threshold can be measured? And the system is measurably “beyond” the threshold?) (or “estimated” for “measured” for those who prefer “estimated”).

      Not a part of the definition, but: The cause of the abrupt climate change may be “undetectably” small! That might make it hard to examine whether the change was forced at all, and whether the rate of change in the climate was faster than the rate of change in the forcing.

      Is there even one measured or estimated purportedly “abrupt” climate change for which there is evidence for the criteria being satisfied? Not yet. This is analogous so far to an empty set in mathematics, where the criteria define a set that, after examination, has no members, such as the set of differentiable paths in Brownian motion; or the more commonly known empty set of “real” square roots of negative numbers.

    • It is the observed behavior of geophysical time series – and I use the definition to characterise observations referred to in the comment. I’d classify your comment as being in the set of empty vessels.

  45. “The sooner we depart from the present strategy, which overstates an ability to both extract useful information from and incrementally improve a class of models that are structurally ill suited to the challenge, the sooner we will be on the way to anticipating surprises, quantifying risks, and addressing the very real challenge that climate change poses for science.” https://www.pnas.org/content/116/49/24390

    The other model paper published last month, in which Tim Palmer and Bjorn Stevens describe activist views on climate congealing around notions not held by the climate science community. Terrifyingly so in some cases.

    Getting the right answer with existing models may be a matter of interpretation within limits of irreducible imprecision – getting it right for the right reasons is a laughable conceit.

  46. What you are saying is that even if models are tuned without what I would call a warming bias, they are still unfit for the role of trying to make helpful statements about the future.
    While I would not go as far as saying all models are wrong but still useful, there is a nub of truth there, in that people are at least trying to develop a way of looking at future trends.
    There is some use in day 3 of a 1-week weather forecast, even if after a week total chaos rules.
    Perhaps it is as simple as putting in a little caution:
    *Predictions of future changes are not guaranteed by past performance.
    Then we can all relax and look at alternate strategies as well.
    *Caution: alternative strategies are highly risky, like options.

  47. Abstract
    Retrospectively comparing future model projections to observations provides a robust if not independent test of model skill. Here we analyze the performance of climate models published between 1970 and 2007 in projecting future global mean surface temperature (GMST) changes. Models are compared to observations based on the change in GMST over time.
    -result
    “miserable fail.
    97% of all models ran too high, some by whole degrees.
    True discrepancies should fall in a range of 47% on both the high and low sides.
    We cannot understand why such an incredible bias has crept in and persisted in almost all models for nearly 50 years”

    We then tried to compare the models to observations based on the change in GMST over the change in external forcing.
    This enabled all the models’ yearly errors in forcing, and the incremental yearly errors in forcing, to be discarded.
    Basically we were able to fit the models to the correct yearly GMST at each level, which alters the forcings to allow them to match the observations.
    This would not have been possible with the yearly forcings models use.
    “The latter approach accounts for mismatches in model forcings, a potential source of error in model projections independent of the accuracy of model physics.”
    We find that climate models published over the past five decades were skillful in predicting subsequent GMST changes, particularly, and only, when the subsequent GMST changes were added to the models.

  48. Ireneusz Palmowski

    Very weak solar wind. Niño 4 index remains high.

  49. Ireneusz Palmowski

    Colder low (upper and lower) will be in two days in the east of the US.

  50. Pingback: Hyper-alarmism: “deep adaptation” and “inevitable collapse” – Evil Questions

  51. I awoke this morning to news that three US firefighters were killed in a ‘water bomber’ crash in NSW. We mourn the loss and will honor their sacrifice.

  52. I tried to warn Scott Morrison about the bushfire disaster. Adapting to climate change isn’t enough –
    Greg Mullins
    https://www.theguardian.com/commentisfree/2020/jan/20/i-tried-to-warn-scott-morrison-about-the-bushfire-disaster-adapting-to-climate-change-isnt-enough

    Greg Mullins is a former commissioner of Fire and Rescue NSW and a climate councillor.

    • https://www.theguardian.com/commentisfree/2020/jan/20/i-tried-to-warn-scott-morrison-about-the-bushfire-disaster-adapting-to-climate-change-isnt-enough
      “Climate change is the driver of increasing extreme weather.”
      He should look at AR5. The driver: As used most often, The cause. The control. The Guardian doesn’t care. Bunch of hacks.

      • JCH:
        I just picked out a quote from the article. Then I judged The Guardian printing that quote. And thought they should be more conservative.
        A councillor might have a role.
        Your quote above. It isn’t enough. Granted. We all know mitigation isn’t going to help in the next 20 years in Australia. Farmers in NoDak will adapt, or be more bankrupt than normal. I’d bet against every one of them who thought some mitigation they do that doesn’t save them money is going to help them personally.
        There’s some stuff that is some of both. They could soil bank. That’s mitigation. And it would probably help them in the long run. But that’s land use too. That’s what Australia’s got. A land use issue. Their politicians hate the land. They don’t protect it. They aren’t farmers. They are fake rednecks.

      • A quick and dirty calc. – based on meeting the Paris 2015 4 per 1000 soil carbon initiative – suggests that we can halve our emissions by sequestering carbon in soils.

        But the day is too sad for climate politics.

      • Ireneusz Palmowski

        “Predictions are for the fires to continue well into 2020 and the forecasters tell us that much needed rain is still months away.”

    • So Greg Mullins wants more water bombers brought in and more hazard reduction burning? OK by me. The Climate Council is – however – a bevy of climate activists with no official standing. Conditions were dry last year – but that’s far from unprecedented even in the short instrumental record. The standout feature is the area of the country with less than 10% of average rainfall. In an epoch – since the mid 1970’s – with a higher rainfall mean and greater variance than seen over the earlier period of the 20th century.


      Set against a backdrop of intense variability over decades to millennia. With a dry period starting at the beginning of the 20th century that is unusual but not unprecedented.

      These are almost current quarterly emissions. 100% wind and solar by 2030 is not a panacea. The land use sector is of interest. We have succeeded to date in reducing emissions by locking up land – private and public – by Kyoto inspired and misguided land clearing laws. I have driven through a great deal of eastern and northern Australia over many years. Woodland has changed to woody weeds over vast swathes of the country with ecological impacts and much higher fire hazard potential. We are not doing nearly enough cool season, low intensity, mosaic cultural burning.

      Mullins now says that we are burning national parks and forest reserves?

      “Fuel reduction burning is being constrained by a shortage of resources in some states and territories and by a warming and drying weather cycle, which acting in concert reduce the number of days on which fuel reduction burning can be undertaken. Of all the factors which contribute to the intensity of a fire (temperature, wind speed, topography, fuel moisture and fuel load), only fuel load can be subject to modification by human effort. Fire is an essential ecological factor, which has an important and ongoing role in maintaining biodiversity and ecological processes in Australian forests and woodlands.

      As a key element in mitigating the effects of future fires, benchmarks need to be developed for funding requirements of fire, emergency and land management agencies by states and territories so that they can conduct increased, targeted fuel reduction works, and have operational capabilities (people, equipment, infrastructure)…” Greg Mullins

      There is a better strategy in place with a target of a 50% reduction in per capita emissions by 2030. Not enough? Are we supposed to go 100% wind and solar by 2030 and how much emissions would that stop? Should we stop supplying cheap, low sulphur, dry black coal to India from an Indian owned coal mine? That they have every right to burn under their Paris commitments?


      But then there is a poisonous environment of climate activist cant to navigate.

  53. Fires always continue well into summer, that’s a fact, not a prediction.
    Areas burn yet recover and return every 20–40 years; that’s a fact.
    Rain in the next few months? Beyond prediction.
    We always worry about rain when it is drought and rain when it is flooding.
    It will rain when it wants to.

  54. Willis has a good post up on Top and Bottom of the Atmosphere at WUWT

    It stirred up my illogic circuits, not hard to do as we all know
    He quoted “ Ramanathan pointed out that the poorly-named “greenhouse effect” can be measured as the amount of longwave energy radiated upwards at the surface minus the upwelling longwave radiation at the top of the atmosphere, viz:
    The greenhouse effect. The estimates of the outgoing longwave radiation also lead to quantitative inferences about the atmospheric greenhouse effect. At a globally averaged temperature of 15°C the surface emits about 390 W m−2, while according to satellites, the long-wave radiation escaping to space is only 237 W m−2. Thus the absorption and emission of long-wave radiation by the intervening atmospheric gases and clouds cause a net reduction of about 150 W m−2 in the radiation emitted to space. “
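The numbers in the quote can be checked with the Stefan-Boltzmann law; a quick sketch (the 237 W/m² outgoing figure is taken from the quote, not computed):

```python
# Verify the quoted ~390 W/m^2 surface emission via the Stefan-Boltzmann law.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURFACE = 288.0  # about 15 degrees C, the quoted global mean, in kelvin

surface_emission = SIGMA * T_SURFACE**4  # blackbody emission at 288 K
olr = 237.0                              # quoted satellite OLR, W m^-2

greenhouse_effect = surface_emission - olr
print(round(surface_emission))   # 390
print(round(greenhouse_effect))  # 153, i.e. "about 150 W/m^2"
```

So the ~150 W/m² figure is simply the difference between surface emission and outgoing longwave at the top of the atmosphere.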

    Ramanathan does say that as Willis quoted.
    There seems some important kind of disconnect here.
    Probably my poor parsing of English.
    Or a basic misunderstanding of the science involved.
    Happens at times. Not new.
    I need someone to explain the multiple areas where I am getting things wrong.
    I am sure there will be no shortage (of volunteers).

    So

    The TOA EEI is poorly known: calculated in part from models, adjusted heavily because it does not fit the warming narrative, and presented as a real figure in the range of < 2 W/m² a year with an error range much greater.
    The 150 W/m² figure is taken as constant warming when at any given time the atmosphere is not storing any extra energy; it is at that energy purely because that is how hot it has to be to radiate out the heat that comes in.
    No storage.
    The energy diagram suffers from being an average not a day night picture showing the tremendous outpouring of heat during the day
    Where is this mysterious 150 W/m² on the night side?
    Not there at all, because now there is no input.
    Where is this 600 W/m² (hyperbole) on the dayside at midday, directly underneath?
    Forget the eggs cooking on the footpath.
    With all that energy roaring through the CO2 we should have an oven melting cars and burning houses.
    But we don’t.
    The temperature of the air is controlled by the level of GHG.
    There is no storage cooking us.
    What comes in goes out.
    The TOA goes much higher in the day so we do not cook and comes in to probably 10 km in the cold bits at night.
    Nobody measures a TOA boundary by satellite.
    They calculate the heat in and find where it matches heat out.
    Both are difficult for different reasons; atmospheric water at low levels wrecks satellite assessments, as do clouds.
    Then they add in ocean heat, the models add in a CO2 rise adjustment factor, and they calculate it all into one unreal average boundary distance or radius.
    Which, not surprisingly, is the distance from the earth (on average) at which a hot body radiating back the absorbed and re-emitted energy of the sun would sit.
    You do not need any of that.
    You just put in the energy received from the sun.
    Size of sphere to radiate it out.
    Bingo TOA
    100 km out from earth.

    If this is the science Ramanathan is doing there are a lot of Emperors out there without clothes.

    Nice to have a rant. Nicer to be set straight.

    • No clue what you’re on about, but 30% of the 340 SW is reflected back to space before it can be absorbed. So the absorbed SW, energy in, is 238. Or, an imbalance in your numbers of 1.

      Do you want the 237, energy out, to be less so the world can end in fire or more so it can end in ice?

      • JCH | January 24, 2020
        “No clue what you’re on about”‘

        Willis at WUWT ” Ramanathan’s estimate of the size of the greenhouse effect was “about 150 W/m2” and modern CERES data shows a number very close to that, 158 W/m2″
        At a globally averaged temperature of 15°C the surface emits about 390 W m−2, while according to satellites, the long-wave radiation escaping to space is only 237 W m−2. Thus the absorption and emission of long-wave radiation by the intervening atmospheric gases and clouds cause a net reduction of about 150 W m−2 in the radiation emitted to space.

        the absorbed SW, energy in, is 238. or 237
        the 237, energy out,
        Your words.
        So how does Ramanathan claim
        “a net reduction of about 150 W m−2 in the radiation emitted to space”

        If it is all going back out, how can he constantly chip off 150 W/m² per second, minute, hour, day or year and claim it is all going to the atmosphere when it is going back into space?

        Note what happens when the sun comes up.
        Basic physics says CO2 level X gives Y warming and it heats up instantly.
        Some energy absorption.
        But then it sits there all day not gaining any more energy. Not needing it to keep warm. It is quite happy at this new radiating temp as long as energy goes in and out.
        Night comes and it almost instantly drops as the energy supply disappears for 12 hours.
        It is not like it is a battery retaining 150 W/m² during the night as the lesser but still real fluxes pass through it.

        He seems to have confused energy flow with an energy state.

        Angech: ”At a globally averaged temperature of 15°C the surface emits about 390 W m−2, while according to satellites, the long-wave radiation escaping to space is only 237 W m−2″.

        When we consider planet reflecting as a disk we have very confusing results.

        Because Earth is a sphere and Earth reflects solar incident radiation as a sphere and not as a disk.

        http://www.cristos-vournas.com

  55. There is welcome light to moderate rainfall across the north and west. And a Southern Ocean front moving across the south.

    http://www.bom.gov.au/products/national_radar_sat.loop.shtml

    Seasonal forecasts are probabilities based on ocean and atmospheric indices. Despite IP – they show a potential easing of dry conditions in the coming weeks and months.

    http://www.bom.gov.au/climate/outlooks/#/overview/influences

  56. Ireneusz Palmowski

    Why does the satellite receive less long-wave radiation from clouds than from a cloudless surface?

    • Radiant heat radiated from the molecules that make up the cloud depends on the temperature of those molecules, which is a lot colder than the surface of the earth.

  57. Ireneusz, on the first image it is a monthly average for December 2019.
    On the second image it is daytime for January, 24, 2020.

    During the daytime there should be stronger IR surface emission because, when solar-irradiated, the surface develops its emission temperature at the instant and on the spot.
    Of course there should be less long-wave radiation from clouds than from a cloudless surface, but how to measure the difference?
    On any given day half of the Earth’s surface is cloud covered.

    http://www.cristos-vournas.com

    • Ireneusz Palmowski

      During La Niña global cloudiness decreases (cooler ocean) so OLR at the top of the atmosphere must increase.

    • Ireneusz and Christos, I appreciate your analysis on clouds. I’m interested in the relationship of cloud effects on climate temperature within the paleo record, though I don’t have the competencies required to describe anything about this other than having some superficial understanding of the basics; albedo, etc. Contemporary science knows half the Earth is covered with clouds in any given day, because recorded satellite data makes this evident, and we have decades worth of this sort of data to date. I imagine, based on the lack of controversy, that average cloud density hasn’t wiggled all that much within the brief record representative of satellite data.

      Science’s understanding of contemporary cloud cover is one piece of the puzzle to understanding its effects on temperature, but how much of this translates to what is known relative to the paleo record? Surely there are differences within the paleo record for what cloud cover must have looked like at peaks and troughs just before profound turns in temperatures? As temperature increases, common sense logic dictates this would lead to more evaporation; more moisture in the atmosphere equates to more cloud albedo; inversely, cold makes the atmosphere more arid, thus less cloud albedo, but more ice albedo. Perhaps ice core layers going back 800k years provide some understanding of year/year relationships to temperature (much in the way tree rings illustrate), but only if the data is granular enough; is it that granular? Are the studies conclusive to date? I assume they must be, given the “science is settled” mantra, but I’m truly skeptical of that claim. I don’t know what questions remain; do coarse data points, deep in the record, truly provide conclusive evidence, or only partial evidence? Do pressures under deep ice skew observations? Perhaps all these are elementary questions, but I search for authoritative conclusions, and can find none.

      Is there a summary about clouds that’s conclusive based on a “science is settled” conclusion?

    • jungletrunks: ” As temperature increases, common sense logic dictates this would lead to more evaporation, more moisture in the atmosphere equates to more cloud albedo; inversely, cold makes the atmosphere more arid, thus less cloud albedo, but more ice albedo”.

      I think I have the answer on that part.
      The Earth’s system has only negative feedbacks.
      As temperature increases, this would lead to more evaporation, more moisture in the atmosphere equates to more cloud albedo. it is a negative feedback.
      Cold makes the atmosphere more arid, thus less cloud albedo.

      The Arctic sea ice has a warming and not a cooling effect on the Global Energy Balance.
      It is true that the sea ice has a higher reflecting ability. It happens because ice and snow have higher albedo.
      But at very high latitudes, where the sea ice covers the ocean there is a very poor insolation.
      Thus the sea ice’s higher reflecting ability doesn’t cool significantly the Earth’s surface.
      On the other hand there is a physical phenomenon which has a strong influence in the cooling of Earth’s surface. This phenomenon is the differences in emissivity.
      The open sea waters have emissivity ε = 0.95.
      The ice has emissivity ε = 0.97.
      On the other hand, the snow has a much lower emissivity ε = 0.8.
      And the sea ice is a snow-covered sea ice with emissivity ε = 0.8.
      https://www.thermoworks.com/emissivity-table
      Also we should take into consideration the physical phenomenon of the sea waters’ freezing-melting behavior.
      Sea waters freeze at −2.3 °C.
      Sea ice melts at 0 °C.
      The difference between the melting and the freezing temperatures creates a seasonal time delay in covering the arctic waters with ice sheets.
      When forming, the sea ice gets thicker from the colder water’s side.
      When melting, the sea ice gets thinner from the warmer atmosphere’s side.
      This time delay enhances the arctic waters’ IR emissivity and heat losses towards space because of the open waters’ higher emissivity ε = 0.95,
      compared with the snow-covered ice ε = 0.8.
      Needs to be mentioned that Earth’s surface emits IR radiation 24/7 all year around.
      And the Arctic region insolation is very poor even in the summer.
      That is why Arctic sea ice has a warming and not a cooling effect on the Global Energy Balance.
      On the other hand it is the open Arctic sea waters that have the cooling effect on the Global Energy Balance.

      Feedback refers to the modification of a process by changes resulting from the process itself. Positive feedbacks accelerate the process, while negative feedbacks slow it down.

      The Arctic sea ice has a warming and not a cooling effect on the Global Energy Balance. It is a negative feedback.
      The melting Arctic sea ice slows down the Global Warming trend. This process appears to be a negative feedback.

      http://www.cristos-vournas.com
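For illustration, the emissivity figures above translate into emitted IR via P = εσT⁴. A minimal sketch, assuming (unrealistically, just to isolate the emissivity effect) the same radiating temperature for open water and snow-covered ice:

```python
# Emitted IR power per unit area, P = emissivity * sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 271.0        # near the seawater freezing point, in kelvin (assumed for both)

open_water = 0.95 * SIGMA * T**4  # open sea water, emissivity 0.95
snow_ice = 0.80 * SIGMA * T**4    # snow-covered sea ice, emissivity 0.80

# At the same temperature the ratio is exactly 0.95/0.80 = 1.1875,
# i.e. open water emits ~19% more IR than snow-covered ice.
print(open_water, snow_ice, open_water - snow_ice)
```

In reality the two surfaces are at different temperatures, which is part of the commenter's point about the freezing/melting time delay; the sketch only isolates the emissivity term.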

      • Christos, I appreciate your informative thoughts, though they’re tangential to my query about what’s yet unknown about the science of clouds, and their effects on temperature; in the context of projecting cloud behavior in a changing atmosphere, at the turning points in temperature similar to that demonstrated in the paleo record?

  58. Alberto Zaragoza Comendador

    In case people are still following this thread, or have read the exchanges I had with Atomsk’ above, I’ve published a detailed analysis of the IPCC’s First Assessment Report, specifically Scenario A:
    View at Medium.com

    The key takeaways are:
    -The First Assessment Report probably underestimated CO2 emissions in 1991-2018. This was more than offset by a major over-estimate of the airborne fraction (61% in FAR, about 45-48% in reality). Thus concentrations in FAR’s Business-as-usual scenario grew 25% more than in reality.
    -The methane forecast was so wrong it’s hard to believe in retrospect; concentrations in FAR’s forecast grew 5-6 times more than in reality. In fact even under Scenario B concentration growth was about 2.5 times greater than in the real world.
    -While FAR did over-estimate future forcing from Montreal Protocol gases, this was partly offset by its omission of tropospheric ozone forcing.

    All in all, the bulk of FAR’s over-estimation of future forcings stems from purely scientific errors and omissions, not from an inability to predict human behaviour and emissions.

  59. Gregory JEVARDAT

    Typo in the introduction ?
    ” Using a more valid regression model helps explain why their findings aren’t inconsistent with Lewis and Curry (2018) which did show models to be inconsistent with observations.”

    should be

    ” Using a more valid regression model helps explain why their findings are inconsistent with Lewis and Curry (2018) which did show models to be inconsistent with observations.

  60. The idea that “peer review” is a gold stamp of unassailable credibility was itself incredible even when I was a doctoral student in economics in the mid-1970s. Anything that challenged conventional wisdom, used new or unconventional methods and data, or was not submitted by someone from the “club” of approved institutions was all too often rejected by top tier journals, often with little or no useful comments. Now we see new barriers to the pursuit of scientific truth (emphasis here on PURSUIT): political ideology and social agendas. Nowhere is this more clear than in what we now call “climate science.” Any research that challenges the AGW model must not only be suppressed, but its author must be ridiculed. The only real “consensus” is the consensus of conformity and unwavering allegiance to the “cause.” There are however some hopeful signs of rebellion: a steady trickle of new papers challenging the AGW model, and blog sites like Judith Curry’s. If history is a guide, with the passage of time, the back and forth of real science will prevail.

  61. Noted elsewhere, regarding McKitrick:

    “McKitrick selectively adjusts for volcanic and ENSO effects in the observational analyses, but not Hansen et al. 1988’s (H88’s) model-based projections. His adjustment leaves out, for instance, the cooling effect of drops in total solar irradiance.”
    https://judithcurry.com/2020/01/17/explaining-the-discrepancies-between-hausfather-et-al-2019-and-lewiscurry-2018/#comment-907893

    Tamino, a.k.a. Grant Foster, recently made a blogpost illustrating the latter point (if folks think it’s OK for McKitrick to do a blog analysis, then they could hardly complain if I cite one):

    https://tamino.wordpress.com/2020/01/16/2-of-global-warming-since-1979-due-to-other-things/

    Tamino shows that for the 1979 – 2019 time-period, the NASA GISTEMP analysis has the following warming trends (in K per decade) with and without adjustments that remove the effect of changes in total solar irradiance (TSI), volcanic eruptions, and ENSO:

    unadjusted : 0.19
    adjusted : 0.19


    Tamino’s TSI adjustment has a warming effect on the adjusted trend, since TSI declined overall during the 1979–2019 period (i.e. changes in TSI have a cooling effect on surface temperatures over 1979–2019). The ENSO adjustment also had a warming effect, while the volcanic adjustment has a cooling effect, leading to a negligible net effect when the three adjustments are combined.

    Yet McKitrick claims to reduce the 1988–2017 warming trend for the Cowtan+Way analysis from 0.20 to 0.15 by correcting for volcanic eruptions and ENSO. So that makes me suspicious of McKitrick’s adjustment, especially with respect to what would happen if he included TSI in the adjustment. I suspect a TSI adjustment would at least partially offset the cooling effect of his volcanic adjustment. It’s telling that McKitrick didn’t include TSI, since it’s commonly included in the peer-reviewed literature.

    Other sources on TSI adjustments and their warming effect for the post-1979 time-period:
    10.1088/1748-9326/6/4/044022 , 10.1126/sciadv.aao5297 , 10.1175/JCLI-D-18-0555.1 , 10.1175/JCLI-D-16-0704.1 , 10.1029/2008GL034864
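The kind of adjustment being argued over here (regressing temperature on time plus natural covariates, so that the coefficient on time becomes the "adjusted" trend) can be sketched on synthetic data. Everything below is an invented surrogate: the ENSO, volcanic and TSI series are made up to illustrate the method, not to reproduce anyone's result:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # years of synthetic annual data
t = np.arange(n, dtype=float)

# Invented surrogate covariates (NOT real indices):
enso = rng.standard_normal(n)                # ENSO-like index
volcanic = np.zeros(n); volcanic[12] = -1.0  # a single eruption spike
tsi = np.sin(2 * np.pi * t / 11.0)           # 11-year solar-cycle-like wiggle

true_trend = 0.019  # K/yr, the underlying trend we build in
temp = (true_trend * t + 0.10 * enso + 0.30 * volcanic
        + 0.05 * tsi + 0.05 * rng.standard_normal(n))

# Multiple regression of temperature on [1, t, ENSO, volcanic, TSI];
# the coefficient on t is the covariate-adjusted trend.
X = np.column_stack([np.ones(n), t, enso, volcanic, tsi])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
adjusted_trend = coef[1]

naive_trend = np.polyfit(t, temp, 1)[0]  # simple fit ignoring the covariates
print(adjusted_trend, naive_trend)
```

Whether the adjusted trend moves up or down relative to the naive one depends entirely on how the covariates happen to align with time over the sample, which is the crux of the disagreement above about including or omitting TSI.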

  62. Atomsk’s Sanakan (@AtomsksSanakan) | February 5, 2020 at 12:08
    “ Yet McKitrick claims to reduce the 1988 – 2017 warming trend for the Cowtan+Way analysis from 0.20 to 0.15 by correcting for volcanic eruptions and ENSO. So that makes me suspicious on McKitrick’s adjustment, especially with respect to what would happen if he included TSI in the adjustment. I suspect a TSI adjustment would at least partially offset the cooling effect of his volcanic adjustment. It’s telling that McKitrick didn’t include TSI“

    If you read Tamino more closely you would have seen him say,
    “The solar influence isn’t very big, and its trend is truly tiny, only about -4% of the observed trend. It’s not because the sun can’t affect Earth’s climate, it’s just that the sun’s output hasn’t changed that much.”
    I.e. McKitrick has no need to include it.
    Bindidon also pointed out to Tamino that McKitrick’s analysis was valid; you may have missed that as well.

    Tamino himself also made an elementary oversight when he wrote,
    “When we do subtract the “other factors” estimate from the observed temperature data, we get “adjusted” data that better reflects the man-made part of climate change, now that many of the natural fluctuation factors are removed: It shows a lot less fluctuation than the raw data, and promises us a better estimate of humanity’s influence on the trend, heating up at a rate of 0.0192 (± 0.0016) °C/yr. The part of global warming not due to those “other things” (which is, for the most part, the human influence) is a tiny bit bigger even than we’ve seen. Do note, however, that when it comes to trend the difference is not “statistically significant.”


    What he forgot was that subtracting data estimates of other factors, a factor in itself, will not and cannot show less fluctuation than the raw data.

    The raw data must retain the original fluctuations throughout any added modification except for one purposely designed to smooth a graph.

    His exercise was not one of smoothing the graph but of adding a second set of figures (data) said to represent the estimated over time excursions of the three factors.

    As Tamino knows, adding a constant or adding white noise will change but not smooth the signal. The anomalies persist.

    The estimated natural fluctuation factors were not removed to get a smoother graph, just to estimate a trend, which is an entirely different mathematical kettle of fish.

  63. angech | February 23, 2020
    Your comment at Tamino’s is awaiting moderation.

    “Zeke Hausfather ” After a year of work our paper on evaluating performance of historical climate models is finally out! We found that 14 of 17 the climate projections released between 1970 and 2001 effectively matched observations after they were published”

    Great news. The earlier ones showed the 1997 El Nino peak and all of them agreed with the “pause” on 1998-2008.
    I do not know what all the fuss was about then if we already knew it was going to happen.
    Thanks Zeke.