Two more degrees by 2100!

by Vaughan Pratt

An alternative perspective on 3 degrees C?

This post was originally intended as a short comment questioning certain aspects of the methodology in JC’s post of December 23, “3 degrees C?”. But every methodology is bound to have shortcomings, raising the possibility that Judith’s methodology might nevertheless be best possible, those shortcomings notwithstanding. I was finding my arguments for a better methodology getting too long for a mere comment, whence this post. (But if actual code is more to your fancy than long-winded natural language explanations, Figures 1 and 2a can be plotted with only 31 MATLAB commands.)

Judith’s starting point is “It is far simpler to bypass the attribution issues of 20th century warming, and start with an early 21st century baseline period — I suggest 2000-2014, between the two large El Nino events.” The tacit premise here would appear to be that those “attribution issues of 20th century warming” are harder to analyze than their 21st century counterparts.

The main basis for this premise seems to be the rate of climb of atmospheric CO2 this century. This is clearly much higher than in the 20th century and therefore should improve the signal-to-noise ratio when the signal is understood as the influence of CO2 and the noise consists of those pesky “attribution issues”. Having used this SNR argument myself in this forum a few years ago, I can appreciate its logic.

Judith also claimed that “The public looks at the 3 C number and thinks it is 3 C more warming from NOW, not since the late 19th century.  Warming from NOW is what people care about.” Having seen no evidence either for this or its contrary, I propose clarifying any such forecast by simply prepending “more” to “degrees” (as in my title) and following Judith’s suggestion to subtract 1, or something more or less equivalent.

Proposal

So what would be an “obviously alternative” methodology? Well, the most extreme alternative I can think of to 15 years of data would be to take the whole 168 years of global annual HadCRUT4 to 2017.

The data for 1850-1900 is certainly sparser than the data that follows. What that observation does not address, however, is the extent to which that sparseness compromises the final analysis. By including that data instead of dismissing it out of hand, we have a better chance of understanding that extent.

Besides increasing by an order of magnitude the duration of the data constraining the priors, another modification we can make is to the target. Instead of taking the goal to be estimating climate for 2100, perhaps plus or minus a few years, I suggest estimating an average, suitably weighted, over the 75 years 2063-2137.

This widening of the window has the effect of trading off precision in time for precision in temperature, as a sort of climate counterpart to the uncertainty principle in quantum mechanics. More generally this is a tradeoff universal to the statistics of time series: the variance of the estimate of the mean tends to be inversely proportional to the sample length.
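This tradeoff is easy to demonstrate numerically. The Python sketch below (mine, not part of the post’s MATLAB script, and using white noise rather than real climate data) checks that quadrupling the averaging window roughly quarters the variance of the estimated mean. Autocorrelated series such as temperature improve more slowly than this, which is part of why so wide a window is wanted.

```python
import numpy as np

rng = np.random.default_rng(0)

def var_of_mean(n, trials=5000):
    """Monte Carlo variance of the mean of n iid standard-normal samples."""
    means = rng.standard_normal((trials, n)).mean(axis=1)
    return means.var()

v25 = var_of_mean(25)    # expect about 1/25
v100 = var_of_mean(100)  # expect about 1/100
print(v25 / v100)        # close to 4: four times the window, a quarter the variance
```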

This wide a window has the further benefit of averaging out much of the bothersome Atlantic Multidecadal Oscillation. And its considerable width also averages out all the faster periodic and quasiperiodic contributors to global land-sea surface temperature such as ENSO, the 11-year solar cycle, the 21-year magnetic Hale cycle, the ongoing pulses from typical volcanoes, etc.

But of what use is a prediction for 2063-2137 if we can’t use it to predict say the extent of sea ice in the year 2100? Well, if we can show at least that the average over that or any other period is highly likely to lie within a certain range, it becomes reasonable to infer that roughly half the years in that period are lower and half are higher. So even though we can’t say which years those would be, we can at least expect some colder years and some warmer years, relative to the average over that period. Those warmer years would then be the ones of greatest concern.

A 75-year moving average of HadCRUT4 would be the straightforward thing to do. Instead I propose applying two moving averages consecutively (the order is immaterial), of respectively 11 and 65 years, and then centering. This is numerically equivalent to a wavelet transform that convolves HadCRUT4 with a symmetric trapezoidal wavelet of width 75 years at the bottom and 55 years at the top. The description in terms of the composition of two moving averages makes it clearer that this particular wavelet is targeting the AMO and the solar cycle for near-complete removal. After much experimenting with alternative purpose-designed convolution kernels as wavelets I settled on this one as offering a satisfactory tradeoff between simplicity of description, effectiveness of overall noise suppression, transparency of purpose, and width—a finite impulse response filter much wider than 75 years doesn’t leave much signal when there’s only 170 years of data. Call climate thus filtered centennial climate.
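The equivalence claimed above is easy to verify: convolving the 11-point and 65-point box kernels yields exactly the trapezoid, 75 wide at the bottom and 55 at the top. A Python sketch (the function name is mine; the post’s MATLAB script will differ in detail):

```python
import numpy as np

# Compose the 11-year and 65-year moving averages into a single kernel.
kernel = np.convolve(np.ones(11) / 11, np.ones(65) / 65)

assert len(kernel) == 75              # 11 + 65 - 1: width at the bottom
flat = kernel[10:65]                  # 65 - 11 + 1 = 55 points of flat top
assert np.allclose(flat, 1 / 65)      # constant weight across the top
assert np.isclose(kernel.sum(), 1.0)  # weights sum to 1, so means are preserved

def centennial(x):
    """75-year trapezoidal filter followed by centering, c(d) = d - mean(d)."""
    y = np.convolve(x, kernel, mode="valid")
    return y - y.mean()
```

Because centering subtracts the mean, applying it twice changes nothing, matching the idempotence property c(c(d)) = c(d).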

The point of centering is to align plots vertically, without which they may find themselves uselessly far apart. The centering function we use is c(d) = d – mean(d). This function merely subtracts the mean of the data d from d itself in order to make mean(c(d)) = 0. Hence c(c(d)) = c(d) (c is idempotent).

Lastly I propose 1.85 °C per doubling of CO2 as a proxy for HadCRUT4’s immediate transient climate response to all anthropogenic radiative forcings, ARFs, since 1850. The CO2 record for this proxy is reconstructed from ice cores at the Law Dome site in the Australian Antarctic Territory up to 1960, and measured more directly at Charles Keeling’s CO2 observatory on Mauna Loa thereafter, giving the formula ARF = 1.85*log₂(CO2) for all anthropogenic radiative forcing. The proof is in the pudding: it seems to work.
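As a sketch, the proxy is a one-liner (Python here rather than the post’s MATLAB; the 280 ppm baseline and today’s roughly 410 ppm are figures used later in the post):

```python
import numpy as np

def arf(co2_ppm, tcr=1.85):
    """ARF proxy in deg C: 1.85 * log2(CO2), per the formula above."""
    return tcr * np.log2(co2_ppm)

# Implied warming between a 280 ppm preindustrial level and ~410 ppm today:
print(round(arf(410) - arf(280), 2))  # about 1.02 deg C
```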

Results

Applying our centennial filter to HadCRUT4 yields the blue curve in Figure 1, while applying it to ARF (anthropogenic radiative forcing as estimated by our proxy) yields the red curve.

The two plots ostensibly covering the 30-year period 1951-1980 actually use data from the 104-year period 1914-2017; e.g. the datapoint at 1980 is a weighted average of data for the 75 years 1943-2017 while that at 1951 similarly averages 1914-1988. In this way all the data from 1850 to 2017 is used.

During 1951-1980 and 1895-1915 the two curves are essentially parallel, justifying the value 1.85 for recent and early transient climate response. But what of the relatively poor fit during 1915-1950?

Explaining early 20th century

We could explain the departure from parallel during 1910-1950 as simply an underestimate of TCR. However the distribution of CO2 absorption lines suggests that TCR should remain fairly constant over a wide range of CO2 levels. An explanation accommodating that point might be that the Sun was warming during the first half of the century.

To see if that makes sense we could plot the residual of the above figure against solar irradiance. While there used to be several reconstructions of total solar irradiance prior to satellite-based measurements, I’m only aware of two these days, due respectively to Greg Kopp (a frequent collaborator with Judith Lean) and Leif Svalgaard. Both are based on several centuries of sunspot data collected since Galileo began recording them, along with other proxies. The following comparison uses Kopp’s reconstruction.


It would appear that the departure from parallel in the middle of Figure 1 can be attributed almost entirely to solar forcing SF, defined as centennial solar sensitivity times absorbed solar irradiance (ASI), the absorbed portion of the total solar irradiance (TSI) received at top of atmosphere (TOA). The albedo (taken here to be 0.3) is the part of TSI reflected back to space as shortwave radiation; the remaining 70% is the portion absorbed by Earth. This is then averaged over Earth’s surface, which at 4πr² is four times the cross section πr² intercepting the solar irradiance at TOA, whence the division by 4. That is, ASI = (1 – Albedo)*TSI/4. Lastly ASI (in W/m²) is converted to solar forcing SF (in °C) by multiplying by centennial solar sensitivity CSS (1.35 °C per W/m², as estimated from Kopp’s reconstruction).
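The chain from TSI to solar forcing can be written out as a short Python sketch (in practice it would be applied to centered TSI anomalies, since all the filtered curves here are centered; the constant and function names are mine):

```python
ALBEDO = 0.3  # fraction of TSI reflected straight back to space
CSS = 1.35    # centennial solar sensitivity, deg C per W/m^2 (Kopp-based value)

def solar_forcing(tsi):
    """TSI at TOA (W/m^2) -> ASI = (1 - Albedo)*TSI/4 -> SF in deg C."""
    asi = (1 - ALBEDO) * tsi / 4  # absorbed, spread over 4x the cross section
    return CSS * asi

# A 1 W/m^2 swing in TSI translates to roughly a quarter degree of forcing:
print(round(solar_forcing(1.0), 3))  # 0.236
```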

It is almost impossible to evaluate the goodness of this fit by looking just at Figure 1 and the red curve in Figure 2a. The residual (blue curve in 2a) needs to be plotted, and then juxtaposed with the red curve.

Any fit this good implies a high likelihood of four things.

  1. The figure of 1.85 for TCR holds not only on the right and left but the middle as well.
  2. CO2 is a good proxy for all centennial anthropogenic radiative forcing including aerosols.
  3. The filter removes essentially everything except HadCRUT4, ARF, and solar irradiance.
  4. The peak-to-peak influence on GMST of the evident 130-year oscillation in TSI is 0.07*5/3 = 0.12 °C. (The centennial filter attenuates the 130-year oscillation to 3/5 of its amplitude, compensated for by multiplying by 5/3 to estimate the actual amplitude.) Not only is the Sun not a big deal for climate, that 130-year oscillation makes its influence predictable several decades into the future.
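The attenuation figure cited in point 4 can be checked by running a unit-amplitude 130-year sinusoid through the trapezoidal filter. A Python sketch (the exact ratio comes out near 0.63, which the post rounds to 3/5):

```python
import numpy as np

kernel = np.convolve(np.ones(11) / 11, np.ones(65) / 65)  # 75-year trapezoid

t = np.arange(2000)                    # annual samples, long enough for many cycles
x = np.sin(2 * np.pi * t / 130)        # the 130-year oscillation, amplitude 1
y = np.convolve(x, kernel, mode="valid")

attenuation = (y.max() - y.min()) / 2  # amplitude surviving the filter
print(round(attenuation, 2))           # ~0.63, i.e. roughly 3/5
```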

As a check on Kopp’s reconstruction we can carry out the same comparison based on Leif Svalgaard’s reconstruction, leaving TCR and the residual completely unchanged.

On the one hand Svalgaard’s reconstruction appears to have assigned weights to sunspots of only 70% those of Kopp, requiring a significantly larger solar sensitivity (1.95) to bring it into agreement with the residual. On the other hand the standard deviation of the residual for Figure 2b (GMST – ARF – SF) is 2.3 mK while that for 2a is 3.7 mK, so by that measure Svalgaard’s reconstruction actually gives the better fit.

Both fits are achieved with TCR fixed at 1.85. We were able to find a tiny improvement by using 1.84 for one and 1.86 for the other, but this reduced the standard deviations of the residuals for Figures 2a and 2b by only microkelvins, demonstrating the robustness of 1.85 °C per doubling of CO2 as an ARF proxy.

The MATLAB script producing figures 1 and 2a,b from data sourced solely from the web at every run is in the file curry.m at http://clim8.stanford.edu/MATLAB/ClimEtc/.

I would be very interested in any software providing comparably transparent and compelling evidence for a substantially different TCR from 1.85, based on the whole of 1850-2017, and independent of any estimates of AMO and other faster-moving “attribution issues”.

Projection to 2063-2137

Regarding Is RCP8.5 an impossible scenario?, I prefer to think of it as a highly unlikely scenario. Not because Big Oil is on the verge of exhausting its proven reserves however, but because of its strange compound annual growth rate when computed in MATLAB as diff(rcp)./rcp(1:end-1)*100.
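For readers who prefer Python to MATLAB, the same CAGR computation reads as follows (the `synthetic` series is my stand-in for illustration; I have not reproduced the actual RCP8.5 concentrations here):

```python
import numpy as np

def cagr_percent(series):
    """Year-on-year growth in percent: the analogue of
    MATLAB's diff(rcp)./rcp(1:end-1)*100."""
    series = np.asarray(series, dtype=float)
    return np.diff(series) / series[:-1] * 100

# Sanity check on a series growing at exactly 2% per year:
synthetic = 400 * 1.02 ** np.arange(10)
print(cagr_percent(synthetic))  # every entry very close to 2.0
```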


If that had been a stock market forecast one would suspect insider trading: something is going to happen around 2065 that will cause an abrupt reversal of climbing CAGR when it hits 1.2%, but the lips of the RCP8.5 community are sealed as to what it will be. Or perhaps 2060 is when their in-house psychologists are predicting a popular revolution against Big Oil.

Well, whatever. RCP8.5 is just too implausible to be believed.

Is any projection of rising CO2 plausible? Let me make an argument for the following projection.

Define anthropogenic CO2, ACO2, as excess atmospheric CO2 above 280 ppm. The following graph plots log₂(ACO2) since 1970. We can think of log₂(ACO2) as the number of doublings since ACO2 was 1. However the ±5 ppm variability in CO2 over its thousand-year history makes ACO2=1 a rather virtual notion.

Log₂(ACO2) was pretty straight during the past century, and has gotten even straighter this century. It reveals a compound annual growth rate for ACO2 of just over 2%.

What could explain its increasing straightness?

One explanation might be that 2% is what the fossil fuel industry’s business model requires for its survival.

Will this continue?

The argument against is based on speculations about supply: the proven reserves can’t maintain 2% growth for much longer, the best efforts of the fossil fuel industry notwithstanding.

The argument for is based on speculations about demand: even if some customers stop buying fossil fuels for some reason, there will be no shortage of other customers willing to take their place, thereby maintaining sufficient demand to justify the oil companies spending whatever it takes to maintain proven reserves at the requisite level for good customer service, at least to the end of the present century. Proven reserves have been growing throughout the 20th century and on into this one, and speculation that this growth is about to end is just that: pure speculation with no basis in fact. The date for peak oil is advancing at about the same pace as the date for fusion energy break-even.

There is a really simple way to see which argument wins. Just keep monitoring log₂(ACO2) while looking for a departure from the remarkably straight trend to date. Any significant departure would signal failure to continue and the argument against wins. But if by 2100 no such departure has been seen, the argument for wins, though few if any adults alive today will live to see it.

Today CO2 is at about 410 ppm, making ACO2 130 ppm. If the straight line continues, that is, if ACO2 continues to double every 34 years, two more doublings (multiplication of 130 by 4) bring the date to 2019 + 34*2 = 2087 and the CO2 level in 2087 to 130*4 + 280 = 800 ppm. Another 13 years is another factor of 2^(13/34) = 1.3, making the CO2 in 2100 130*4*1.3 + 280 = 956 ppm.
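The arithmetic above can be packaged as a one-line extrapolation (a Python sketch using the post’s constants: 130 ppm of ACO2 in 2019, a 34-year doubling time, and a 280 ppm baseline):

```python
def co2_projection(year, base_year=2019, aco2_now=130.0, doubling_years=34.0):
    """Projected CO2 in ppm, assuming ACO2 keeps doubling every 34 years."""
    aco2 = aco2_now * 2 ** ((year - base_year) / doubling_years)
    return 280.0 + aco2

print(round(co2_projection(2087)))  # two full doublings: 130*4 + 280 = 800 ppm
print(round(co2_projection(2100)))  # ~958 ppm; the post's 956 rounds 2^(13/34) to 1.3
```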

If the 1.85 °C per doubling of CO2 that has held up for 168 years continues for another 80 years, then we could expect a further rise in CO2 from today’s 410 ppm to 956 ppm to be accompanied by a rise in global mean surface temperature (land and sea together) of 1.85*log₂(956/410) = 2.26 °C.

Per decade, this comes to an average of 2.26/8 = 0.28 °C (0.51 °F) per decade. This is merely an average over those 80 years: some decades will rise more than that, some less.
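And the corresponding temperature arithmetic, in Python with the same constants as the post:

```python
import math

TCR = 1.85  # deg C per doubling of CO2, the proxy sensitivity used throughout

def further_warming(co2_future_ppm, co2_now_ppm=410.0):
    """Additional GMST rise implied by TCR and a projected CO2 level."""
    return TCR * math.log2(co2_future_ppm / co2_now_ppm)

rise = further_warming(956.0)
print(round(rise, 2))      # 2.26 deg C more by 2100
print(round(rise / 8, 2))  # 0.28 deg C per decade, averaged over 80 years
```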

But what if Figure 4 bends down sooner?

I have no idea. My confidence in what will happen if it remains straight is far greater than any confidence I could muster about the effect of it bending down.

For a more mathematical answer, bending down would break analyticity, and all bets would then be off.

A real- or complex-valued function on a given domain is said to be analytic when it is representable over its whole domain as a Taylor series that converges on that domain. In order for it to remain analytic on any extension of its domain it must continue to use the same Taylor series, which must furthermore remain convergent on that larger domain. Hence any analytic extension of an analytic function to a larger domain, if it exists, is uniquely determined by its Taylor series. This is the basis for the rich subject of analytic continuation. Functions like addition, multiplication, exponentiation, and their inverses (subtraction, division, logarithm) where defined, all preserve analyticity.

Figure 4’s curve is analytic when modeled as a straight line. This would no longer remain the case if it started to bend down significantly.

The essential contributors to centennial climate since 1850 look sufficiently like analytic functions as to raise concern should CO2, the strongest contributor, cease to rise analytically. In particular, drawdown by vegetation seems likely to respond analytically if we ignore the impact of land use changes governed by one of the planet’s more chaotic species.

So what does all this mathematical gobbledygook mean in practice? Well, it seems highly unlikely that the vegetable kingdom has been responding to rising CO2 anywhere near as fast as we have been able to raise it. While plants may well be trying to catch up with us, their contribution to drawdown is hardly likely to have kept pace.

But presumably their growth has been following CO2’s analytic growth according to some analytic function. The problem is that we know too little about that dependence to say what plants would do if our CO2 stopped growing analytically.

Le Chatelier’s principle on the other hand entails a sufficiently simple dependence that we can expect a decrease in CO2 to result in a matching decrease in drawdown attributable to chemical processes. The much greater complexity of plants is what makes their contribution the biggest unknown here. In particular if the vegetable kingdom continued to grow at only a little less than its present pace until CO2 was down to say 330 ppm, its increasing drawdown could greatly accelerate removal of CO2 from the atmosphere.

But this is only one possibility from a wide range of such possibilities.

On the assumption that Figure 4 stays straight through 2100, and Earth doesn’t get hit in the meantime by something much worse than anything since 1850 such as a supervolcano or asteroid, I feel pretty comfortable with my “Two more degrees” forecast for the 75 years 2063-2137.

But if it bends down I would not feel comfortable making any prediction at all given the above concerns. (I made essentially this point in column 4 of my poster at AGU2018 in Washington DC, “Sources of Variation in Climate Sensitivity Estimates”, http://clim8.stanford.edu/AGU/ .)

Moderation note:  As with all guest posts, please keep your comments civil and relevant.


383 responses to “Two more degrees by 2100!”

  1. A few comments. There are several things that I like about VP’s method:
    • the 75 year window;
    • the ACO2 analysis;
    • the point about global vegetation not being in equilibrium with the rising CO2.

    Criticisms and the main points of disagreement with my analysis:

    1. Attempting to eliminate natural climate variability by filtering out the AMO and 11 year solar cycle doesn’t really work. There are additional forcings and longer cycles/oscillations in play. Further, there is much debate about the 20th century TSI data, so I’m not particularly buying VP’s analysis of the early 20th century climate.

    2. While VP’s value of TCR (1.85C) matches the IPCC AR5 average value, this value is much larger than the LC18 value that considered approximately the same time series of data. Nic Lewis’s estimate of TCR is 1.0-1.6 C (likely range); see these links to the relevant publications:
    https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/
    https://judithcurry.com/2019/12/16/comment-by-cowtan-jacobs-on-lewis-curry-2018-and-reply-part-1/
    https://judithcurry.com/2019/12/20/comment-by-cowtan-jacobs-on-lewis-curry-2018-and-reply-part-2/

    3. While I like the ACO2 analysis, there are many that have argued that CO2 concentration of 956 ppm by 2100 is implausible (see the various RCP8.5 posts).

    VP’s end result of 2 C more warming by ~2100 is basically the same result obtained by Hausfather and Ritchie, using a different analysis method.

    At the end of the day, the difference between VP’s and my analysis hinges on the amount of CO2 by 2100 and the value of TCR, both of which are associated with substantial uncertainty.

    • curryja | December 27, 2019 at 11:44 am |

      Good comments, imo.

      1. Attempting to eliminate natural climate variability by filtering out the AMO and 11 year solar cycle doesn’t really work. There are additional forcings and longer cycles/oscillations in play. Further, there is much debate about the 20th century TSI data, so I’m not particularly buying VP’s analysis of the early 20th century climate.

      Ignorance of natural oscillations is likely to persist for a long time, imo. But all estimates of TCR depend on estimates of the natural variability in one way or another. On the absolute temperature scale the effect of anthropogenic CO2 is small, around 1% of baseline, where the natural variability is greater than 1%. In consequence, estimation of the effect of ACO2 is going to be hard for a long time. That is not to disparage efforts, like that of Vaughan Pratt here, but hopefully to help formulate realistic expectations about the likely rates of progress.

  2. There are two generic ways to predict the climate future: (1) Empirically use past data to infer relationships between CO2 levels and global climate, and extrapolate these into the future, or (2) derive physical models that use basic science to relate CO2 levels to climate, and use these against any scenario for future emissions to estimate the future climate. This article evidently uses #1, and it suffers from many ills, such as sparse data, and too many variables. The physical models suffer from many ills, mainly regarding uncertain feedbacks. The one thing sadly lacking throughout the field is humility.

  3. what about totally dynamic, stochastic decadal climate changes?

    What you try to explain is the assumption that all the warming must be due to GHGs and some feedbacks, including ENSO sometimes. Maybe you are right.

    If we look back in climate history, we will find many episodes where temperatures go up or down, over centuries and decades. The climate changes without any external forcing too; we should never forget that. Except by external forcings which are unknown or too small, we cannot explain those episodes.

  4. Ireneusz Palmowski

    Meanwhile, in the Arctic this year will disappear effect of a strong El Niño in 2015.

    • The area of the surface covered with frozen water may be decreasing, but the thickness of the frozen water has increased over the last 18,000 years. I say the tonnage of ice on the Arctic is increasing, as indicated by the 250 meter increase shown in the Vostok ice core in the Antarctic.

  5. There are probably defects associated with any endeavor of this type. However, it may provide a useful tool for further analysis without the need for long computer runs if those interested can plug in different values representing, for example, Lewis and Curry’s TCR.

    • “if those interested can plug in different values representing, for example, Lewis and Curry’s TCR.”

      Excellent point, and I did exactly that using Nic’s TCR of 1.3 (middle of his range for TCR) in my first response to JC. The residual (GMST – ARF) in my Figure 1 is much larger and isn’t well explained by TSI alone. I’m hoping to see what explanation they propose for that residual if not the Sun.

  6. Ireneusz Palmowski

    The extent of ice in the Eastern Arctic is now growing faster than in 2015.

  7. Incorrect approach to climate analysis. You have to account for the full period of time to pre-industrial time. What people care about is not necessarily the same as what the earth feels about.

    • One of the tensions in climate analysis:

      nabilswedan: “You have to account for the full period of time to pre-industrial time.”

      Donald Rapp: ” This article … suffers from … sparse data,”

      HadCRUT4 since 1850 is my present compromise between those two ideals. As I said in my post, “the proof is in the pudding”.

      If there exists reliable GMST data for that “full period of time” I’m all ears.

      • Berkeley has better coverage.
        and the latest Reanalysis goes back to 1836

      • Here, Vaughan:

        https://rda.ucar.edu/datasets/ds131.3/

        The Twentieth Century Reanalysis Project, produced by the Earth System Research Laboratory Physical Sciences Division from NOAA and the University of Colorado Cooperative Institute for Research in Environmental Sciences using resources from Department of Energy supercomputers, is an effort to produce a global reanalysis dataset spanning a portion of the nineteenth century and the entire twentieth century (1836 – 2015), assimilating only surface observations of synoptic pressure into an 80-member ensemble of estimates of the Earth system. Boundary conditions of pentad sea surface temperature and monthly sea ice concentration and time-varying solar, volcanic, and carbon dioxide radiative forcings are prescribed. Products include 3 and 6-hourly ensemble mean and spread analysis fields and 6-hourly ensemble mean and spread forecast (first guess) fields on a global Gaussian T254 grid. Fields are accessible in yearly time series (1 file per parameter).

        The NOAA-CIRES-DOE Twentieth Century Reanalysis Version 3 uses the NCEP Global Forecast Model that was operational in autumn 2017, with differences as described in (Slivinski et al. 2019). Sea ice boundary conditions are specified from HadISST 2.3 (Slivinski et al. 2019). Sea surface temperature fields prior to 1981 are prescribed from the 8-member ensemble of pentad Simple Ocean Data Assimilation with sparse input (SODAsi.3, Giese et al. 2016) and from the 8-member ensemble of pentad HadISST 2.2 for 1981 to 2015. Observations from ISPD version 4.7 are assimilated using an ensemble Kalman filter.

        The Twentieth Century Reanalysis Project version 3 used resources of the National Energy Research Scientific Computing Center managed by Lawrence Berkeley National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 and used resources of NOAA’s Remote Deployed High Performance Computing Systems. Version 3 is a contribution to the international Atmospheric Circulation Reconstructions over the Earth initiative. Support for the Twentieth Century Reanalysis Project is provided by the Physical Sciences Division of the NOAA Earth System Research Laboratory, the U.S. Department of Energy Office of Science (BER), and the NOAA Climate Program Office MAPP program.

      • SM

        What % of the globe is covered with your reanalysis in 1836?

        Not much I assume, especially after looking at the NASA page for spatial coverage of 1900.

      • “SM

        What % of the globe is covered with your reanalysis in 1836?

        Not much I assume, especially after looking at the NASA page for spatial coverage of 1900.”

        1. NASA page doesn’t show all the data, especially the pressure data.
        2. You believe in the LIA. How many locations do you have to support this belief?
        3. The simple answer is 100%. You use the sample measurements to estimate the whole. The question is not coverage, the question is PREDICTION ACCURACY AND UNCERTAINTY.

        There is no magic number of % coverage. We all look at one ice core, say Vostok, and assume it can represent the globe. It can, with UNCERTAINTY.

      • @SM: “2. You believe in the LIA. How many locations do you have to support this belief?”

        I’m not sure whether Steven or someone else was asking that question. However during the LIA, CO2 dropped 5 ppm lower than at any time during the previous 600 years, suggesting that LIA not only happened but was global and lasted about 160 years. Here’s a graph of CO2 from Law Dome ice cores supporting this.

        The way I read that graph, the onset of the Industrial Era in 1770 was what brought the LIA to an abrupt halt. During 1770-1780 CO2 rose sharply, though whether that was caused by rising temperature or the onset of industrial emissions is not clear. In any event it was clearly an abrupt climate change judging by the sudden rise in CO2.

        Maybe just coincidence, but in all recorded history the most people that have ever been killed in Atlantic storms in one year was in 1780. Multiple storms IIRC.

      • Vaughan – I have a bit of a problem with the graph of CO2 levels you posted – Hoffman model.

        The first is that it says “observed Law Dome/Mauna Loa”, which implies they have mixed the ice core readings with instrumental readings. I find that troublesome on its own, but most especially because, secondly, my understanding is that ice core readings cannot capture CO2 variability under 800 years or so because of the problem of diffusion.

        “It is well known that diffusion processes within the firn layer and the gradual enclosure of the air in the lock-in-zone of the ice lead to a reduced signal of the original atmospheric variability and may obscure high-frequency variations (e.g. Trudinger et al., 2003)”

        There are a number of papers on this.

        I’d love to get your thoughts on that point (I really enjoyed this essay btw – a great discussion).

        One of my concerns is that we might have causality the wrong way around. On all time scales that we can reliably measure atmospheric CO2, CO2 lags temperature. It may well be that a much larger proportion of the recent atmospheric rise is due to increasing temperatures from natural variability, rather than the other way around. It would certainly explain why, despite emissions trebling in the last 2 decades compared to the previous 2, the temperature increase has slowed. It would also explain why CO2 increases in the atmosphere don’t increase or decrease along with human emissions.

    • I’m left wondering what the Earth feels about non-human life sequestering all that CO2 … perhaps happy to see us re-introduce it into the biosphere? Earth spawned life, I’m guessing the CO2-driven greening would make her happy … surely, the plants would thank us, if they could eh ….

  8. stevenreincarnated

    Your argument would seem to be that there is no long term climate sensitivity, only transient.

    • “Your argument would seem to be that there is no long term climate sensitivity, only transient.”

      If you have any actual *data* bearing on long term climate sensitivity I’m all ears.

      The IPCC certainly doesn’t, all they have is model runs on ensembles of scenarios from the CMIP5 crowd. A footnote in section D.2 of the Executive Summary of WG I of AR5 constitutes the state of the art regarding equilibrium climate sensitivity:

      “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.”

      With no actual geophysical data to look at, only estimates based on some 35 models that vary too wildly to be meaningful, anyone chasing after “long term climate sensitivity” today would appear to be in a state of sin.

      • stevenreincarnated

        True, the estimates would vary wildly and I certainly couldn’t tell you what it is but if I were inclined to believe you are on the right track to begin with then my point in mentioning long term climate sensitivity would be to point out that you should expect a divergence between forcing and temperature change. They track too closely over time with your method.

      • There are certainly efforts to use paleoclimate data to calculate ECS:

        https://en.wikipedia.org/wiki/Climate_sensitivity#Using_data_from_Earth's_past

      • => ECS ~ 5K

        Very approximately. 😊


      • There is no expectation that ECS = constant

      • stevenreincarnated

        DA, I agree there is no reason to believe that that ECS is a constant and I agree with VP that the estimates are all over the place. My point doesn’t really require a good estimate other than there is an ECS of at least some importance. After decades and decades of warming there is no valid reason to expect temperature change to be a direct result of forcings being added because by now you would have not only the new forcings but a cascade of long term warming from forcings added previously. There should be a divergence between the added forcings and temperature change after such a long period of time and my guess, because guessing is what I’m limited to, is that the divergence should be pronounced by now.

  9. Cattle have stopped breeding, koalas die of thirst: A vet’s hellish diary of climate change

    Bulls cannot breed at Inverell. They are becoming infertile from their testicles overheating.

    • Those bulls must have had a much harder time during the Federation drought then, when temperatures were higher and the drought was harder than the current one. That inconvenient fact is why BoM starts their analysis at 1900.

      • Christ

        Are you referencing the 1896 drought and severe heat wave?

        Tonyb

      • Chris

        My iPad has obviously been imbued with the Christmas spirit

        Tonyb

      • This paper discusses 3 droughts, including the Federation drought.

        “Recent reconstructions (36, 40) indicate that 3 major decade-scale drought epochs have occurred in the instrumental record: the (1892) 1895 to 1903 “Federation Drought,” the (1935) 1937 to 1945 “World War II Drought,” and the (1997) 2000 to 2009 “Millennium Drought” (MD) (SI Appendix, Fig. S1†; dates in parentheses indicate the start date used by some authors). Preinstrumental (pre-1900) and instrumental reconstructions (40) indicate that the FD (especially 1895 to 1902) was the most geographically extensive of the 3, with more intense rainfall deficiencies across most of the eastern half and north of the continent (the exception being far southern regions; refs. 36 and 40) and a combined intensity and spatial footprint that exceeds that of any Australian drought for at least the past 2 centuries and possibly longer (40)”

        https://www.pnas.org/content/116/31/15580

      • Yes Tony, that is the one – It went on for about a decade and affected almost all of Australia
        https://www.nma.gov.au/defining-moments/resources/federation-drought
        At the same time, there were massive heatwaves lasting weeks and IIRC, major fires as well. But then, Australia has always been on the edge of drought and fires.

      • Doubtless they did. Australia’s cattle and sheep numbers were halved in the Federation drought.

      • Chris Morris

        >”That inconvenient fact is why BoM starts their analysis at 1900″

        Yes. The world according to BoM.

    • The bulls in Inverell are having problems with the heat, where the yearly high is 87 degrees Fahrenheit, yet the bulls in Mesa, AZ are having no problems where temperatures routinely exceed 100 degrees Fahrenheit? I think the vet needs to look at the control for their study.

      • Not long ago around 600,000 cattle died in a storm and flood in Queensland. A news report I read said many of them had died of exposure. The low temps were around 50 to 60 degrees Fahrenheit. Cattle where I grew up regularly survived storms with temps of 20 degrees below zero and lower, so I thought that was odd. It turned out the cattle in that part of Australia were bred to survive hot conditions: they have thin hides and thin coats. Unable to get to food during the storm, they were in fact dying of exposure at temperatures our cattle would have considered warm and cozy.

        So I don’t know the back story on this.

      • Looks like an opportunity for AZ bulls to hire themselves out to knock hooves with them heifers, down there in Inverell.

      • I had forgotten this. We are a country of extremes. “I love a sunburnt country, A land of sweeping plains, Of ragged mountain ranges, Of droughts and flooding rains…”

      • August 9, 2018: More than 700kg of dead fish pulled out of Brisbane lake

        “They [the council] did a number of tests on water quality with regard to the algae, the oxygen levels and a few other tests to make sure the environment wasn’t killing the fish, such as poison,” he said.

        “They have decided that what has happened is that the temperature of the lake got low enough the tilapia couldn’t survive.”

        https://www.brisbanetimes.com.au/brisbane-news/more-than-700kg-of-dead-fish-pulled-out-of-brisbane-lake-20180809-p4zwj1.html

    • They’re Australian bulls. They are still giving it 120% effort. Perhaps we need to issue boxer shorts?

  10. Pingback: Two more degrees by 2100! — Climate Etc. – Climate- Science.press

  11. I don’t understand why all those silly predictions about warming by 2100 based on unproven assumptions.

    The truth is that the velocity of warming has been decreasing since the mid-1990s. Is anybody taking this into account?
    The average velocity of warming is ΔT/Δt, obtained by subtracting from each monthly anomaly in HadCRUT4 the previous month’s anomaly. The data is very noisy, so we can use a 181-month (15-yr) centered moving average to see it better. This is the appropriate average because it is roughly 1/4 of the 65-yr oscillation discovered by Schlesinger & Ramankutty in 1994. However, it can be seen in any average between 10 and 25 years.

    The velocity of warming responds very little to the increase in our emissions and should respond very little to any decrease in emissions that we manage to accomplish.

    The velocity of warming shows the correct phase shift with respect to the multidecadal oscillation in global temperature identified by Schlesinger & Ramankutty in their Nature paper of 1994. This indicates that temperature is going to increase more slowly in the coming years.

    A conservative projection is that the global surface temperature average will increase by 0.15–0.20 °C per decade, so from 2020 to 2100 we should expect an increase of 1.2–1.6 °C.
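    The recipe just described (monthly first differences of the anomaly series, smoothed with a 181-month centered moving average) can be sketched in a few lines of Python. The synthetic input below is purely illustrative, a linear trend plus noise standing in for the HadCRUT4 monthly anomalies:

```python
import numpy as np

def warming_velocity(anomalies, window=181):
    """Month-to-month temperature change, smoothed with a centered
    moving average of `window` months, returned in deg C per year."""
    v = np.diff(anomalies)                    # deg C per month
    kernel = np.ones(window) / window         # boxcar weights
    smoothed = np.convolve(v, kernel, mode="valid")
    return smoothed * 12.0                    # convert to deg C per year

# Illustrative stand-in for HadCRUT4: a 0.01 C/yr trend plus monthly noise
rng = np.random.default_rng(0)
n_months = 12 * 60
anoms = 0.01 * np.arange(n_months) / 12 + rng.normal(0, 0.1, n_months)
v = warming_velocity(anoms)
print(v.size)  # 539 smoothed values from 720 months of input
```

    With real data one would pass the HadCRUT4 monthly anomaly column instead of the synthetic series; the smoothed result recovers the underlying trend (here about 0.01 °C/yr) while suppressing the monthly noise.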

    • “I don’t understand why all those silly predictions about warming by 2100 based on unproven assumptions.”

      Apples and oranges. As I said, my target was not 2100 but 2063-2137. Some years in that 75-year interval may be a lot colder than others, consistent with your point. I can’t say which ones will be warmer, and wasn’t trying to. I was willing to settle for the weaker claim that about half of them will be colder and half warmer.

  12. Let me respond to Judith’s points 1-3 as follows.

    1. “There are additional forcings and longer cycles/oscillations in play.” Sure, lots of them. But Figure 2 (either a or b) demonstrates that they have no detectable influence on *centennial* climate as defined by this trapezoidal wavelet, which is all that matters for averaging over 2063-2137. Were this not so I would have refined the wavelet further.

    “there is much debate about the 20th century TSI data, so I’m not particularly buying VP’s analysis of the early 20th century climate.” Certainly Kopp and Svalgaard interpret the *scale* of sunspots very differently, but is there any disagreement by anyone making a career out of studying TSI as to the *timing* of the last grand solar minimum and maximum at centennial scale? Their timing is the only thing that matters in explaining the shape of Figure 1’s residual (GMST – ARF, the blue curve in Figure 2) solely in terms of the Sun. Both extremes match the blue residual remarkably well with regard to timing.

    2. Using Nic’s middle TCR of 1.3 C/doubling for centennial climate gives the following match between centennial GMST and centennial CO2.

    (Sorry but I’ve forgotten how to insert images in Climate Etc comments.)

    The residual GMST – ARF is as per the blue curve in this plot.

    There is no trace of either extreme (grand max or grand min) in Nic’s residual, even with a centennial solar sensitivity, CSS, of 3.01 (as opposed to 1.35 when TCR is 1.85). (For Figure 2a, Svalgaard’s TSI, CSS = 4.35 instead of 1.95 is needed.) How would you or Nic account for the much larger centennial residual obtained with a TCR of 1.3 C/doubling than with 1.85, especially when it doesn’t show that minimum and maximum in TSI?

    3. “many that have argued that CO2 concentration of 956 ppm by 2100 is implausible” I’m not disagreeing with that, and my post considers both cases. As I said, if they’re wrong, I’m confident about 2 more degrees, if they’re right then all bets are off, mainly because plants then become a bigger unknown. Nothing to disagree with here.

  13. I don’t think that ocean temperature inertia has been taken into account. The rate of change of ocean temperature is roughly proportional to the “forcing”, so the peaks and troughs of ocean temperature (and hence global temperature) are asynchronous with the peaks and troughs of “forcing”. While I think the approach taken in this article is way better than many others, I don’t think that any such analysis can be reliable.

    • MJ: “the peaks and troughs of ocean temperature (and hence global temperature) are asynchronous with the peaks and troughs of ‘forcing’.”

      Why wouldn’t that be even more true of land temperature, given that land is more sensitive to forcing than ocean? HadCRUT4 takes both into account.

      The centennial quantity GMST – ARF – SF has had a standard deviation of 2.3 thus far, i.e. extremely flat. If the next 80 years of Sun and CO2 are like the past 168 years then it would be surprising if it suddenly started jumping around more strenuously. Something like Figure 4 bending down would have to happen, or some other event not witnessed since 1850.

      • MJ: The oceanic mixed layer is sufficiently intimately coupled with the atmosphere that my centennial filter pretty much masks any delay in its response to the forcings. But I would expect the deep ocean on the other hand to respond so sluggishly as to be unlikely to play any more of a role in climate this century than in the last. In the 22nd century we may see more of its influence, perhaps.

    • MJ: Oops, sorry, I misread your “asynchronous” as “synchronous”. Ignore the first paragraph of my reply. The second paragraph is all that matters. Let me deal with the asynchronicity in a second (third?) reply.

  14. Pratt wrote: “The date for peak oil is advancing at about the same pace as the date for fusion energy break-even.”*

    Worth the price of admission and more!
    * Well, almost wrote. I did a slight edit to conform what he wrote to what I read (and which I believe he intended).

  15. Such a house of dinosaur cards you live in Vaughan. We live on a spinning planet driven by the sun far from thermodynamic equilibrium where tremendous energy cascades through shifting patterns in a turbulent fluid medium. The solution to this fluid flow problem may be a little more problematic than log CO2.

    A “small forcing can cause a small change or a huge one.”
    — National Academy of Sciences, 2002, Abrupt climate change: inevitable surprises, p74

    That was Vaughan’s first mistake. The problem goes far beyond the AMO to patterns of globally coupled, nonlinear oscillators that evolve and synchronize over decades to millennia – having an evident influence on average surface warming in the 20th century.


    But let’s cut to the chase.

    “Outgoing longwave (LW) radiation exhibits similar increases with warming in each RCP scenario, but reflected shortwave (SW) radiation decreases far more rapidly for RCP8.5 due to marked decreases in cloud cover and snow/sea-ice. In time, the climate system heats up sufficiently in all scenarios to arrive at a new equilibrium temperature and an energy balance at the TOA.” https://www.mdpi.com/2225-1154/6/3/62/htm

    This is not just model land – and the models have not captured modes of internal variability – but the pattern of TOA radiant flux observed in the satellite era: cooling in IR and warming in SW. This is Vaughan’s second mistake: assuming that global albedo is constant.

    Ellesmere Island in the Canadian Arctic archipelago in 2100 AD?

    Fine scale modelling using fundamental equations of state – not possible at a global scale due to limitations in computing power – suggests at least that the potential is there.

    https://www.nature.com/articles/s41561-019-0310-1

    The Tapio Schneider et al study is on marine boundary layer strato-cumulus. Tapio’s favorite cloud – because there’s so much of it. Me too. I look for closed and open Rayleigh–Bénard convection cells everywhere. Marine boundary layer strato-cumulus cells are closed until they rain out from the centre, leaving open cells. The bistable, nonlinear dynamics of this modulates global albedo and thus the energy dynamic of the planet and ocean heat content.

    How significant this is in the big picture can only be determined by observation.

    “This study examines changes in Earth’s energy budget during and after the global warming “pause” (or “hiatus”) using observations from the Clouds and the Earth’s Radiant Energy System. We find a marked 0.83 ± 0.41 Wm−2 reduction in global mean reflected shortwave (SW) top-of-atmosphere (TOA) flux during the three years following the hiatus that results in an increase in net energy into the climate system. A partial radiative perturbation analysis reveals that decreases in low cloud cover are the primary driver of the decrease in SW TOA flux. The regional distribution of the SW TOA flux changes associated with the decreases in low cloud cover closely matches that of sea-surface temperature warming, which shows a pattern typical of the positive phase of the Pacific Decadal Oscillation.” Loeb et al 2018

    The Loeb et al study is a bit of a government camel – and whether the 2014/15 ‘regime shift’ is baked in is another question mark. But let’s take the observations at face value and ask how SST in the Pacific Ocean changes. At least some of it is not anthropogenic.

    And this non-anthropogenic component changes over decades to millennia – as I said. Of particular interest is the uptick in El Niño frequency and intensity – and the change in beat frequency – at the beginning of the 20th century. This refers to rainfall in Australia.


    https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00003.1

    It may indeed be solar modulated through the polar annular mode influences on global oceanic gyres. The latest Pacific Ocean climate shift in 1998/2000 is linked to increased flow in the north (Di Lorenzo et al, 2008) and the south (Roemmich et al, 2007, Qiu Bo et al 2006) Pacific Ocean gyres. Roemmich et al (2007) suggest that mid-latitude gyres in all of the world’s oceans are influenced by decadal variability in the Southern and Northern Annular Modes (SAM and NAM respectively) as wind driven currents under a baroclinic atmosphere (Sverdrup, 1947).

    It is unlikely that a weak solar intensity signal contributed much directly to early 20th century warming. We must look instead for an amplifying mechanism. Strike 3 and he’s out. 😊 What can we expect in the 21st century? Surprises would still seem to be the order of the day – as uncomfortable as uncertainty is.

  16. Ireneusz Palmowski

    The resulting summary curve, which is linked to the solar activity curve defined by the averaged sunspot numbers, restored backward for 3000 years shows about 9 grand cycles of 350–400 years, with the times of their grand minima having remarkable resemblance to those reported from the sunspot and terrestrial activity in the past millennia: Maunder (grand) Minimum (1645–1715), Wolf grand minimum (1200), Oort grand minimum (1010–1050), Homer grand minimum (800–900 BC), combined with the warming periods: medieval (900–1200), Roman (400–10 BC) and other ones that occurred between the grand minima. This approach allowed us to predict the modern grand solar minimum (GSM) approaching the Sun in 2020–2055. This grand minimum offers a unique opportunity for the space scientists and all people of the planet to witness in many details the modern grand minimum and to understand better the nature of solar activity.
    https://www.nature.com/articles/s41598-019-45584-3

    • Ireneusz Palmowski

      “This grand minimum offers a unique opportunity for the space scientists and all people of the planet to witness in many details the modern grand minimum and to understand better the nature of solar activity.”

      • No doubt in her mind. It is going to look ridiculous when the Solar Grand Minimum does not show up.

      • Ireneusz Palmowski

        You do not have arguments?

      • The arguments have been exposed to boredom. No need to repeat that the polar fields method, with its excellent track record, disagrees with Zharkova.

      • Ireneusz Palmowski

        Noctilucent Clouds
        The southern hemisphere season for noctilucent clouds began on Nov. 15th–the earliest start in recorded history. Check here for daily images from NASA’s AIM spacecraft.

      • Ireneusz Palmowski

        Sunspot number: 0
        What is the sunspot number?
        Updated 27 Dec 2019

        Spotless Days
        Current Stretch: 1 day
        2019 total: 278 days (77%)
        2018 total: 221 days (61%)
        2017 total: 104 days (28%)
        2016 total: 32 days (9%)
        2015 total: 0 days (0%)
        2014 total: 1 day (<1%)
        2013 total: 0 days (0%)
        2012 total: 0 days (0%)
        2011 total: 2 days (<1%)
        2010 total: 51 days (14%)
        2009 total: 260 days (71%)
        2008 total: 268 days (73%)
        2007 total: 152 days (42%)
        2006 total: 70 days (19%)
        Updated 28 Dec 2019

      • Noctilucent Clouds
        The southern hemisphere season for noctilucent clouds began on Nov. 15th–the earliest start in recorded history.

        Just how long is this, “so called”, “recorded history”?
        I would guess only since we were watching from space.
        That is a blip in a thousand year climate cycle.

        Climate Science is to write a bunch of complicated stuff that means nothing, that no one could understand, then draw conclusions, pre-determined, and say obviously, man has suddenly become able to control the climate. Man does not know what caused natural climate cycles in the past but knows a trace gas change can overpower whatever it was.

        Short-term correlations used to explain thousand-year cycles give forecasts that are extrapolated beyond anything that has happened in the past.

        Past boundaries are still in effect. Climate is self-correcting, but the correction is not immediate. When climate warms, there is more polar evaporation and more snowfall and ice accumulation. The result of this correction is felt after a few hundred years; that is how thousand-year cycles work. Check the ice core records. Warm times are when sequestered ice accumulates the most; colder times always follow the times of greatest ice accumulation, as the advancing ice causes the cooling.

        These ice advances, and the cooling that follows, do not occur at the same time in the opposite hemispheres. Ice accumulation and cold and warm cycles in the Greenland and Antarctic ice cores are not in phase and do not last the same length of time. If external forcing were in control, if CO2 were in control, these cycles in the Greenland and Antarctic ice cores would be more in lockstep with each other. Climate does not warm and cool at the same time all over the earth. The bigger cycles are somewhat coordinated because the oceans connect all of the climate system, but the smaller cycles work in their own hemisphere on their own internal clocks, in resonance with external forcing but not in phase with it or with the opposing hemisphere.

  17. Nah! No degrees more, just cooling. As I have shown in my papers, the solar irradiance AND the solar wind determine the temperatures on earth. People usually forget to look at the solar wind when they look at the sun; they only take into account solar activity and solar irradiance. As I have shown, the solar wind is decisive: it modulates the geomagnetic field and cloud cover. Temperatures oscillate according to the sun. By adding the AMO index oscillation (which accounts for internal system variability) to the two solar constituents, we get an extremely accurate temperature projection. As soon as the AMO turns negative we shall experience a strong cooling. That is one degree of cooling by 2100, finally.

  18. “SOC is a vital component of soil with important effects on the functioning of terrestrial ecosystems. Storage of SOC results from interactions among the dynamic ecological processes of photosynthesis, decomposition, and soil respiration. Human activities over the course of the last 150 years have led to changes in these processes and consequently to the depletion of SOC and the exacerbation of global climate change. But these human activities also now provide an opportunity for sequestering carbon back into soil..” https://www.nature.com/scitable/knowledge/library/soil-carbon-storage-84223790/

    Reclaiming deserts, restoring and conserving soils, forest, woodland, savanna and wetlands. We are architects of our future and not passively enduring.

    A million of these by 2040?


    https://www.excellentdevelopment.com/our-strategy

  19. ACO2 was pretty straight during the past century, but has gotten even straighter this century. It reveals a compound annual growth rate of just over 2%.

    What could explain its increasing straightness?

    One explanation might be that 2% is what the fossil fuel industry’s business model requires for its survival.

    A better one is that it is more or less the rate at which the economy grows, since it is the economy that produces the ACO2.

    Will this continue?

    Only if the economy continues growing. Global economic growth has been slowing down over time.

    Proven reserves have been growing throughout the 20th century and on into this one, and speculation that this growth is about to end is just that: pure speculation with no basis in fact.

    In reality reserve growth doesn’t have to end for Peak Oil to occur. Peak Oil is defined as maximum extraction rate, not reserve growth.
    But to any rational person, the fact that we have to resort to break the rocks with high pressure water or sand to release the lightest fraction of the oil trapped is evidence that Peak Oil cannot be too far in the future.

    The date for peak oil is advancing at about the same pace as the data for fusion energy break-even.

    Except that Peak Oil will arrive and we don’t know if fusion energy will ever be commercially viable.
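    For what it’s worth, the “compound annual growth rate” above is just the constant rate that connects two endpoints. A minimal sketch (the numbers are illustrative, not emissions data):

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant annual rate that
    takes `start` to `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# A quantity that doubles in 35 years grows at about 2% per year
# (the familiar rule of 70: 70 / 35 = 2).
print(round(cagr(1.0, 2.0, 35) * 100, 2))  # 2.0
```

    A ~2% rate for ACO2 thus corresponds to a doubling roughly every 35 years.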

    • Javier: “Global economic growth has been slowing down over time.”

      The economy grows in spurts. My projection is for the 75 years 2063-2137. What past 75-year interval supports your claim that economic growth has been slowing down?

      “But to any rational person”

      This is the https://en.wikipedia.org/wiki/No_true_Scotsman argument. You’re claiming that only irrational people like me would argue that petroleum could continue to be extracted at an increasing rate for the rest of this century provided the extractors kept increasing their efforts.

      “Except that Peak Oil will arrive”

      That will be irrelevant to my projection if it doesn’t arrive this century. To quote from the Wikipedia article on Peak oil, “In 1919, David White, chief geologist of the United States Geological Survey, wrote of US petroleum: ‘… the peak of production will soon be passed, possibly within 3 years.’ ” A hundred years later people are still convinced we’re on the verge of Peak Oil.

      • The economy grows in spurts. My projection is for the 75 years 2063-2137. What past 75-year interval supports your claim that economic growth has been slowing down?

        Seriously?



        Peak Oil will arrive this century for sure, and sooner than most people think possible. And yes, only irrational people like you would argue that petroleum could continue to be extracted at an increasing rate for the rest of this century, because that is not what the evidence shows. The rate of increase has been declining for the past 30 years and is approaching zero.

    • But to any rational person, the fact that we have to resort to break the rocks with high pressure water or sand to release the lightest fraction of the oil trapped is evidence that Peak Oil cannot be too far in the future.

      What is any rational person’s rationale regarding the price and size of frackable reserves, and how this indicates that Peak is upon us ?

        Any rational person would ignore the reserve numbers, which depend on definitions and can’t really be trusted, and concentrate instead on the rate of extraction and the economics of oil.

        Besides, if the world demands less oil Peak Oil would take place regardless of reserves.

  20. I would invoke some things I remember from Nic’s excellent work. The shorter the time interval between initial time and final time, the higher the uncertainty. I believe you reference this in your post. Your long averages reduce the length of the interval by perhaps 30%. I would like to see formal statistical uncertainties and confidence intervals. Nic’s work is quite rigorous on this score. Without it, it’s not of much value, quite frankly.

    While I welcome your contribution Vaughan, it would be much more persuasive if it were more rigorous statistically.

    Perhaps at some point Nic himself will show up and comment.

    • “Your long averages reduce the length of the interval by perhaps 30%.”

      Yes, to 94 years.

      How is this a problem? My post was in response to JC’s analysis based on 15 years, which further reduces the length of my 94 year interval by perhaps 85%.

      “Nic’s work is quite rigorous on this score.”

      It would have to be if he’s trying to extract useful information from a mere 15 years. One can get by with far less statistical rigor when working with six times that much data.

    • “I would like to see formal statistical uncertainties and confidence intervals.”

      Nic’s uncertainty was for temperature in 2100. My uncertainty traded off confidence in time for confidence in temperature: I’m very confident that there are some 37 years in the 75-year neighborhood of 2100 that are above my estimate of the mean for that period, but if you asked about one of those 75 I’d have to toss a coin as to whether it was hotter or colder than the mean (suitably detrended of course).

      Just because that’s easy to understand doesn’t make it less rigorous. 2+2 = 4 is easy to understand, but it’s still rigorous.

    • “Your long averages reduce the length of the interval by perhaps 30%.”

      Actually even more: 74/168 = 44%.

      However that’s not an intrinsic problem with my method but rather an artifact of the available GMST data: whereas HadCRUT4 only goes back to 1850, the CO2 and TSI reconstructions go back centuries further. If I had just one more century of GMST data, however sparse, the “reduction” you speak of would be from 268 years to 194 years, a reduction of 28%.

      But in any event my analysis uses all the data since 1850. The interval 1887-1980 is not the whole data but rather an intermediate step, based on all the data since 1850, leading to an estimate of the mean for a future 75-year interval. It is misleading to view it as simply discarding the data for 1850-1886 and 1981-2017.

    • There is another issue here that Nic pointed out in his work. Using later time frames has an additional advantage that the forcing is much higher and so its relative error is smaller.

      Statistically there is a big advantage to using periods as early and as late as possible.

      • @dpy6629: “Using later time frames has an additional advantage that the forcing is much higher and so its relative error is smaller.”

        The big problem with today’s forcing is that it’s competing with the AMO. Rather than try to guess how recent climate change should be apportioned between radiative forcings and the AMO, my approach was to look at a much longer time frame in a way that eliminates most of that competition.

      • dpy6629 wrote, “Nic pointed out… Using later time frames has an additional advantage that the forcing is much higher and so its relative error is smaller.”

        I agree.

        The forcing we’re interested in is CO2 (including consequent feedbacks). CO2’s forcing (i.e., log(CO2 level)) has been increasing rapidly for about sixty years, and at a roughly linear pace for about forty, as you can see in this log-scale graph:

        That suggests we should use 40-60 years of data.
         

        Vaughan Pratt replied, “The big problem with today’s forcing is that it’s competing with the AMO… look at a much longer time frame in a way that eliminates most of that competition.”

        Since the AMO is about 60-70 years, that’s all you need, to “eliminate most of that competition.”

        The problem with going back even further is that the CO2 forcing was so slight that most of the temperature changes were surely due to other factors. In other words, the signal you’re looking for is drowned by “noise.”

        I imagine that we can all agree that there’s no way to glean information about the effect of CO2 on temperature from the circled periods, here:

        We can put a lower bound on the amount of CO2 forcing needed for us to have reasonable hope that its forcing will exceed other factors affecting temperatures, by observing that temperatures fell from about 1940 to 1970 (and even through the late 1970s in the northern hemisphere), as CO2 levels rose by about 4.7% over 30 years:

        From that fact we can deduce that we’ll need a CO2 level change of substantially more than 5% if we’re to have any hope of quantifying the warming effect of that CO2 level change, from the temperature measurements.

        From 1800 to 1900 CO2 levels (from ice cores) rose less than 5%. So trying to discriminate the effect of that change from other factors, and quantify it, is probably impossible.

        From 1930 to 1955 CO2 levels (from ice cores) rose less than 2%. So trying to detect the effect on temperatures from that period is downright silly.

        You might think you could detect an effect from the 15 ppmv CO2 level increase between 1900 and 1940, and temperatures did, indeed, rise during that period. But the total CO2 level increase was only about 5%, which we know is too slight to reliably exceed other factors that affect temperatures.

        The bottom line is that I don’t think it’s possible to quantify the warming effect of CO2 using temperature data before the 1950s. I think your best bet is to get as close to 60 years as possible (to minimize AMO effect), but restrict yourself to using data only since the 1950s, and avoid endpoints with ENSO spikes.

        I used 1960-2014. That’s not enough to entirely avoid AMO effects, but it does mostly dodge the ENSO spikes:
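        As a sanity check on the “not enough CO2 forcing” argument above, the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998; the comment’s “log(CO2 level)” in explicit form, which the comment itself does not cite) shows how little forcing a ~5% rise supplies relative to a doubling:

```python
import math

def co2_forcing(c, c0):
    """Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0) in W/m^2
    (Myhre et al. 1998)."""
    return 5.35 * math.log(c / c0)

# A ~5% rise in CO2 (roughly the 1940-1970 change discussed above):
print(round(co2_forcing(1.05, 1.0), 2))  # 0.26 W/m^2
# A full doubling, for comparison:
print(round(co2_forcing(2.0, 1.0), 2))   # 3.71 W/m^2
```

        About a quarter of a watt per square meter is easily swamped by other factors, which is the quantitative core of the argument for discarding the pre-1950s record.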

      • @db: “The bottom line is that I don’t think it’s possible to quantify the warming effect of CO2 using temperature data before the 1950s. I think your best bet is to get as close to 60 years as possible (to minimize AMO effect), but restrict yourself to using data only since the 1950s, and avoid endpoints with ENSO spikes.”

        Dave, I think you’re missing the elephant in the room. All the phenomena you’re talking about except for CO2 are at high frequencies. Had this discussion been about designing broadcast receivers, you’d be telling me how hard it is to tune out FM signals when I’m trying to build an AM receiver.

        An AM receiver doesn’t have to worry about FM signals because they’re at much too high a frequency for an AM receiver to even be aware of them.

        Likewise I don’t have to worry about ENSO, because it’s a quasiperiodic signal with a mean period of about 7 years. And although the AMO may be weakening a little, the reason lukewarmers like to claim it is over is that it is currently on a downswing during 2000-2030, so they can blame that downswing on CO2 instead of the AMO. This creates the impression that TCR is a lot less than it really is.

        It is completely counterintuitive to analyze the impact of CO2 on global warming by looking at 15-year periods. That short a time frame is very noisy. That’s why I focus on centennial climate instead of decadal climate.

        “I used 1960-2014. That’s not enough to entirely avoid AMO effects”

        Using your figure of 60 years for the AMO, why not use 1955-2014 to entirely avoid AMO effects? (HadCRUT4 is stronger at 65 years than at 60, so I’d have used 1950-2014.)

        “but it does mostly dodge the ENSO spikes:”

        ENSO being quasiperiodic with a mean period of 7 years, it shouldn’t have any influence at all on long term forecasting.
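        The AM-receiver point, that a long centered average is essentially deaf to a roughly 7-year quasiperiodic signal, is easy to demonstrate numerically. The sketch below uses a 21-year boxcar on a pure 7-year sine as an idealized stand-in for ENSO (not the post’s actual 181-month filter):

```python
import numpy as np

months = np.arange(12 * 168)                       # monthly samples, 168 years
enso_like = np.sin(2 * np.pi * months / (12 * 7))  # 7-yr sine, amplitude 1
window = 12 * 21                                   # 21-yr boxcar = 3 full periods
kernel = np.ones(window) / window
filtered = np.convolve(enso_like, kernel, mode="valid")
print(round(float(np.abs(filtered).max()), 6))  # 0.0: whole periods average out
```

        An amplitude-1 oscillation is reduced to the floating-point noise floor, which is why a centennial-scale average need not model ENSO at all.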

      • Vaughan wrote on January 2, 2020 at 6:26 pm, “All the phenomena you’re talking about except for CO2 are at high frequencies… I don’t have to worry about ENSO because it’s a quasiperiodic signal with a mean period of about 7 years.”

        You do need to worry about ENSO, because it is such a large signal that if you start or end with an El Nino or La Nina spike, it makes a substantial difference in the trend that you’ll calculate.
         

        Vaughan wrote, “It is completely counterintuitive to analyze the impact of CO2 on global warming by looking at 15-year periods. That short a time frame is very noisy.”

        I agree with that, w/r/t the temperature signal, but not w/r/t the CO2 forcing. It is not very noisy.
         

        Vaughan wrote, ” That’s why I focus on centennial climate instead of decadal climate…”

        Decadal is too short, but centennial is too long, because before the 1950s there was not enough CO2 forcing for you to have any realistic hope of finding its signature in the temperature data. Extending your analysis back that far just adds more “noise” to the temperature signal.
         

        Vaughan wrote, “Using your figure of 60 years for the AMO,why not use 1955-2014 to entirely avoid AMO effects. (HadCRUT4 is stronger at 65 years than at 60, so I’d have used 1950-2014.)”

        Either 1950 or 1955 would be reasonable, though 1955 puts you just 1/3 of the way into a very strong La Nina.

        I chose 1960 because:

        1. Mauna Loa CO2 measurements began in March 1958. CO2 levels prior to that are less precisely known; and

        2. That’s also about when CO2 emissions really took off; and

        3. It was the start of a fairly long ENSO-neutral period, as you can see here:

         

        Vaughan wrote, “ENSO being quasiperiodic with a mean period of 7 years, it shouldn’t have any influence at all on long term forecasting.”

        A big ENSO spike at either end of your analysis period will have a substantial effect on a linear regression of temperatures, if the period is only 50-60 years. That’s especially true if the spike is not adjacent to an opposite spike of similar magnitude.

        So I understand your motivation for using a longer period. But if you go back so far that CO2 level changes were too slight to have had much to do with temperature changes, it cannot possibly help you quantify the effect of CO2 on temperatures.

        It also doesn’t help that both CO2 and temperatures are less precisely known for those earlier dates.
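The endpoint-spike concern can be illustrated with a toy regression (all numbers assumed for illustration: a perfectly linear 0.012 °C/yr trend over 1960-2014, plus a +0.2 °C El Nino-like spike confined to the final two years):

```python
import numpy as np

years = np.arange(1960, 2015)              # 55 annual points, as in the comment
trend = 0.012 * (years - years[0])         # assumed underlying warming, 0.012 deg C/yr

slope_clean = np.polyfit(years, trend, 1)[0]

spiked = trend.copy()
spiked[-2:] += 0.2                         # El Nino-like +0.2 deg C spike in the last two years
slope_spiked = np.polyfit(years, spiked, 1)[0]

inflation = (slope_spiked - slope_clean) / slope_clean
print(slope_clean, slope_spiked, inflation)
```

A single two-year spike at one endpoint inflates the fitted 55-year trend by roughly 6 percent in this sketch, which is why an unbalanced ENSO spike at either end matters for windows of this length.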

  21. Well, it seems highly unlikely that the vegetable kingdom has been responding to rising CO2 anywhere near as fast as we have been able to raise it. While plants may well be trying to catch up with us, their contribution to drawdown is hardly likely to have kept pace.

    You would have to factor in human greed for profit too: lots of wood (as pellets) is sold to be burned in European power stations to produce “green” electricity. That cutting big trees for this reduces the quantity of CO2 they could otherwise absorb is an inconvenient truth.

    • Carbon cycle – and I doubt that they cut valuable timber for pellets.

      • The wood pellets are mostly made from relatively young trees. Here in North Carolina, they’re mostly Southern Yellow Pine.

        They also use chipped branches from tree-trimming, and wood scraps and sawdust from lumber mills. Competing uses would be products like particleboard, paper & cardboard.

        In general, wood pellets are simply a more expensive and environmentally destructive alternative to coal. But when the current political realities/insanities are taken into account, it is possible to make a case for using wood pellets.

    • human “greed for profit”
      aka : desire for a better life

      • Someone with a yearly US $ 50K income is getting a better life. At which income figure does pursuing “more” become an unhealthy addiction? US $ 500K, 5,000K, 50,000K, etc.?

  22. Hopefully global energy demand will quadruple and more this century. I have doubts that fossil fuel supplies can keep up – even with venturing into deeper and more expensive reserves. This is the high energy planet scenario that will likely involve a portfolio of technologically sophisticated sources.

    e.g. https://thebreakthrough.org/articles/our-high-energy-planet

    The path to that future is maximum economic growth and the key to that is to identify and overcome structural impediments.



    https://www.heritage.org/index/ranking

    I have posted this from the NREL in several contexts. It is one possible scenario. Advanced fast neutron nuclear reactors – powering the US with light water reactor ‘waste’ for 400 years – providing the backbone of electricity supply and industrial process heat – evolving hydrogen in high temperature electrolysis – with low-LCOE wind turbines chipping in to produce liquid fuels.

    Advanced nuclear reactors use known technology – with perhaps accident tolerant fuel clad with silicon-carbide being a 21st century materials exception. It seems much more likely than not that there will be an embarrassment of options in the next decade as developers rush to market. The prize is immense. At the right price point capitalist creative-destruction will overturn the energy apple cart. It may be sooner rather than later.

    • I like pictures – but struggle to keep them in order.

    • RIE: “Hopefully global energy demand will quadruple and more this century.”

      Very plausible, Robert. If demand increases at an average of 2% a year, quadrupling will take 70 years, bringing us to 2090.
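A quick check of that arithmetic: at 2% compound annual growth, the quadrupling time is log 4 / log 1.02:

```python
import math

growth = 0.02                                   # assumed 2% per year compound growth
quadruple_years = math.log(4) / math.log(1 + growth)
print(round(quadruple_years, 1))                # about 70 years
```

So a 2%/yr growth rate starting around 2020 indeed quadruples demand by roughly 2090.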

      ” I have doubts that fossil fuel supplies can keep up – even with venturing into deeper and more expensive reserves.”

      While it is natural to expect that most near-surface petroleum is of biogenic origin, one should not infer that therefore all petroleum is of biogenic origin. The deeper you look, the greater the pressure and temperature and hence the greater the likelihood that those conditions will turn carbon and H2O into greater reserves of abiogenic petroleum than the biogenic reserves we know of today.

      The certainty that Big Oil will soon be put out of business by exhaustion of their feedstock is greatly exaggerated. It is a big unknown, and a bad one if the result is CO2 at 1000 ppm early next century and still rising.

      I’m not saying that it WILL happen, I’m just saying that it is important to understand the impact on climate if it DOES happen.

    • There is another problem with fossil fuels and that is cost. At some point other sources become cheaper by a lot. This energy supply issue is very complex and is the product of a giant government/utility/industrial complex that is very difficult to turn in a different direction. Surely at some point solar or wind with pumped storage will be a lot cheaper than coal, for example. We just need a little innovation and risk taking by utilities. There are lots of other alternatives, such as using solar to convert water into hydrogen and oxygen or to synthesize natural gas.

  23. Except that the less we know the scarier it gets and the more reason to take climate action to be on the safe side. That’s the science of it, anyway.

    https://tambonthongchai.com/2019/12/23/climateemergency/

  24. Imagine my joy when I tuned in to find the eminent and erudite scholar Dr. Vaughan Pratt’s contributions listed in Recent Comments outnumbering those of the usual suspect.

    Nice work, doc. I usually always believe everything you say, except I had some doubts about that double saw tooth thing that you discussed several years back. Anyways, let’s stipulate that it’s going to be “Two more degrees”.

    If we fear that’s going to be bad, somebody needs to convince the hysterical greenie alarmists to stop trying to do away with nuclear power.

    We took up a collection and wisely decided to nominate…you. I mean we took a vote.

    Whether or not you choose to accept, please stick around.

    • “I usually always believe everything you say”

      Oh you silly man you. ;)

      Better to wait for the peer reviews, ideally the ones by those who best understand both old and new ideas in their field.

      My suggestion of trading off precision in temperature for precision in time in climate projections seems to be new, judging by the difficulty commenters like dpy6629 seem to be having with it. Dpy6629 seems not to like uncertainty in time and wants me to go back to uncertainty in temperature a la Nic, which for some reason he considers more “rigorous”.

      Rigor is just an excuse for creating enough space in which to hide fallacies. It is the basis for the saying, “Lies, damned lies, and statistics.” Rigor is no substitute for transparency.

      “somebody needs to convince the hysterical greenie alarmists to stop trying to do away with nuclear power. We … decided to nominate…you.”

      Excellent choice, Don. It would be very bad if you’d nominated someone from the pro-nuclear lobby. :)

      Safely sequestering even just the spent fuel rods to date is a huge unsolved problem; for example Japan is currently proposing to dump a million tons of tritiated water into the Pacific in the near future, https://www.nytimes.com/2019/12/23/world/asia/japan-fukushima-nuclear-water.html .

      If, as RIE expects, energy consumption will quadruple this century, getting it from nuclear will result in a massive increase in spent fuel rods. If that were the only alternative we might be better off with the two more degrees of warmth.

      At least it would be warmth from that great fusion reactor in the sky, whose spent hydrogen is at a safe distance of 93 million miles, with only those pesky CMEs, coronal mass ejections, to bother us occasionally.

      • Light water reactors use 1/2% of potential energy in fuel and leave long lived actinides. That won’t work. The answer is to breed fissionable material in high energy neutron reactors before removing short lived fission products and closing the fuel cycle. Fertile material includes depleted uranium and thorium. Unlimited energy from beautiful little nuclear engines. This is General Atomics. Not as flashy as some – but with such an impressive track record.




        But there are many others around the world. Is this a future for nuclear? We will see in the next decade.


        https://www.wired.com/story/the-next-nuclear-plants-will-be-small-svelte-and-safer/

      • You are not supposed to talk about fuel rods, doc. We regret that we have to withdraw the nomination, but please stick around.

      • Tim Palmer is discussing making some aspects of the models less precise so that other parts can have, whatever, more processing time.

        On the storage of nuclear waste, families that want to use nuclear power should have to store the waste in little casks on their dining room tables. A total family commitment to care for their waste containers for however long it takes.

      • Palmer and Stevens canvass the creation of a new class of models. “This will only be possible if the broader climate science community begins to articulate its dissatisfaction with business as usual—not just among themselves but externally to those who seek to use the models for business, policy, or humanitarian reasons. Failing to do so becomes an ethical issue in that it saddles us with the status quo: a strategy that hopes, against all evidence, to surmount the abyss between scientific capability and societal needs on the back of 2 less-than-overwhelming ideas: 1) that post processing (i.e., empirically correcting or selectively sampling model output) can largely eliminate the model systematic biases that would otherwise make the models unfit for purpose (12) and 2) that incremental changes in model resolution or parametrization can overcome structural deficiencies that otherwise plague the present generation of models (13). ”

        Against that – JCH claims from a line in a Youtube interview that models are remarkably accurate. A line spoken in the context of extremists claiming that models underestimate change. It appears that they have not underestimated warming. 🤣

        80,000 tons of high level waste? Sure we could sprinkle it on our breakfast cereal.

      • The man says it, very recently, as plain as day. He says mitigation is justified now – based upon the models. Plain as day.

        You want to hear him say what you say. You’re always borrowing bits and pieces, and you’re tricking yourself. Tim Palmer is a consensus climate scientist. How could he not be? He’s bright.

        While we are certainly not claiming that model inadequacies cast doubt on these well-settled issues, we are claiming that, by deemphasizing what our models fail to do, we inadvertently contribute to complacency with the state of modeling. This leaves the scientific consensus on climate change vulnerable to specious arguments that prey on obvious model deficiencies; gives rise to the impression that defending the fidelity of our most comprehensive models is akin to defending the fidelity of the science; and most importantly, fails to communicate the need and importance of doing better.

        Specious arguments that prey? Gee, I wonder who he is shooting at?

        Has he ever cited Tsonis or Kravtsov?

        He wants the computer that will come before the one my nephew is working on. He’s depending upon the lumber brains that are in office now. Good luck with that.

      • He says that the attacks are coming from your side. Models haven’t underestimated warming. But this seems to be the wisp of straw you cling to – and not the published article. Let alone earlier work I have been discussing here for years. And how often have I tried to discuss pragmatic responses in our nonlinear world. I have even asked you and have gotten not even a half baked response.

        “The idea that the science of climate change is largely “settled,” common among policy makers and environmentalists but not among the climate science community, has congealed into the view that the outlines and dimension of anthropogenic climate change are understood and that incremental improvement to and application of the tools used to establish this outline are sufficient to provide society with the scientific basis for dealing with climate change.”

        There is no consensus – other than the simplest one that people are adding greenhouse gases to the atmosphere – just some activists with congealed views. You should recognize yourself in that.

      • Again, I used to link to the settled climate science website, which is basically the world is warming and we’re doing it. Doesn’t say anything else is settled.

        Have never claimed anything else is settled.

        You are exactly the person he is describing: specious arguments.

      • Am not a member of any climate group. Have never been to a rally/march/strike, etc. Am not a politician. Am not in academia, and never have been.

        He wants the next models so he can prove you and Professor Curry wrong.

      • “As our nonlinear world moves into uncharted territory, we should expect surprises. Some of these may take the form of natural hazards, the scale and nature of which are beyond our present comprehension. The sooner we depart from the present strategy, which overstates an ability to both extract useful information from and incrementally improve a class of models that are structurally ill suited to the challenge, the sooner we will be on the way to anticipating surprises, quantifying risks, and addressing the very real challenge that climate change poses for science. Unless we step up our game, something that begins with critical self-reflection, climate science risks failing to communicate and hence realize its relevance for societies grappling to respond to global warming.”

        They seem to have proved you wrong. Not a surprise.

    • Processing the spent nuclear fuel rods is not a technically difficult problem. It’s a matter of finding the political will in a country which is manifestly insane on environmental issues. People go NIMBY on Yucca Mountain but I guess prefer to have sites around the country where the waste is stored in tanks and barrels which inevitably leak. This is the definition of green NIMBY insanity and it’s not a pretty picture of our elites. Like so many virtue-signaling attempts to revive 19th Century technology, like light rail in urban areas, it is based on deep denial of human nature and the desire for freedom of movement.

      • People go NIMBY on Yucca Mountain but I guess prefer to have sites around the country where the waste is stored in tanks and barrels which inevitably leak.

        The leaking storage tanks are all associated with the nuclear weapons projects of the USA. None, not a single leaking tank, is associated with commercial nuclear electricity generation. The Hanford Site in Washington State is infamous for its decades-long (perhaps generations-long) problem. Maybe the Savannah River Site in South Carolina has a smaller-scale problem.

        See the following about the various options for handling spent nuclear fuel rods from commercial nuclear electricity generation plants.

        The section on Storage, treatment, and disposal in The Wiki.

        After some time spent in spent fuel pools that are integral parts of all commercial nuclear electricity generation plants, the fuel rods are maintained in dry cask storage.

        Finland has approved geological disposal, similar to Yucca Mountain.

        France has been reprocessing spent fuel since the mid-1960s. President Jimmy Carter is responsible for the USA not reprocessing spent nuclear fuel. One objective was to prevent additional countries from attaining nuclear weapons. Look how that has worked out.

      • Dan, Thanks for the information. I’d often wondered why the US didn’t reprocess spent rods. I agree that Carter’s decision has not worked out as he intended. That happened with a lot of what Carter did.

  25. RCP8.5 is a ridiculous scenario IMO.

    A great proxy to use to triangulate the direction for the fossil fuel industry is Exxon. Signing up for their press releases provides a reasonable lens to the future for fuel oil.

    There appears to be a propensity for climate modelers to ignore the self-evident forces of capitalistic creative destruction that have always been the hallmark of capitalism. Some might say, well that’s because we can’t anticipate what they’re going to do. I say that’s nonsense. It’s harder to anticipate climate, but there’s a whole industry built up around that.

    Fossil fuel companies that don’t innovate will go away, same as any industry. Many will simply follow the leader who innovates, that would be Exxon here. For me it’s the lateral moves in the industry that are the most interesting, not the useless prognosticator gloom and doom scenarios, those who extrapolate from the past.

    Some Exxon press:
    ExxonMobil and Synthetic Genomics algae biofuels program targets 10,000 barrels per day by 2025
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2018/0306_ExxonMobil-and-Synthetic-Genomics-algae-biofuels-program-targets-10000-barrels-per-day-by-2025

    Select 2019 Releases:

    ExxonMobil, FuelCell Energy expand agreement for carbon capture technology
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/1106_ExxonMobil-and-FuelCell-Energy-expand-agreement-for-carbon-capture-technology

    ExxonMobil renews support for MIT Energy Initiative’s low-carbon research
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/1021_ExxonMobil-renews-support-for-MIT-Energy-Initiatives-low-carbon-research

    ExxonMobil expands low-emissions technology research with universities in India
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/1014_ExxonMobil-expands-low-emissions-technology-research-with-universities-in-India

    ExxonMobil and Mosaic Materials to explore new carbon capture technology
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/0826_ExxonMobil-and-Mosaic-Materials-to-explore-new-carbon-capture-technology

    ExxonMobil and Global Thermostat to advance breakthrough atmospheric carbon capture technology
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/0627_ExxonMobil-and-Global-Thermostat-to-advance-breakthrough-atmospheric–carbon-capture-technology

    ExxonMobil progresses growth plans and efforts to advance lower emissions technologies
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/0529_ExxonMobil-progresses-growth-plans-and-efforts-to-advance-lower-emissions-technologies

    ExxonMobil to invest up to $100 million on lower-emissions R&D with U.S. National Labs
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/0508_ExxonMobil-to-invest-up-to-100M-on-lower-emissions-RandD-with-US-National-Labs

    ExxonMobil and Renewable Energy Group partner with Clariant to advance cellulosic biofuel research
    https://corporate.exxonmobil.com/News/Newsroom/News-releases/2019/0123_ExxonMobil-and-Renewable-Energy-Group-Partner-w-Clariant-Advance-Cellulosic-Biofuel-Research

  26. So the numbers hang together. That’s a necessary but not sufficient condition for the theory that CO2 + solar fluctuations are the only things that matter for long term climate.

    The ice ages are the major feature of the earth’s climate. Since their timing can’t be explained (even though CO2 data exists), the earth’s long term climate is clearly beyond climate science. Yet school children are running around scared because of the predictions by climate scientists.

    • “Since [ice ages] timing can’t be explained, the earth’s long term climate is clearly beyond climate science.”

      The same reasoning would show that since three-week weather forecasts are beyond meteorology, so are three-day weather forecasts.

      • We trust 3-day forecasts because they have been run successfully many times. How many times have 100-year climate predictions been tested? The answer is 0.

        A 3-day forecast would also not be trusted if it came from a new system that has not been validated. If it were further known that 30-day forecasts were useless, only fools and charlatans would promote the 3-day outputs.

    • These are probabilistic forecasts because the real world diverges from model land.

    • stevenreincarnated

      I think they could be easily explained by solar driven ocean heat transport which is more and more what it appears to be.

  27. Planet or Moon Without-Atmosphere Effective Temperature Complete Formula, according to the Stefan-Boltzmann Law, is:

    Te.planet = [ Φ (1-a) So (1/R²) (β*N*cp)^¼ / 4σ ]^¼   (1)

    S = So(1/R²), where R is the average distance from the sun in AU (astronomical units)
    S – is the solar flux, W/m²
    So = 1362 W/m² (So is the Solar constant)
    Planet’s albedo: a
    Φ – is the dimensionless solar irradiation spherical surface accepting factor
    Accepted by a hemisphere with radius r, sunlight is S*Φ*π*r²(1-a), where Φ = 0.47 for smooth-surface planets, like Earth, Moon, Mercury and Mars…
    (β*N*cp)^¼ is a dimensionless Rotating Planet Surface Solar Irradiation Warming Ability
    β = 150 days*gr*°C/rotation*cal – is a Rotating Planet Surface Solar Irradiation Absorbing-Emitting Universal Law constant
    N rotations/day, is the planet’s sidereal rotation period
    cp – is the planet surface specific heat
    cp.earth = 1 cal/gr*°C, because Earth has a vast ocean. Generally speaking, almost the whole Earth’s surface is wet. We can call Earth a Planet Ocean.
    cp = 0.19 cal/gr*°C, for dry rocky planets, like Moon and Mercury.
    Mars has an iron oxide Fe₂O₃ surface, cp.mars = 0.18 cal/gr*°C
    σ = 5.67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant

    This Universal Formula (1) is the instrument for calculating a Planet-Without-Atmosphere Effective Temperature. The results we get from these calculations are almost identical with those measured by satellites.

    We have collected the results now:
    Comparison of results: the planet Te calculated by the Incomplete Formula, the planet Te calculated by the Complete Formula, and the planet Te (Tsat.mean) measured by satellites:
    Planet or Moon….Te.incomplete……Te.complete……Te (sat.mean)
    Mercury………….437 K………….346.11 K…………..340 K
    Earth…………….255 K………….288.36 K…………..288 K
    Moon…………….271 K………….221.74 K…………..220 K
    Mars…………….211.52 K………..215.23 K…………..210 K
    Conclusions:
    The 288 K – 255 K = 33 °C difference doesn’t exist in the real world.
    There are only traces of greenhouse gases.
    The Earth’s atmosphere is very thin. There is no measurable greenhouse-gas warming effect on the Earth’s surface.

    http://www.cristos-vournas.com

    • Why is your formula independent of the depth of the oceanic mixed layer (OML)?

      The deeper the OML, the more slowly Earth’s surface temperature will respond to the diurnal fluctuations in insolation.
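A minimal slab-ocean sketch makes the point; the forcing amplitude, feedback parameter, and depths below are assumed purely for illustration. For a slab of depth d with heat capacity C = ρ·c·d per unit area, forced by F·sin(ωt) and damped linearly, the temperature response amplitude is F/√(λ² + (Cω)²), so it shrinks as the mixed layer deepens:

```python
import math

rho, c = 1000.0, 4186.0           # water density (kg/m^3) and specific heat (J/kg/K)
F = 200.0                         # assumed diurnal forcing amplitude, W/m^2
lam = 2.0                         # assumed linear feedback parameter, W/m^2/K
omega = 2 * math.pi / 86400.0     # diurnal angular frequency, 1/s

def diurnal_amplitude(depth_m):
    """Response amplitude of a slab ocean obeying C*dT/dt = F*sin(wt) - lam*T."""
    C = rho * c * depth_m         # heat capacity per unit area, J/m^2/K
    return F / math.hypot(lam, C * omega)

a10, a100 = diurnal_amplitude(10), diurnal_amplitude(100)
print(a10, a100)                  # deeper mixed layer -> smaller diurnal swing
```

With these assumed numbers the diurnal swing for a 100 m mixed layer is about a tenth of that for a 10 m layer, which is why a formula with no dependence on mixed-layer depth is suspect.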

    • I am guessing he’s been to all those celestial orbs, recently.

    • However, I think even the IPCC as well as smart people like Palmer admit that it’s impossible to predict future climate states. The problem here is nonlinear feedbacks that often depend on sub-grid-scale effects.

      BTW, There is a new paper by Palmer and Stevens clarifying how lacking current climate models are.

      “Fig. 1, which is taken from the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (13), illustrates this situation well. It shows that all of the climate models can adequately reproduce the observed change in temperature—part of what we call the blurry outline of climate change. This is something that the Assessment Report draws attention to in its summary for policy makers. What is not discussed in the summary is what is shown by the thin horizontal lines on the edge of Fig. 1. Even after being tuned to match observed irradiance at the top of the atmosphere,† models differ among themselves in their estimates of surface temperature by an amount that is 2 to 3 times as large as the observed warming and larger yet than the estimated 0.5 °C uncertainty in the observations. The deemphasis of this type of information, while helpful for focusing the reader on the settled science, contributes to the impression that, while climate models can never be perfect, they are largely fit for purpose. However, for many key applications that require regional climate model output or for assessing large-scale changes from small-scale processes, we believe that the current generation of models is not fit for purpose.
      Figs. 2 and 3 develop this point further by showing how, on the regional scale and for important regional quantities (7), these problems are demonstrably more serious still, as model bias (compared with observations) is often many times greater than the signals that the models attempt to predict. In a nonlinear system, particularly one as important as our model of Earth’s climate, one cannot be complacent about biases with such magnitudes. Both basic physics and past experience (at least on timescales that observations constrain) teach us that our ability to predict natural fluctuations of the climate system is limited by such biases (8–11). By downplaying the potential significance that model inadequacies have on our ability to provide reliable estimates of climate change, including of course in terms of extremes of weather and climate, we leave policy makers (and indeed, the public in general) ignorant of the extraordinary challenge it is to provide a sharper and more physically well-grounded picture of climate change, essentially depriving them of the choice to do something about it.”

      https://www.pnas.org/content/pnas/early/2019/11/26/1906691116.full.pdf

  28. First the bulls in Oz. These are Brazilian-bred Brahma from stock imported from the Indian subcontinent to Uberaba in 1880.
    Now the CO2.
    The world pop is expected to be about 11 billion by 2100.
    https://www.un.org/development/desa/en/news/population/world-population-prospects-2019.html
    Roger Graves points out in:
    https://wattsupwiththat.com/2016/05/17/the-correlation-between-global-population-and-global-co2/
    that world pop and CO2 are strongly correlated and that for 10 billion a level of 500 ppm can be expected, and around 540 ppm for 11 billion.
    This is a fairly safe bet, as folk need to eat and correct soils (>>Gt of limestone), need to burn stuff to cook, and need power to get about.
    This is far lower than the ACO2 based estimate. What the effect of 540 ppm will be is still open to discussion.
    As to the Solar Min., it will either happen or not; large volcanoes will spew more Gt of sulfur (or not); or past greenhouse heat will be ascribed more to other mechanisms than to CO2 (you can bet on it).

    • AC: “This is far lower than the ACO2 based estimate.”

      Quite right. Roger Graves only looked at CO2, which is currently growing in proportion to sqrt(population). This is because only about 4% of today’s carbon cycle is anthropogenic. Had he looked at ACO2 he’d have found that it is growing in proportion to population^2. The reason it is more than just population is because of increasing per capita consumption of fossil fuels that itself seems to be increasing in proportion to population.

      Whereas the compound annual growth rate of world population today is 1%, that of CO2 is about ½% (but growing), while that of ACO2 is 2%, neither increasing nor decreasing, as my Figure 4 should make clear.

      The world population clock at https://www.worldometers.info/world-population/ is right now showing a population growth for 2018 that by December 31 will be up to 81 million. If world population were 8100 million, that would be 1%. Since it’s still less than that, obviously the population growth this year was slightly more than 1%. It has been more than 1% for the last hundred years, averaging 1.6% since Charles Keeling installed his CO2 observatory on Mauna Loa.

      Given that 280 ppm has been nature’s background for at least the past millennium, it should be obvious that the relationship between world population and CO2 should be made not with total CO2 but with anthropogenic CO2. Roger Graves is a physicist, and physicists tend to be oblivious to that sort of distinction. They just see CO2 growing.
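A quick consistency check of the growth rates quoted above, assuming the stated proportionalities (total-CO2 excess ∝ √population, anthropogenic CO2 ∝ population²) and 1%/yr population growth:

```python
p_growth = 0.01                          # ~1% per year population growth (from the comment)

# If the CO2 excess grows like sqrt(population), its annual growth rate is:
co2_growth = (1 + p_growth) ** 0.5 - 1   # about 1/2% per year, as stated for CO2
# If anthropogenic CO2 grows like population^2, its annual growth rate is:
aco2_growth = (1 + p_growth) ** 2 - 1    # about 2% per year, as stated for ACO2

print(co2_growth, aco2_growth)
```

So the quoted ½% and 2% growth rates are exactly what the square-root and square proportionalities imply for 1%/yr population growth.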

      • VP: ” The reason it is more than just population is because of increasing per capita consumption of fossil fuels that itself seems to be increasing in proportion to population.” That seems logical, yet Javier’s graph above on world energy consumption for the last almost 50 years shows that not to be the case. Reason: efficiencies? Is per capita energy consumption going up or down? My car in 1970 got about 14 miles to the gallon. Today, 36. Same for heating systems, appliances, etc. My point is that I’m not really sure we can say with any clarity what the per capita energy consumption will be around 2100. We can guess. No?

    • “The world pop is expected to be about 11 billion by 2100.” This is a totally unrealistic figure. The world’s population currently is about 7.5 billion. By 2100 it will be in the range 6.5 to 9 billion.
      The world’s fertility rate is only 2.4 in 2019, and falling. It will be below breakeven by 2050 or earlier. China is already caught in a population collapse that will cause it to lose at least 300 million by 2100, and as much as 750 million if it cannot get its fertility rate back up to 2.2 very soon (the rate right now is ~1.7, despite the ending of the 1-child policy three years ago). Japan will lose at least 40 million of its 120 million population. The U.S. is below 2.0 fertility, Indonesia is just at 2.1, and even India is barely above 2.3, and falling every year. These facts make it likely that the world’s population will not be much above 7.5 billion in 2100.
      http://worldpopulationreview.com/countries/total-fertility-rate/

  29. “Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.” NAS 2002, Abrupt climate change: inevitable surprises

    I began above with the stadium wave of Wyatt and Curry (2013) – as annotated by Marcia Wyatt. It is one attempt – and a good one – to highlight a fundamental mode of operation of the climate system. It adds to a dynamic picture of the system as coupled, oscillating nodes – the familiar climate indices – on a network, with far-reaching implications for nonlinear responses in biology, hydrology and surface temperature. It is far from a new idea – and far from tractable with the simple math and simpler assumptions of Vaughan Pratt or Nic Lewis. Sometimes we do the math that we can – and not the correct but complex PDE math required to describe the fluid flow problem of the Earth system. The basic problem is that an answer is given that might give some comfort – with no exploration of the bounds of uncertainty. The correct answer is far more likely to be that we don’t know.

    A couple of Einstein quotes come to mind.

    “Everything should be made as simple as possible, but no simpler.”
    “Not everything that counts can be counted, and not everything that can be counted counts.”

    Explaining such to such as the perennially clueless ‘usual suspect’ just above may, however, be as fraught as explaining relativity theory to a junkyard dog. But it is as I say far from a new idea. Indeed, in some circles it may be considered the obvious and dominant Earth system paradigm.

    e.g. https://history.aip.org/climate/rapid.htm

    “The climate system is particularly challenging since it is known that components in the system are inherently chaotic; there are feedbacks that could potentially switch sign, and there are central processes that affect the system in a complicated, non-linear manner. These complex, chaotic, non-linear dynamics are an inherent aspect of the climate system.” IPCC, TAR 14.2.2.2

    The first conceptual model of climate below – door number 1 – has fallen by the wayside. The second is stubbornly persistent. Climate is viewed as periodic oscillations around a rising mean – and these oscillations are assumed to be noise that sums to zero and can thus be ‘filtered out’ over periods of as little as 100 years. Climate data shows regime change – with shifts in means and variance at scales of decades to millennia.

    https://wordpress.com/media/watertechbyrie.com?s=ghil

    I have been returning to Dimitris Koutsoyiannis and the Nile River’s nearly 1000 years of recorded data. Here Koutsoyiannis shows climate reality alongside a graph of generated Gaussian white noise. Noise as opposed to dynamical complexity. The difference is apparent.

    “Essentially, this behaviour manifests that long-term changes are much more frequent and intense than commonly perceived and, simultaneously, that the future states are much more uncertain and unpredictable on long time horizons than implied by standard approaches. Surprisingly, however, the implications of multi-scale change have not been assimilated in geophysical sciences. A change of perspective is thus needed, in which change and uncertainty are essential parts.” https://www.tandfonline.com/doi/pdf/10.1080/02626667.2013.804626

  30. The AMO is an inverse measure of indirect solar variability. Post 1995 global warming has been dominated by weaker solar wind states driving a warm AMO phase, via negative NAO/AO, with an associated decline in low cloud cover in the mid latitudes, increased low cloud in the Arctic, and increased lower troposphere water vapour. Attributing that warming to rising CO2 forcing is irrational.
    https://www.linkedin.com/pulse/association-between-sunspot-cycles-amo-ulric-lyons/

    “But of what use is a prediction for 2063-2137 if we can’t use it to predict say the extent of sea ice in the year 2100?”

    The next centennial solar minimum starts in the early-to-mid 2090s, driving a warm AMO and Arctic phase. Useful Jovian analogues from 1917-1919 suggest strong negative NAO/AO states through 2096-2098, driving major AMO warming, and likely associated strong El Nino conditions, which drive additional lagged AMO and Arctic warming pulses.

    • Ulric bases his analysis on a cartoon. SpongeBob Squarepants was it?

      I’m sure the rest makes absolute sense. But with eyeballing and zilch geophysics – how would we know?

      • Robert writes:
        “Ulric bases his analysis on a cartoon. SpongeBob Squarepants was it?”

        In fact I first inspected the correlations between negative NAO/AO and El Nino conditions, and then searched the literature and historic data to confirm the association: El Nino episode frequency increases strongly during the colder phases of centennial solar minima, e.g. 1807-1821.
        https://sites.google.com/site/medievalwarmperiod/Home/historic-el-nino-events

        “But with eyeballing and zilch geophysics – how would we know?”

        That’s just your self confession from the last post:

        “High solar activity gives us more zonal patterns, intense and frequent warm Pacific states and modern warming – or at least some of it – and low solar activity more meridional patterns, a cool Pacific and the LIA.”

        That is anti-geophysics.

    • UL: “Attributing that [post-1995 AMO] warming to rising CO2 forcing is irrational.”

      Quite right. I wouldn’t dream of doing that. We’re on the same page there.

    • OK, so if rising CO2 projects onto natural variability, i.e. increasingly positive NAO/AO with rising CO2 forcing, then the inverse solar forcing of the AMO provides the frame of reference to assess the feedbacks. They are negative.

    • Ulric: “The AMO is an inverse measure of indirect solar variability.”
      Not deduced from observations. There is a simple method to look for a solar signal in any dataset: make a wavelet transform (I used the KNMI climate explorer) and look for some energy at 11 years:

      Niente!
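
That null check can be sketched without the KNMI explorer. The toy below uses a plain periodogram rather than a wavelet, on synthetic data (an 11-year sinusoid plus noise), purely to show how an 11-year solar signal would reveal itself if present; per the comment, real AMO data show no such peak:

```python
import numpy as np

# Synthetic annual series: an 11-year sinusoid buried in noise.
# This only illustrates the detection method; it is not climate data.
rng = np.random.default_rng(0)
n = 528                                   # 48 full 11-year cycles
t = np.arange(n)
series = np.sin(2 * np.pi * t / 11.0) + 0.5 * rng.normal(size=n)

# Periodogram: power at each frequency; locate the dominant period.
power = np.abs(np.fft.rfft(series - series.mean()))**2
freqs = np.fft.rfftfreq(n, d=1.0)         # cycles per year
peak_period = 1.0 / freqs[1:][np.argmax(power[1:])]
print(peak_period)                        # ~11 years for this synthetic input
```

Running the same periodogram on an AMO index and finding no power near 11 years is the “Niente!” result described above.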

  31. The comment about there being no basis in fact for a medium-term timing for oil reserves to fail to meet demand (what some may call peak oil) is caused by the author’s ignorance of the oil industry in general. The subject is quite complex, and requires understanding the mechanisms which drive oil out of the ground, the technologies used to improve upon them, the different rock and oil types, the surface conditions, logistics, and the effort and investment required to get oil out. This of course isn’t something most individuals in the oil industry grasp very well. It’s also beyond the understanding of most analysts (who in general are in diapers and can barely count beans). Therefore it’s easy to understand how the blog post’s author made a mistake when dwelling in this field. The oil IS running out, and as far as I can see some agencies prefer to say production can’t increase much more because the fight to reduce emissions will be won by diktat. But reality is a bit simpler: we can’t find new oil fields, there’s no magic technology over the horizon, Venezuela’s heavy oil reserves are being obliterated by reservoir mismanagement which renders a huge amount completely unrecoverable, and we already know enough about the potential for horizontal wells plus fracking to see the end of the road in the US.

    I run into oil types, especially geologists, who insist we can still find oil somewhere else, but I’m afraid they are simply too scared to acknowledge we are at the end years of the industry because we simply ran out of locations on this planet to find sufficient crude oil and condensate. And do understand we are talking about the 30.3 billion barrels we produce and run into refineries every year. That figure is barely increasing and it’s for the better, because we will have to stretch it as much as possible.

    • “The oil IS running out,”

      Presumably you’ve claimed this so often you’ve long since convinced yourself of it, and therefore don’t feel the need to give an actual argument in support of it, beyond pointing out that everyone who disagrees with you is ignorant.

      • Those who disagree with me do tend to be quite ignorant about the oil industry’s future potential to meet ever increasing demand. However, this isn’t exactly something they should be ashamed of, because the subject is extremely complex, the information and databases are confidential, and nobody in their right mind would disclose it without charging $millions and requiring non-disclosure agreements with teeth.

        The first time I wrote about serious disconnects between RCP8.5 and reality was in 2013. I did so because I saw the models they used were rather primitive, and assumed fossil fuel resources way above anything I had ever reviewed. I don’t come at this from the “peak oiler” community. I don’t use their equations, nor do I think most of what’s published makes sense. I look at it from the other lens: exactly where and how are we supposed to find the oil to REPLACE production? This takes me into a world most of you know nothing about, and about which I’m not about to give you references, because any such knowledge is prized and never disclosed.

        You can however access some reports which hint at the problem. For example, McKinsey just published their vision of future fossil fuel production. McKinsey writes for a business audience, and they lack internal expertise (those outfits usually rely on outside consultants like me to tell them what we think).

        One problem with the different sets of statistics and projections is the coarseness of their models, which fail to account for facts such as the use of ethane and other hydrocarbons for plastic. Therefore we see what I consider deceitful figures issued by agencies and companies. The bottom line is that refineries get fed crude oil and condensate, and refinery runs are not increasing as much as production because most of the increase is in light molecules we use for plastic (or emit less CO2 per mole if used for fuel).

        Another issue is that the majority of future emissions will be from burning coal, and coal also has limits. Therefore the focus on oil is erroneous (other than that, as we run out, the price will increase sharply and this will create serious problems for poor nations, which will turn to biofuels as the only viable alternative and convert agricultural acreage from food to fuel).

        And these issues aren’t handled by existing public models, nor are they understood by the incompetents who run the UN bureaucracy.

  32. Titan’s (Saturn’s satellite) Without-Atmosphere Effective Temperature Calculation:

    So = 1362 W/m² (So is the Solar constant)
    Titan’s albedo: atitan = 0.22
    1/R² = 1/9.04² ≈ 1/81 = 0.01234567
    Titan’s sidereal rotation period is 15.9 days
    Titan does N = 1/15.9 rotations per day (synchronous rotation)
    Titan is a rocky body with an atmosphere of 95% N2 and 5% CH4, but a very opaque one. Titan’s atmosphere holds about 8 times more mass per square meter of surface than Earth’s, so we treat Titan as a gaseous planet and take Titan’s surface irradiation accepting factor Φtitan = 1.
    Titan can be considered a liquid methane ocean planet,
    Cp.methane = 0.4980 cal/g·°C
    β = 150 days·g·°C/(rotation·cal) – the Planet Surface Solar Irradiation Absorbing-Emitting Universal Law constant
    σ = 5.67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant

    Titan’s without-atmosphere effective temperature Te.titan is:

    Te.titan = [ Φ (1-a) So (1/R²) (β*N*cp)¹∕⁴ /4σ ]¹∕⁴

    Te.titan = { 1*(1-0.22)*1362 W/m² *0.01234*[150*(1/15.945)*0.4980]¹∕⁴ /(4*5.67*10⁻⁸ W/m²K⁴) }¹∕⁴ = 96.03 K

    Te.titan = 96.03 K is the calculated value,
    which we compare with the satellite-measured
    Tsat.mean.titan = 93.7 K (−179.5 °C)

    Titan’s atmosphere is 95% nitrogen (N2) plus 5% of the greenhouse gas methane (CH4), yet its greenhouse effect is minor – so insignificant that it does not appear in the calculation.

    http://www.cristos-vournas.com
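
For what it’s worth, the arithmetic does reproduce the quoted value. A direct transcription in Python, taking the solar constant as 1362 W/m² and every other figure from the comment (no endorsement of the underlying “β” formula implied):

```python
# Direct transcription of the commenter's Titan formula; all constants
# are the comment's own values, written with decimal points.
phi   = 1.0          # surface irradiation accepting factor
a     = 0.22         # Titan's albedo
S0    = 1362.0       # solar constant at 1 AU, W/m^2
invR2 = 1.0 / 81.0   # (1/9.04)^2, as rounded in the comment
N     = 1 / 15.945   # rotations per Earth day
cp    = 0.4980       # cal/(g*C), liquid methane
beta  = 150.0        # the comment's empirical constant
sigma = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)

Te = (phi * (1 - a) * S0 * invR2 * (beta * N * cp)**0.25 / (4 * sigma))**0.25
print(round(Te, 2))  # ~96.0 K, vs. the satellite mean of 93.7 K
```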

  33. Dear Vaughan Pratt,

    As a realistic emission scenario for the remainder of this century, may I suggest the SRES A1B and A1T scenarios? Updated estimates of the International Energy Agency sit in between A1B and A1T. Also there is no sign yet of sink saturation.

    • Dear Hans,

      Are you suggesting that ACO2 continuing to grow at 2% is somehow “unrealistic”? And if so, why?

      I am deeply suspicious of models that have had too much thought put into them.

      I don’t believe AR6 will be using SRES scenarios. My understanding is that they’ll be using SSP, https://en.wikipedia.org/wiki/Shared_Socioeconomic_Pathways . However these too strike me as too contrived.

      • I consider a continuation of the 2% ACO2 rise to the end of this century unrealistic because you are not considering the logistic behaviour of world population growth and the related energy demand. We already see that emissions in developed countries flatline and even decline slightly. Increasing emissions are caused entirely by the emerging economies, and once these economies have matured their emissions will also flatline and decline. The question is not if but when. As the sinks are not showing any sign of saturation, flatlined emissions will lead to a decrease of atmospheric CO2 levels.
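
The “logistic behaviour” point can be made concrete with a toy curve. Every parameter below is a hypothetical round number (a 10.4 billion ceiling, midpoint near 1990), chosen only so the curve passes near today’s population; the point is the shape (growth per decade shrinking toward a ceiling), not the specific values:

```python
import math

def logistic_population(year, K=10.4e9, r=0.036, midpoint=1990):
    """Toy logistic curve; K, r and midpoint are illustrative guesses."""
    return K / (1 + math.exp(-r * (year - midpoint)))

# Growth per decade shrinks as the curve approaches its ceiling K,
# which is the flatlining behaviour the comment describes.
for decade in (2020, 2050, 2080):
    gain = logistic_population(decade + 10) - logistic_population(decade)
    print(decade, round(gain / 1e9, 2))  # billions added per decade
```

Under a 2%-per-year exponential, by contrast, the per-decade increment keeps growing; the two shapes diverge sharply well before 2100.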

      • VP
        I am deeply suspicious of models that have had too much thought put into them.

        Just in time for best science quote of 2019 😂

  34. I agree with choosing endpoints of the analysis period to dodge ENSO spikes, but, other than that, I suggest that you should look at the longest possible period of time for which the annual CO2 forcing increase has been rapid. That will reduce the effects of “noise” on your calculations.

    Here’s a log-scale graph of the CO2 trend. It is log-scale so that it reflects the logarithmically diminishing climate forcing:

    https://sealevel.info/co2_logscale_with_1960_highlighted.html

    There was a “knee” in that graph in the 1950s & 1960s, so before that the CO2 influence is likely to be a small part, rather than the dominant part, of the cause of temperature changes.

    Here’s a graph of temperatures from 1960 through the end of 2014 (to avoid the big El Nino):

    As you can see, in those 54 years temperatures increased 0.4°C to 0.8°C, depending on whose global temperature index you use.

    If we project that trend another 80 years we’d get another 0.59 to 1.19 °C of warming.

    You might object that during the early part of that 54 year period the annual forcing increase from CO2 was less than for the last forty years or so, and that’s true. However, I would counter that projecting a linear increase in CO2 forcing for another 80 years, i.e., continued exponential increases in CO2 levels, is unrealistic. Thanks to fossil fuel resource constraints, and to negative feedbacks that are removing CO2 from the atmosphere at increasing rates, the CO2 forcing trend is almost certain to fall below linear during the 21st century. So, while the last 54 years had a flatter CO2 forcing curve at the beginning, the next 80 years will have a flatter CO2 forcing curve toward the end.

    I used this sort of analysis to calculate estimates of climate sensitivity. For that work I used a generous estimate of 0.625°C of warming over the 54 year period (which is probably too high, but not wildly so). That work is here:

    https://sealevel.info/sensitivity.html

    The tl;dr version is that if you attribute 100% of the 0.625°C temperature increase to anthropogenic forcings, and 75% of the anthropogenic forcings to CO2, you’ll calculate a TCR estimate of 1.41°C per doubling of CO2.
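
That tl;dr arithmetic can be checked directly. A sketch assuming Mauna Loa annual means of roughly 317 ppm for 1960 and 399 ppm for 2014 (round figures of mine, not taken from the linked page):

```python
import math

# Reproduce the commenter's TCR estimate from his stated assumptions.
dT          = 0.625   # assumed warming over the 54-year period, deg C
anthro_frac = 1.00    # fraction of warming attributed to anthropogenic forcing
co2_frac    = 0.75    # fraction of anthropogenic forcing attributed to CO2
co2_1960    = 317.0   # ppm, approximate Mauna Loa annual mean (my round figure)
co2_2014    = 399.0   # ppm, approximate Mauna Loa annual mean (my round figure)

doublings = math.log(co2_2014 / co2_1960) / math.log(2)
tcr = dT * anthro_frac * co2_frac / doublings
print(round(tcr, 2))  # ~1.41 deg C per doubling, matching the quoted figure
```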

  35. Interesting collage of comments and data
    Several thoughts.

    I’m not a climate modeler, although I was involved in the US and EU Corporate Average Fuel Economy regulation design…and know that just about every part of those forecast models was wrong. (My role was to compute capital costs of various auto company responses to regulations. Since that was about calculating finite things like substituting known cost of fuel injectors replacing known-cost carburetors, my engineering and financial scenario projections were much more accurate than the energy/pollution forecasts of the EPA and DOT)

    Recently I did read in detail the many sub-papers on transportation and housing that were hammered into the IPCC “consensus” documents for policy makers. I found them “entertaining” – seemingly written by grad students with little real-world experience.

    So I’m pretty sure most of the climate forecasts will follow the grand traditions of human forecasting. They will be partly right, and largely wrong….and we all will learn from them even if they are wrong.

    I do wish to note that most past ideas about nuclear power involve large, centralized systems. There have been a number of very interesting small-scale systems – still in development – that have quite different waste profiles, because they don’t seek to feed a massively wasteful central grid.

    Passive, local geothermal is also very interesting. See modern versions of New Mexico’s Earthships. See widespread use of adobe by humans along the equator.

    I also wish to note that I have traveled the global supply-demand chains for electric vehicles, utility-scale storage batteries, and the ingredients for solar and wind apparatus. At current government-certified plans for expansion, it is abundantly clear that the mining, water, and eventual e-waste tsunami for the kinds of conversions envisioned are very far from environmentally benign. Current large EVs contain 1,000-2,000 lbs of e-waste that is almost impossible to recycle at scale. Planned utility batteries are worse.

    So it is very clear to me that – as in this post stream – there will continue to be a lack of consensus on the future of “climate solutions”.

    And that there is much good news to be found in the ability of humans to adapt their consumption and lifestyle habits to almost any forecast, poor or good.

    Notice this. Americans drive 3 TRILLION miles per year. At least 30% of that could be discretionary. Check out how much oil can be conserved via behavioral shifts.

    Notice how rapidly global driving is declining – driven largely by the fact that 7 billion humans on Earth have access to a mobile device, and delivery/distribution systems are rapidly streamlining worldwide.

    Love the community here. Thanks for great stimulation.

    • Enjoyed both of your comments.

      “….drive 3 TRILLION miles per year.”

      I once calculated the usage of the hunk of metal and rubber in the driveway. My miles are far below average, admittedly, but that particular asset has to be one of the most underutilized known to man. It sits idle ~96% of the hours in a year. The possibility of ride sharing with driverless cars is fascinating to me. Asset efficiency would get a huge boost.

    • Perhaps you could give some numbers on e-waste, like percent efficiencies. It would educate lots of fellow readers who are not exposed to this aspect of the electrical energy future. I was always wondering how large a cross-section the direct-current power cable would be (one square meter?), the one planned to supply South Germany from the windy North. DC transmission losses were one of the original reasons to choose AC over DC for the 1880s power-up in the US.

      • Would be glad to address some of the emerging e-waste challenges implicit in IPCC and similar proposals, and frame up the kinds of costs involved.

        At this stage of global system development, the highest costs – and risks of low ROI in both money and environmental terms – are probably in large-scale energy storage and related infrastructure…not so much in transmission losses…

        …for both mobile source (e-vehicles) and stationary source (housing, utilities, etc).

        So – I am trying to be responsive to your question, even though it means not focusing on transmission – but on global storage ecosystems required for new energy systems (solar, wind, etc)

        Physical waste from new systems is more obvious in IPCC proposals – and is easier to project – compared to energy transmission efficiency, although they are related.

        I would apply the same caution to myself that I am suggesting for others – we can do rough projections now, based on reasonable assumptions, but by no means are these forecasts.

        (I do know, though, that forecasting physical waste in global industrial manufacturing systems is a lot easier than forecasting global temperature at any time frame.)

        The STRATEGIC numbers are the best place to start to see the true nature of the system to be changed, and the KIND of costs IMPLICIT in the – quite ‘imaginative’ – IPCC ‘consensus’ documents.

        On mobile source. There are estimated to be between 1.2 and 2 billion vehicles in use worldwide, depending on definition of “vehicle”. If you travel Africa and Asia you will see many things that use petrol and have wheels that are NOT included in official vehicle counts (or IPCC consensus forecasts)

        (In terms of pollution many small vehicles, or even chain saws, can pollute more than a modern large car – but that’s not e-waste)

        Let’s use the 2 billion vehicle number – as reasonably conservative among 7 billion humans.

        IPCC or similar groups advocate wholesale conversion of the global vehicle fleet to EV’s that are said to be ‘clean’ compared to ‘dirty’ existing vehicles.

        While an EV may smell better at close range than a petrol car, the tailpipe effects of conversion to EV’s are likely to be significantly smaller than the massive global industrial, mining, and recycling changes implied in IPCC style documents.

        The real “environmental cost efficiency” of EVs can only be determined by examining the ENTIRE global transportation system, and the significant environmental, social, and financial costs of the “before” and “after” global systems – implied in IPCC and similar forecasts.

        About 2% of the total global fleet is replaced each year, and it can take 20-30 years for vehicles sold today to exit the fleet as scrap. Average now is about 23 years. (This varies widely with economic cycles/recessions.)

        Today ALL modern cars/trucks are made for easy assembly – and therefore for easy DISASSEMBLY. This means more than 70% (and as much as 90%) of the components in vehicles can be disassembled and sold at high profit as USED PARTS, which could easily last more than 10 years in the used car getting those parts.

        This means perhaps 70% of the energy and material “trapped” in a modern petrol-fired car DOES NOT NEED NEW MINING/MELTING/FORMING to keep people moving for decades.

        In addition even the parts that can’t be re-used can be easily melted into new parts – WITHOUT THE MINING AND REFINING STEPS THAT BUILT THEM 20 YEARS EARLIER.

        So the modern motor vehicle manufacture/use/re-use/scrap/re-cast cycle is FAR more efficient in total global waste terms than most people imagine.

        This may be the largest recycling program on planet Earth – that shares the wealth of the richest car buyers with the poorest ones – in a stable, well-measured system….

        …THAT EXHIBITS FEW OF THE ENVIRONMENTAL CHALLENGES NOW BEING EXPERIENCED BY WEALTHY CITIZENS OF THE US WHO WANT TO SHIP THEIR PLASTIC AND E-WASTE TO CHINA, INDIA, AND AFRICA.

        (ASIDE: I can’t find any of this well-known current petrol vehicle life cycle efficiency data in the IPCC consensus documents – although it might be buried there. This is why I suspect many parts of the IPCC “consensus” documents are written by grad students or similar. The IPCC politics keep focusing on gasoline versus electrons only, and have substantial errors in their discussions of well-known industrial eco-systems)

        That’s baseline.

        Because Toyota/Honda etc CREATED that life-cycle efficiency – and understand its substantial environmental benefits – they applied the same strategy to their EV’s and hybrids

        The Toyota Prius contains mostly re-usable parts, AND BATTERIES THAT ARE EASY TO DISASSEMBLE, REUSE IN PART, AND SCRAP WITH CURRENT RECYCLING METHODS. Research them. Notice their simple modular design.

        Now let’s look at the Tesla or Tesla emulators’ total life cycle.

        Current Tesla batteries are said to contain 3,000 to 7,000 individual ‘18650’ or similar cells. They look like the cells for a small flashlight. They contain small spools of plastic in which the active ‘chemicals’ reside. And they are ‘soldered’ together in a large cube – with large scale electronic “boards”. The car also contains several hundred pounds of ‘boards’ and other e-waste.

        These kinds of 800 kg batteries are not even close to the recyclability of the modern petrol car. See the many hobbyists disassembling them on YouTube to see that they are not easily broken down to the raw elements.

        Unless something has changed dramatically, these massive batteries must be ground up, burned and then used as filler in things like construction cement.

        The lithium in these batteries currently comes largely from “free” lithium in salars like Chile/Nevada. BUT – at some production volume the lithium MAY have to come from hard-rock mining, which is just as damaging as most other forms of mining.

        EVERY ‘TESLA-STYLE’ EV REQUIRES SIGNIFICANT NEW MINING, REFINING AND ASSEMBLY – compared to the long term life cycle of petrol vehicles.

        This might change.

        But right now, this KIND of EV COULD produce 50% to 80% more total life-cycle waste for the world than the current petrol-fired motorcar.

        It is entirely possible that a RAM 1500 truck MIGHT have better life cycle efficiency than a current Tesla-style EV.

        The life cycle efficiency of the modern petrol car v the current large-mass battery EV defines a significant global amount of E-waste.

        And given that the average passenger occupancy in a car is like 1.2 people – smaller cars or more ride sharing could increase the total life cycle efficiency far more than lots of EV’s.

        This is cars.

        UTILITY batteries, at the scale of solar and wind the IPCC seems to recommend, could expand the current EV battery life cycle waste to a global scale that might be “interesting”…if we are kind.

        This is where the IPCC meets E = mc². The life cycle of big mass that can’t be recycled represents energy waste and pollution at scales that could be much higher than in the current life cycle system.

        Part of my larger point is that THIS kind of environmental forecasting is much easier, and probably more accurate than forecasting the average temperature of the climate.

        My management instincts say it might be easier to get billions of people using Prius-like vehicles – which require almost no changes in driving, fueling, or parking habits – than converting 2 billion people to completely new, resource-intensive EVs….
        ….or agreeing on a consensus climate temperature number.

        Not a forecast….just a hunch…

        It is not clear why the IPCC and gov’t allies are pushing high-cost global technologies like massive storage systems, with this degree of easily predicted waste.

        This kind of thinking is what makes many people ‘slightly skeptical’ of IPCC climate forecasts, and ‘solutions’ like huge EVs

        AND YOU ASKED ABOUT ‘WIRES’. I was surprised to see that IPCC and others placed high value on a Euro student’s thesis that suggested a postage-stamp size piece of land in Algeria could supply solar energy for much of Europe.

        I read the whole thing in detail. I tried to calculate transmission mass and loss for the scheme his paper hypothesized. It quickly became clear that this graduate student had pieced together conclusions from many tiny studies – without really “going to the Gemba”, as Toyota would say.

        Nice story. No real-world experience in complex fragile interconnected systems.

        Reading that gave me insight into the kind of ‘evidence’ – that would never get financing from anyone with true fiduciary responsibility.

        Sort of like the ‘child savant’ craze of the moment.

        I hope this was helpful. The physical work, money, and global “sales effort” – recommended by the IPCC – to REPLACE global transportation and housing systems… instead of MODIFYING them at low cost…

        …are most likely beyond the current financial, and social capacity of the human population.

        I could be wrong, but I don’t see more than a tiny fraction of the 7 billion citizens of Earth lining up to follow the ‘imaginative images’ that make up the IPCC ‘consensus’

        I would be placing my money on proven – robustly adaptable – environmental/social technologies like the Prius…more than the waves of massive batteries implicit in the IPCC vision.

      • Tesla 2170 batteries allow for mass manufacturing and flexible chemistry. Cobalt is the most critical element. It also allows for future incorporation of ultracapacitors for regenerative braking, cold start, superfast charge, extending battery life, minimizing weight and even more acceleration. Tesla has just purchased Maxwell Technologies. Tesla seems light years ahead of Toyota. Is this the fruit of a man with a vision?


        I’d guess that fuel flexible linear generators driving hybrid machines – with mixed li-ion and ultracapacitor energy modules – are the nearish future.

        Although I have read no IPCC document since 2007 – such technology recommendations go well beyond their scientific mandate. Not that I doubt Marty’s veracity – but I’d probably need to see a reference. Even then – as a practical technologist – the scale of battery storage required makes it more straw man than a possible future.

        Much more likely is beautiful little atomic engines.

      • Robert – if you look at the picture of the Tesla battery you posted, you will see exactly what I mean. If you look at the videos of Tesla batteries on Youtube, you will see in greater detail how it is impossible to sort out the mix of materials in a Tesla battery. If you search online for the design and existing recycling of Toyota Prius batteries you will see a very different, much more robust and recyclable design.

        I checked out ARPA-C by Biden. Typical check-the-box politics. Might also try this calculation – the total environmental impact of the millions of Toyota hybrids made, sold, and re-sold – and compare the physics of that real-world production to the physics of the number of Teslas sold.

        Also sorry you think that I lack vision.

        Perhaps I do.

        But why not check out the real data, just for the heck of it.

        Signing off.

      • What I said was that Elon Musk was a man with a vision driving an innovative enterprise. Why twist my words? And I am happy that you have finally checked out Joe Biden’s platform. What I said was that, stripped of the rhetoric – a vote winner in today’s climate – we are left with some reasonably practical research objectives, many of them in areas in which the US leads the world.

        The Prius battery module consists of 5 Ni-MH cells. Yes, they can be recycled.

        Tesla has just opened a recycling facility at its battery Gigafactory in Nevada.

        https://www.greencarreports.com/news/1122631_tesla-launches-battery-recycling-at-nevada-gigafactory

        Tesla is aiming at a battery cost of $100/kWh. Very ambitious. Getting there with mass manufacturing, flexible chemistry and flexible module design for placement in diverse powertrains.

        Toyota had sales of electrified powertrains in 2017 of 1.52 million worldwide. “Tesla is delivering record numbers of vehicles. That’s a relief for Tesla, which increased total production from about 120,000 vehicles in 2017 to 350,000 in 2018.”

      • Not twisting. Just misunderstood your comment in the context.

        Look at total hybrid/EV sales by Toyota and others, not just one year. Their designs are completely stable, and we have actual data on how they behave over the complete life cycle.

        Total Tesla lifetime production is less than the annual output of 3 assembly lines at the established car companies. Tesla has just reached the annual volume of a single line at other makers.

        Look at all the details of the battery, not just a single plant announcement. People have been recycling the ever-changing Tesla battery since it was launched – with details online, which is why it is so easy to see how it works, and exactly what the differences are between the many other hybrid/electric designs.

        Look up the detailed technical papers on exactly how the different EV batteries behave cradle to grave. Check out the global sources of lithium, and the projections for total life cycle. The role of water in lithium and other “green” innovations is crucial to the future economics of ALL EV batteries, and the climate/environment. The life cycle of water vis-a-vis mass production of transport and utility batteries is fascinating and perhaps scary.

        All of this is in the public domain.

        Also, try this. All cars and most industrial products follow a cradle-grave cycle of:

        1. Mine
        2. Melt/process
        3. Form
        4. Fabricate
        5. Assemble
        6. Use
        7. Fuel
        8. Maintain/service
        9. Re-sell/re-use (often 3 times)
        10. Scrap
        11. Disassemble
        12. Resell 70% of the components (check out the massive computerized industry of used parts)
        13. Disassemble and ‘melt’ worn parts
        14. Recycle the material and start the whole process over again

        It is the TOTAL environmental impact of EVERY step in this chain that determines which products help or hurt the environment/climate.

        The leveraged reduction in environmental insult of the “re-use” steps is the largest portion of the environmental effect of any car (or of most products on Earth).

        Map out the total life cycle of:

        1. A petrol-fired car

        2. Toyota Prius

        3. Tesla

        You will immediately find that the post-use cycle for Tesla does not really exist yet, and that the design has been so unstable that there is not a large volume of used parts available. The platform is just starting to stabilize, and because Tesla produces several variants of 3 models, at low volume, there is effectively no used parts business in any professional sense.

        If you actually travel these chains, on the ground, it is much easier to see beyond the headlines and articles to the reality of the system. (This also makes it very clear how the “science” in the IPCC summaries is seriously distant from the reality of the environment, and the ‘solutions’ proposed like electrification. Citations are radically different from hands-on experience.)

        Even if one has not been able to travel these amazing pathways – one can find most of them documented online – Youtube, etc.

        I’m focused on autos here, because the system is accessible and easier to understand than other human industrial systems.

        But the same kinds of insights can be earned by ‘traveling the gemba’ of solar, wind, etc.

        And the “global university of Youtube” is a pretty good proxy for understanding all this from one’s chair at home.

        Most people find it fun to learn. And this can also help people learn to collaborate and invent new things.

        FWIW

      • …oh…on Marty’s “veracity”…..

        The reference is – “Read the large collection of original papers that are produced by the many sub-levels of the global collection of IPCC researchers.”

        The notes and papers that are then increasingly edited by the many layers of higher status faculty and government folks who create the “consensus” document for policy makers.

        If you do this, you will see much more clearly how the process really works.

        You will have to hunt for them, but that is part of the learning process – first hand understanding of all the people in the confederation, and how they think and work. Great people. Great motivation. Not such a great system, owing to the political forcing at the top of the pyramid.

        And I would never ask them to build something or run a global enterprise.

        You will also see how the various recommendations are culled and ‘translated’ at the top – in the “guidance” for policy makers. The material left behind is often more interesting than the bullet points in the final summary.

        These are primary data for understanding the culture and selection process of the larger IPCC system.

        Looking for a ‘citation’ is not relevant. Doing the hard work of sorting through primary data is relevant.

        That’s really my larger theme.

        Doing real work and the tough research of producing millions of cars, houses, etc is where real science always begins….and prospers.

        Going to the Gemba is where real science starts, lives, and replenishes.

        Citations are taking shortcuts and missing the complex gestalts that are crucial for fresh understanding and insight.

        And I’ll stand on my recommendation of Toyota and Honda as much more successful implementers of innovation than ARPA.

        During my years of working with all three groups around the world, I learned that ARPA folks are wonderful people and do great projects, but they have never built large-scale, self-sustaining social-industrial networks like Toyota and Honda.

      • Read the large collection of original papers that are produced by the many sub-levels of the global collection of IPCC researchers.

        “Greenhouse gases can come from a range of sources and climate mitigation can be applied across all sectors and activities. These include energy, transport, buildings, industry, waste management, agriculture, forestry, and other forms of land management.”

        This is working group III – and nothing here is inconsistent with the research objectives of the proposed ARPA-C.

        Or indeed with emerging global practices.

        e.g. https://www.environment.gov.au/climate-change/government/emissions-reduction-fund/methods

        From my long experience of solving real world problems – I can usually separate gold from dross.

      • You solved world problems? I never noticed. My bad, I’m sure.

        We already know all we need to know to address climate change — it’s all due to man, and it will continue to warm in proportion to the total amount of carbon emitted.

        Now, let’s get to work.

      • Wicked environmental problems David – involving culture, economics, science, engineering, business, government and communities. On projects worth up to $10B. I don’t make a song and dance about it – but I got a bit bored with Marty and his claims to a personal authority that would not otherwise be obvious.

        Global warming can be solved. Electricity is 25% of the problem of greenhouse gas emissions. A multi-gas and aerosol strategy is required – carbon dioxide, CFCs, nitrous oxide, methane, black carbon and sulfate. Ongoing decreases in carbon intensity and increases in efficiency and productivity. And technical innovation across sectors – energy, transport, industry, residential, and agriculture and forestry. Yes, we have a strategy.

        “This pragmatic strategy centers on efforts to accelerate energy innovation, build resilience to extreme weather, and pursue no regrets pollution reduction measures — three efforts that each have their own diverse justifications independent of their benefits for climate mitigation and adaptation. As such, Climate Pragmatism offers a framework for renewed American leadership on climate change whose effectiveness, paradoxically, does not depend on any agreement about climate science or the risks posed by uncontrolled greenhouse gases.”

        For an update see –
        https://cspo.org/research/implementing-climate-pragmatism-2/

        But I know you don’t like ‘homework’.

      • Maybe you should change your undies, just this once.

      • You’re full of big words, until someone calls you on your BS.

        You need to get out of your comfort zone and out of your comfort blogs.

      • What I found notable was the increase in Tesla sales between 2017 and 2018 – and the solid profit. Can Tesla sell 500,000 cars per year by 2020? It only works if it works in the market. Nor of course is li-ion the only battery chemistry. And it is just a drop in the bucket of vehicle demand. It is still very early days. But innovation, flexibility and mass manufacture – by Panasonic in the case of Tesla batteries – is what counts.

        “Can lithium batteries scale up? According to this quick and purely speculative math, the short answer is, with current reserves (13.5 million metric tons), not just no, but hell no.” 🤣
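A hedged back-of-envelope version of that “purely speculative math”: the reserves figure below is the one quoted in the comment, while the per-pack lithium content and world fleet size are my own round-number assumptions, not the original author’s.

```python
# Back-of-envelope sketch of the "can lithium scale?" arithmetic.
# reserves_tonnes is the figure quoted in the comment; the other
# two numbers are illustrative assumptions of mine.
reserves_tonnes = 13.5e6      # quoted lithium reserves, metric tons
li_kg_per_pack = 8.0          # assumed Li per ~60 kWh EV pack
global_fleet = 1.4e9          # assumed world vehicle fleet

packs = reserves_tonnes * 1000 / li_kg_per_pack
print(f"packs possible: {packs:.2e}")                 # ~1.7e9
print(f"fleets covered: {packs / global_fleet:.1f}")  # ~1.2
```

On these assumed numbers, current reserves cover roughly one world fleet with no replacement cycle and no recycling margin, which is the gist of the quoted conclusion, though reserve estimates and pack chemistries keep moving.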

        Batteries are a rapidly evolving technology – it is one of the ARPA-C research priorities.


      • No – I was in the right place – David was on the wrong track. Judith will sort it out.

      • Robert

        As regards battery technology, do you believe there are enough ‘rare earths’ and other related materials to supply all the green uses for which they are needed?

        I read somewhere that we will need a planet A, B, C and D in order to provide all the materials needed to fulfil the green pledges of our politicians and activists.

        tonyb

      • Hi Tony,

        I looked for a prayer to start 2020 with. Can I share a blessing? 😊

        Eternal God, You gave us the greatest gift: the gift of life.
        In the coming year, help us use it wisely.
        May we grow in generosity, kindness and forgiveness, hope, faith and love.

        — Rabbi Evan Moffic

        We may be more technological monkeys than profound thinkers. The limit is our boundless ingenuity.

        My friend Daisy has a 1.5L turbo diesel Suzuki Vitara I introduced her to. Brilliant little car. Responsive, great handling with its complex ‘all-grip’ system that is neither 4WD nor 2WD, automatic or floppy-paddle manual, and gets 19 km/L. Sensors for everything. It is the epitome of complex computer-controlled machinery with almost uncountable moving parts. It is a fun and pretty little dinosaur.

        To be left in the dust by high performance EVs – like any other ICE vehicle. The limitation there is batteries. Groups globally – including Toyota – are racing to bring to market simple, fuel-flexible, balanced, efficient and lightweight linear generators as range extenders. Battery chemistries are proliferating – carbon-carbon, carbon nanotubes, lithium-CO2, graphene… Along with ultracapacitors that extend battery life, improve cold weather performance and regenerative braking, etc. All fit seamlessly into Tesla’s 2170 battery format and the flexible module design emerging from 3 going on 4 gigafactories. The bottom line is of course sales and profit. Can they sell 500,000 vehicles this year?

        Graphene is the flavor of the month. Is this atom-thick, stronger-than-steel material a revolution? There is an idea to bundle graphene in a cable and feed it down a drill hole to hot rock. Heat instantly transported to the surface with no loss – they say.

  36. “Remember, then, that scientific thought is the guide to action; that the truth at which it arrives is not that which we can ideally contemplate without error, but that which we can act upon without fear; and you cannot fail to see that scientific thought is not an accompaniment or condition of human progress, but human progress itself.” William Kingdon Clifford, The Common Sense of the Exact Sciences (1885)

    This is science. Very good science. Fine scale modelling using fundamental equations of state – the laws of physics – to see where the system might go. Can it be ‘contemplated without error’? Of course not. But it does suggest that within a broad limit of precision a global average surface temperature increase of 8K is possible by the end of the century, 10K in mid latitudes and 16K at high – in as little as a decade. It’s a sort of Pauli exclusion principle for marine stratocumulus. 😊

    https://www.nature.com/articles/s41561-019-0310-1


    “There is a tide in the affairs of men, Which taken at the flood, leads on to fortune. Omitted, all the voyage of their life is bound in shallows and in miseries. On such a full sea are we now afloat. And we must take the current when it serves, or lose our ventures.” William Shakespeare

    If I were American I’d get on Joe Biden’s boat. A DARPA-C proposal with excellent and practical research objectives costing $400B over 10 years.

    • Robert, you should have read the last sentences of this paper:
      ” However, it remains uncertain at which CO2 level the stratocumulus instability occurs because we had to parameterize rather than resolve the large-scale dynamics that interact with cloud cover. To be able to quantify more precisely at which CO2 level the stratocumulus instability occurs, how it interacts with large-scale dynamics and what its global effects are, it is imperative to improve the parameterizations of clouds and turbulence in climate models.”
      Tapio Schneider et al. wrote a long letter to show the disadvantages of the recent models. This is the message of this paper.

    • … But it does suggest that within a broad limit of precision…

      As I said. And while improved parameterization was certainly the gist of a recent Palmer and Stevens article – I am not aware that Tapio Schneider was likewise publicly active on this. Perhaps you could link to this letter.

      There is an obvious application in nesting fine scale modelling within coarse grid global models to reduce the need for semi-empirical parameterization. The cloud resolving scale – as I said – allows modellers to use fundamental laws of physics rather than parameterization.

      But this was not what the $200,000 of supercomputing modelling was about. It is not remotely the ‘message of the paper’ – the latter does, however, seem to reveal your biases.

  37. Hmm. Not to get into politics, but I’ve done lots of work with DARPA. They basically invented the internet but drastically underestimated its current form and scope. I remember making a presentation to them at MIT around the time of Iraq One about the emerging asymmetric warfare threat. The generals guiding research said terrorism would not scale. They are wonderful people but suffer the typical consequences of command-control research.

    From my experience the $400 billion would be far better spent in the advanced R&D facilities of Toyota and Honda. They are not waiting for scientific consensus and are years ahead on socially adaptive bottom up energy innovations. BTW. Toyota is one of the largest makers of kit housing whose designs have been far more environmentally friendly than anything in the US for decades.

    • APRA-C – I think I have the US terminology sorted. C for climate I suppose. Stripped of the rhetoric – which in today’s climate (excuse the pun) is a vote winner – and focusing on the technology – from nuclear to farming – there is not a hope in hell that Toyota has the breadth of expertise required. Many of these things are well advanced – some are a bit blue sky. That’s as it should be. But the best idea is to leverage US business, farmers and academia to achieve objectives. As well as accessing sophisticated technology globally and adapting it to local conditions in a multi-gas and aerosol strategy – carbon dioxide, CFCs, nitrous oxide, methane, black carbon and sulfate. With ongoing decreases in carbon intensity and increases in efficiency and productivity. And technical innovation across sectors – energy, transport, industry, residential and agriculture and forestry.

      You seem to be a Berlinian hedgehog – one big idea that you cram everything into. That hardly works on the simplest problems and it cannot work at all with wicked problems threading through culture, economics, technology and environments. For that you might need a team of environmental scientists.

      • ARPA-c – got it nailed now. 😊

      • Robert. I generally enjoy your posts.

        And – my comments are about large system IMPLEMENTATION.

        All the scientific modeling of the future climate is completely useless without equally ‘scientific’ changes in human culture, industrial systems, global financial structures, political structures, and the collective motivation of several billion people.

        That is why people with large scale, global, management/organizational/political experience are often skeptical of scientific forecasts. They are skeptical because they have experienced directly and at large scale the negative effects of “consensus” forecasts.

        As for my hedgehog behavior, when I use Toyota and Honda for examples, I do so, because they are the most simple ways to introduce people without implementation experience to the best forms of human organizational experimentation that relate to the implementation of climate science.

        Sort of a primer for very smart people who don’t have management experience.

        You are simply wrong about Toyota’s and Honda’s experience and environmental knowledge. And their ability to handle complexity.

        Both companies have more actual experience in changing the environment, and ultimately the climate, than all of the authors in the IPCC documents I have read put together.

        They built some of the largest industrial systems on Earth more than 100 years ago – got blown up in the environmental disaster of WWII – and then established two of the very few mine-factory-showroom-use-scrap-recycle chains on Earth.

        It might be interesting for you to study how many scientific disciplines the two organizations bring together – and how they have APPLIED their primary scientific experience in hard, natural, biological, environmental, AND organizational sciences.

        Their experience with forecasting, regulation, and realities of placing bets on long term fantasies (forecasts) of the future goes far beyond that of most current political factions and scientific communities.

        Their systems are built for NO forecasting. (If you study their amazing global implementation of “takt” time, you may see the organizational-science underpinning of some of the most successful environmental improvement in the past 50 years.)

        Their environmental footprint has changed more of the Earth’s ecosystems than any of the massive government agencies in the US/EU.

        The past decades of experience with long term scientific forecasting show that it is wonderful – and usually wrong to a high degree. That is scientific evidence, proven by scientists, that should not be ignored in current 100 year forecasts.

        See “Junk DNA” for recent example.

        I use Toyota and Honda because they are some of the few organizations that have thrived and expanded globally despite the extraordinarily large errors of previous forecasts of energy, pollution, economics, etc.

        Their environmental footprints and their scientific contributions to pollution control, poverty, energy conservation, housing, robotics, physics, computing, and human behavior are far beyond the scope of your comment.

        For example, both companies have implemented hybrid vehicles at global scale – with technology more advanced than any of the crude “solar farm” technologies suggested by political factions.

        So maybe we need to stop calling people “skeptics”

        Maybe those of us plugging along on the ground should proudly adopt the title of “hedgehogs”

        It has a certain down to earth ring to it.

        I do enjoy your comments.

        Marty

      • What to do with such a discombobulated rant. First of all the science here is not a forecast – it is a simulation of physical processes using laws of physics…

      • Did not mean to – accidentally – get into a tussle with Robert I. Ellison

        I love this blog because it is one of the few places where extraordinarily complex issues can be discussed.

        And, although I am not a climate scientist, I am quite familiar with the kinds of models and forecasts being discussed here, because I have been involved with the creation of energy/pollution models since they started, helped create the CAFE regulations that seek to regulate some of the primary pollutants being modeled, and have mapped out – on the ground worldwide – many of the consumption and physical networks that will determine the future of “greenhouse gases”.

        For example, with colleagues I have mapped out water, energy, transportation, food, and other networks on the ground, worldwide.

        When one does this, one always finds differences between the physical realities of these global networks, the models built to emulate them, and therefore the forecasts used to regulate them.

        So, since the early days of environmental – and climate – regulation I have been tracking the differences between models, forecasts derived from models, and actual performance of regulated industries (and populations).

        My main point and perspective is that modeling and implementation are two very different phenomena, and forecasts based on even the best models of large scale systems – are always partly right and partly wrong.

        In the case of environment/climate models – if you look at the history of the disciplines involved – the actual behavior of large systems always deviates – significantly – from the forecasts built from models of them.

        Example: if you study the evolution of “greenhouse gas” regulation, you will see that the original science and regulation of “smog” was ridden with “errors” on all levels: physical measures, definition of components, modeling of the system, forecasts of the “smog” system, and results of the science-based regulation.

        When I read the posts in this blog, I always read them with this framework in mind.

        The “climate” challenge has a number of parts:

        1. The actual physical ‘climate’ – which scientists suggest extends to the “cosmos” and to the lowest sub-atomic levels.
        2. The bounding definitions of “climate” formed by humans (see differences between “climate” and “environment” for example).
        3. The “scientific disciplines” that humans form to study “climate” (physics, biology, etc).
        4. The data collected to emulate the many levels of “climate” (the subject of this blog post).
        5. The models derived from these data (also the subject of this blog post).
        6. The political mental models derived from the above.
        7. The regulations derived from the political mental models.
        8. The enforcement mechanisms put in place.
        9. The real-world translation of the above, by 7 billion humans, into the physical realities of “climate change mitigation” (the seemingly desired social good).

        My main point – to Robert and in general – and with deep respect to those involved – is that Parts 1 through 8 above are fundamentally useless – UNLESS Part 9 meets the physical objective desired.

        This is where the messy “management sciences” come in.

        The reason I keep pointing toward Toyota and Honda as examples that climate scientists might look at, is that they represent some of the best – global – Part 9 – human implementations of all of the past “scientific forecasts” of environmental and climate futures.

        These companies understand at extremely fundamental levels that ALL forecasts, their own included, are partly right and mostly wrong.

        They have experienced first hand how all – science based – mobile source solution regulations have been partly right and mostly wrong. (See how the offending “elements” have changed over time in regulations. Carbon is only the current culprit in a long line of regulatory targets.)

        For several decades, I have found that the best way to bridge the – perceptual and political – gap between scientists and practitioners – is to walk them through the radical differences between:

        A The top down implementation failures of GM, and…
        B …the decades-long implementable, sustainable real-world innovations of Toyota and Honda

        Toyota and Honda are some of the few human organizations on Earth who have sustained simultaneous technological, financial, social, and environmental value – despite the many constant “forecast failures” of the best governments, scientists, financiers on Earth.

        At a strategic level it is simple. They know that all long term forecasts are visualizations (‘fantasies’ by definition) – because THEY HAVE NOT HAPPENED YET.

        So they never invest in large fixed cost systems – a priori – and only build systems that adjust with minimal change to the constant stream of surprises – forecast errors – that the future represents.

        For example, right now Toyota is not “freezing” their production schedule longer than 45-60 days into the future.

        They pace their entire demand/supply chain on “takt time”.

        The science behind takt time is as follows:

        Customers at the end of the massive demand-supply chain buy a car about every 4.5 seconds. IF every one of the 20,000 work steps required to get a car from raw material to final car takes 4.5 seconds – then the whole system is in harmony and IT CAN REACT IN LESS THAN 45-60 DAYS TO UNFORECASTABLE CHANGES IN DEMAND WITH ALMOST ZERO COST TO THE ENTIRE SYSTEM.
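A minimal sketch (my own, not Toyota's) of the takt-time arithmetic described above, assuming round-the-clock operation purely for illustration:

```python
# Takt time = available production time / units demanded.
# Figures mirror the comment: one car sold roughly every 4.5 s,
# so each of the ~20,000 work steps is paced to the same beat.

def takt_time_seconds(available_seconds, demand_units):
    """Seconds each step may take if the line is to match demand."""
    return available_seconds / demand_units

daily_seconds = 24 * 3600            # assume round-the-clock operation
daily_demand = daily_seconds / 4.5   # one car every 4.5 s -> 19,200 cars/day
print(takt_time_seconds(daily_seconds, daily_demand))  # 4.5
```

When demand shifts, recomputing one number repaces every step in the chain, which is the mechanism behind the rapid-reaction claim above.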

        I would hope that the benefits of this kind of system are obvious to professionals who deal with the constant improvements in scientific knowledge and practice.

        Just a brief review of any of the major sciences reveals that “science” is so good, it proves itself wrong all the time.

        My current favorite example explaining this is the radical transformation in the scientific concept of “junk DNA”.

        I also enjoy taking people to the Trinity Site in New Mexico, where the original earth fused by the atomic blast tends to focus the mind on the hard realities of scientific forecasting.

        So – that is what I was trying to pack into a response to Robert. I forecast that it might help, but my forecast was translated as a “rant”.

        My only wish is that Step 9 in the “Climate Challenge Stack” – should be much more tightly integrated with Step 1.

      • M Anderson :
        Did not mean to – accidentally – get into a tussle with Robert I. Ellison

        I enjoyed your posts.

        Robert I. Ellison: First of all the science here is not a forecast – it is a simulation of physical processes using laws of physics.

        The tests of model accuracy and adequacy are comparisons of model output to out-of-sample data; if those data will be obtained in the future, then the model output is a forecast, whether intended so or not.
        That is how science is so good at proving itself wrong, as M. Anderson phrased it.
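The out-of-sample point can be illustrated with a toy example (entirely my own construction, not from either commenter): fit a trend to the first part of a synthetic series, then score it on the held-out "future".

```python
# Toy illustration: a model is judged on data it was not fitted to.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
y = 0.5 * t + rng.normal(0, 2, size=t.size)   # synthetic trend + noise

train, test = slice(0, 80), slice(80, 100)    # "past" vs held-out "future"
slope, intercept = np.polyfit(t[train], y[train], 1)
pred = slope * t[test] + intercept            # the output becomes a forecast

rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
print(f"out-of-sample RMSE: {rmse:.2f}")
```

However the fit was intended, the prediction over the held-out span is what gets tested, which is the sense in which any model output over future data is a forecast.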

      • This ‘tussle’ seems a little one sided. I describe something that is not a forecast but a simulation of physical processes using laws of physics. These laws emerge from systematic observations. I quoted this above.

        “Remember, then, that scientific thought is the guide to action; that the truth at which it arrives is not that which we can ideally contemplate without error, but that which we can act upon without fear; and you cannot fail to see that scientific thought is not an accompaniment or condition of human progress, but human progress itself.” William Kingdon Clifford, The Common Sense of the Exact Sciences (1885)

        I got it from an article on hydrology. “Debates—Hypothesis testing in hydrology: Pursuing certainty versus pursuing uberty” Uberty is a fruitfulness of enquiry gained at the expense of certainty. A following of clues in a world directed process essential to the advancement of Earth system science. The process leads to a perspective in which change and uncertainty in our nonlinear, spatio-temporally chaotic world is the inevitable conclusion (Koutsoyiannis, 2013). The best climate science forecasts surprises.

        But people go their own way and there is not a lot that can be done about it in complete economic and democratic freedom. There is a hierarchy of needs and many people are struggling with the first level. Only free peoples in free markets can resolve the tensions that otherwise – I forecast – will lead to another utter disaster. Centralized planning – as Hayek told us – is the surest path to disaster.

        The global economy is worth about $100 trillion a year. To put aid and philanthropy into perspective – the total is 0.025% of the global economy. If spent on Copenhagen Consensus smart development goals such expenditure can generate a benefit to cost ratio of more than 15. If spent on UN Sustainable Development Goals you may as well piss it up against a wall. Either way – it is nowhere near the major path to universal prosperity.

        Some 3.5 billion people make less than $2 a day. Changing that can only be done by doubling and tripling global production – and doing it as quickly as possible. Optimal economic growth is essential and that requires an understanding and implementation of explicit principles for effective economic governance of free markets. It requires cheap and abundant energy – from whatever source until there are cheaper alternatives and the creative-destruction of capitalism transforms the energy landscape.

        In a free world of free peoples the future is cyberpunk. The singularity occurs on January 26th 2065 when an automated IKEA factory becomes self-aware and commences converting all global resources to flat pack furniture. Until then – endless innovation in information technology and cybernetics will accelerate and continue to push the limits of what it is to be human and to challenge the adaptability of social structures. New movements, fads, music, designer drugs, cat videos and dance moves will sweep the planet like Mexican waves in the zeitgeist. Materials will be stronger and lighter. Life will be cluttered with holographic TVs, waterless washing machines, ultrasonic blenders, quantum computers, hover cars and artificially intelligent phones. Annoying phones that cry when you don’t charge them – taking on that role from cars that beep when you don’t put a seat belt on. Space capable flying cars will have seat belts that lock and tension without any intervention on your part. All this will use vastly more energy and materials this century as populations grow and wealth increases.

        There are surface temperature risks in our nonlinear and unpredictable world – as well as for biology and hydrology – and the only feasible response is a pragmatic one. “This pragmatic strategy centers on efforts to accelerate energy innovation, build resilience to extreme weather, and pursue no regrets pollution reduction measures — three efforts that each have their own diverse justifications independent of their benefits for climate mitigation and adaptation. As such, Climate Pragmatism offers a framework for renewed American leadership on climate change whose effectiveness, paradoxically, does not depend on any agreement about climate science or the risks posed by uncontrolled greenhouse gases.” https://thebreakthrough.org/articles/climate-pragmatism-innovation

        So I mention briefly Joe Biden’s $400B over 10 years ARPA-C proposal and M Anderson wants to give it to Honda or Toyota. That is perhaps the least practical suggestion of 2019.

        Returning some of the carbon lost from soils and vegetation since the advent of agriculture is one aspect of Joe Biden’s ARPA-C proposal.

        Carbon dioxide emissions from fossil fuels and cement production – from 1750 to 2011 – were about 365 billion metric tonnes as carbon (GtC), with another 180 GtC from deforestation and agriculture. Of this 545 GtC, about 240 GtC (44%) had accumulated in the atmosphere, 155 GtC (28%) had been taken up in the oceans with slight consequent acidification, and 150 GtC (28%) had accumulated in terrestrial ecosystems (IPCC 2007). Climate and ecologies are chaotic – and this implies that these systems are both unpredictable and sensitive to small changes. Small changes can trigger large and rapid shifts in internal dynamics. It is the key reason why caution is warranted when changing such fundamental systems as the atmosphere or the chemistry of the oceans. An example – carbon dioxide increase allows plants to reduce the size and number of stomata. Plants can access the same amount of carbon dioxide for growth and lose less water, resulting in a change in terrestrial hydrology. It is impossible to foresee the ramifications of this for nutrient cycling, water availability, the carbon cycle, fire regimes and biodiversity. Forecasts of Sahel greening in 100 years notwithstanding.
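
The split quoted above is simple arithmetic and easy to sanity-check. A minimal sketch in Python, using only the GtC values as cited (quoted figures, not independent data):

```python
# Cumulative carbon emissions 1750-2011, in GtC, as quoted above.
fossil_and_cement = 365
land_use = 180
total = fossil_and_cement + land_use  # 545 GtC

# Where it ended up, also as quoted.
atmosphere = 240
ocean = 155
land_sink = 150

# The three reservoirs should account for all emissions.
assert atmosphere + ocean + land_sink == total

for name, amount in [("atmosphere", atmosphere), ("ocean", ocean), ("land", land_sink)]:
    print(f"{name}: {100 * amount / total:.0f}%")
```

Running it reproduces the quoted 44% / 28% / 28% split.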

        But it is possible to return some of the atmospheric carbon increase to vegetation and soils in ways that improve agricultural productivity by up to 3-fold, enhance food security, conserve biodiversity and create more flood and drought tolerant food production systems. And to reclaim desert, conserve and restore forest, open woodland, savanna and wetlands.
        Buying time for the development of 21st century energy systems to supply cheap and abundant energy for the essential needs of humanity. That is the bulk of the ARPA-C proposal – and not something you want to farm out to Toyota – if that was a remotely practical notion. Innovation is the key to future prosperity.

        Better land and water management is a key to balancing the human ecology and is being embraced by free peoples worldwide. There is a role for government but environmental management by and large is best pursued from the bottom up – the free people problem again – rather than in the traditional command and control paradigm. It is an idea – ‘beyond the tragedy of the commons’ – pioneered by Elinor Ostrom that emerged with study of real world successes in managing fisheries and forests, soil and water, aquifers and wildlife.

      • “Rule 4: In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypothesis that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.” Isaac Newton

        Are they seriously contending that the laws of physics – confirmed over centuries – are contingently wrong because – as Einstein said – no “amount of experimentation can ever prove me right; a single experiment can prove me wrong.”? I think I’ll take the laws of physics as read until these ‘giants’ come up with a better idea.

      • Trying to decide if I should play golf tomorrow. I’ll have to check the National Weather Service’s simulation of physical processes using laws of physics to see if rain is in the forecast. Sorry, for the rant.

      • Modern weather forecasters are very good at incorporating satellite data into initial model conditions. But still Lorenzian butterflies prevail over very short periods of time. If you look closely – there are probabilities involved.

        “Partly cloudy. Medium (50%) chance of showers about the coast, slight (30%) chance elsewhere. The chance of a thunderstorm this afternoon. Winds easterly 20 to 30 km/h. Daytime maximum temperatures 30 to 36.”

        The difference is between large grid models using semi-empirical parameterizations and cloud resolving scale simulations using fundamental equations of state.

        I think swimming is more the order of my day. And discombobulated more the point than rant. Not that Don would notice.

      • In physics it is often much easier to calculate the changes in a bulk parameter than changes in every small section of a system.

        See: statistical mechanics.

        Imagine a swimming pool. It’s much easier to calculate the change in its average temperature — say, from Newton’s law of cooling — than the changes in the locations, sizes and durations of every warm and cool spot in the pool.

        Calculating climate is like calculating the former. Weather, the latter.
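
The pool analogy can be made concrete: the bulk (average) temperature follows from a one-line exponential solution of Newton's law of cooling, while tracking every warm and cool spot would need a full spatial simulation. A minimal sketch, with made-up pool parameters:

```python
import math

def bulk_pool_temperature(t_hours, T0=28.0, T_ambient=18.0, k=0.05):
    """Average pool temperature from Newton's law of cooling,
    dT/dt = -k * (T - T_ambient), whose solution is exponential decay."""
    return T_ambient + (T0 - T_ambient) * math.exp(-k * t_hours)

# One evaluation gives the bulk answer; no grid of warm/cool spots required.
print(f"{bulk_pool_temperature(24):.2f} C after a day")
```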

      • You repeat this meme far too often, David. This ‘law of cooling’ is just a power law with an empirical rate constant. Don’t know why they call it Newton’s law of cooling. Can’t see that warm or cool patches are relevant to anything. Nor is statistical mechanics at all relevant to climate models. It is just a misbegotten narrative metaphor gleaned from some climate echo chamber.

        Nor was I talking weather or climate models – they have the same sensitive dependence dynamic. Cloud resolving scale process simulation using fundamental laws of physics is much more fun. You should try to learn something new – instead of simply regurgitating the same ol’ climate war polemic.

      • It’s not about Newton’s law of cooling per se — it’s that almost always in physics it’s easier to predict the evolution of a bulk parameter than that of individual particle behaviors, cloud particles or whatnot.

        There is no doubt about this.

      • Robert I. Ellison wrote:
        Nor is statistical mechanics at all relevant to climate models.

        Except that parameterizations are often dependent on it — as are the basic laws of thermodynamics, via the partition function.

      • ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist.’ Edward Lorenz

        50 years later the problem is not yet solved and may be insoluble. There are no statistical functions for cloud – otherwise it wouldn’t be the major source of uncertainty that it is.

      • 50 years from now better solutions will be useless.

        As Stephen Schneider said a few decades ago, we will have to make climate decisions in the face of considerable uncertainty. But then we do that all the time for other problems.

        Remember Dick Cheney’s 1% rule?

      • We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist.’ Edward Lorenz

        In la la land.

        In real land we don’t care. We know that total warming is proportional to total emissions.

        That’s all we need to know to act, and to act NOW.

      • More precisely observed initial conditions and finer grids increase precision. The next step is to incorporate satellite and soil and ocean data and fine scale process modelling – carbon, hydrological and nutrient transport cycles – into near real time land, ocean and atmosphere models. Ignore Don. It is Earth system science to meet humanity’s grand challenges this century.

      • Robert I. Ellison wrote:
        More precisely observed initial conditions and finer grids increase precision. The next step is to incorporate satellite and soil and ocean data and fine scale process modelling – carbon, hydrological and nutrient transport cycles – into near real time land, ocean and atmosphere models

        Ever more inputs won’t help.

        Why Is Climate Sensitivity So Unpredictable?
        Gerard H. Roe, Marcia B. Baker
        Science 26 Oct 2007:
        Vol. 318, Issue 5850, pp. 629-632
        DOI: 10.1126/science.1144735
        https://science.sciencemag.org/content/318/5850/629

      • “The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.” https://www.pnas.org/content/104/21/8709

        The real reason sensitivity uncertainty hasn’t narrowed is that they pull it out of their arses.

        “As our nonlinear world moves into uncharted territory, we should expect surprises. Some of these may take the form of natural hazards, the scale and nature of which are beyond our present comprehension. The sooner we depart from the present strategy, which overstates an ability to both extract useful information from and incrementally improve a class of models that are structurally ill suited to the challenge, the sooner we will be on the way to anticipating surprises, quantifying risks, and addressing the very real challenge that climate change poses for science. Unless we step up our game, something that begins with critical self-reflection, climate science risks failing to communicate and hence realize its relevance for societies grappling to respond to global warming.” https://www.pnas.org/content/116/49/24390

      • “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.” TAR 14.2.2.1

        What they are talking about here is perturbed physics ensembles as discussed by Julia Slingo, Tim Palmer, James McWilliams, James Hurrell… What we have is opportunistic ensembles whose members are chosen on the basis of ‘a posteriori solution behaviour’.

        https://lmgtfy.com/?q=perturbed+physics+ensembles

        It helps to recognise the nature of the system before deciding what math is applicable.
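
The distinction matters because, in a chaotic system, individual perturbed runs diverge while the statistics of the ensemble can stay stable. A toy illustration, with the logistic map standing in for a climate model (which it emphatically is not):

```python
import statistics

def logistic_run(r, x0=0.5, steps=200):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy chaotic system."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# A toy 'perturbed physics ensemble': tiny perturbations of one parameter.
# Individual members diverge unpredictably; we look at the ensemble statistics.
ensemble = [logistic_run(3.9 + i * 1e-7) for i in range(100)]
print(f"ensemble mean: {statistics.mean(ensemble):.3f}")
print(f"ensemble spread (sd): {statistics.pstdev(ensemble):.3f}")
```

The point is exactly the TAR's: the individual trajectories are not predictable, but the distribution of outcomes is a meaningful object of study.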

      • This has been known since the 1960’s. Stop parading your ignorance.

        https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161

      • Not exactly end of year fare, but maybe it is better that by tomorrow it will be last year’s fare.
        Referring here to this post by RIE (Robert I. Ellison) | December 30, 2019 at 4:04 pm |
        Quote ” ——- Forecasts of Sahel greening in 100 years notwithstanding.”
        My point here is to introduce the ‘big unknowns’ of any predictions. The Sahara, and its abrupt changes makes for a good case.

        Here at 33:00 in this video https://www.youtube.com/watch?v=LIAkJg8knTI&t=1989s P DeMenocal points to the abrupt change in Saharan climate at 5500BP (3550bce). It has not changed since.
        Then at 14:08 here: https://www.youtube.com/watch?v=3-vCBRaVeyI&t=929s Notice the changes in human habitation and the dates. The dates indicate also changes in climatic conditions, and the ability of the Sahara to support habitation.
        Then see the dates here from an altogether different perspective https://melitamegalithic.wordpress.com/2019/03/15/searching-evidence-update-2/ The dates and the events seen from altogether different proxies – the human side – tell something none of the climate models entertain. The absolutely unforeseen.

      • Hi

        See – a blank slate for 2020. I pray for blessings for us all my friend. 😊

        The proximate cause of 5500 year BP Sahel drying was shifts in ocean and atmospheric circulation. Warm Pacific conditions prevailed and then shifted to cool. Coupled Pacific and Atlantic conditions are the source of much precipitation variability in that region. The bifurcation can be seen in the millennial band of the wavelet analysis.


        https://www.nature.com/articles/nature01194

        A transition I would attribute to slow changes in solar activity – from low to high – triggering a shift following crossing of a threshold. Canonical chaotic behaviour. What I can’t see in this is any evidence of periodic behaviour.

        Beyond that there be dragons.

      • Blessings to thee all, and long respite to those down under (another net contact there was told to evacuate on NY eve).

        Yes, dragons there be, very mean, but well camouflaged. It is by chance they are glimpsed. The 5500BP Sahel drying – and all the accompanying climatic changes – can be traced to one dragon (that used to visit periodically up to the 4k2 date), as can be seen in the tell-tales it left, here: https://melitamegalithic.wordpress.com/2019/08/12/searching-evidence-deaths-tsunamis-and-earth-dynamics/
        The engineering there and the math involved are easy to understand (once realised); and were fully tested.

      • “In the case of the Nile, there is a well-known link between SST fluctuations in the Pacific Ocean (and El Niño), and monsoon rainfall over the Ethiopian plateau, and, consequently, annual runoff fluctuations in the Blue Nile basin. Blue Nile runoff can account for up to 80% of the Nile flood/annual runoff, so it can be deduced that the long-term variability of Ethiopian plateau rainfall is a major driver of long-term variability in annual Nile runoff. Eltahir (1996) has found a linear relationship between a SST El Niño index for the months of September, October and November and annual Nile runoff at Aswan, which explains 25% of the variance.” This is a parsimonious explanation for the aridity transition in the Mediterranean basin ~ 5500 years BP. Beyond that – the allusion was to dragons on old maps. A metaphor for the unknown.

      • Melitamegalithic

        In our local paper today

        https://www.devonlive.com/news/local-news/follow-5500-year-old-footsteps-3657035

        This new farming community was found on nearby upland Dartmoor, which periodically becomes too cold for farming at levels around 1500 feet. The last farmers came off the top of the moors around 1300 AD.

        Tonyb

      • Tonyb; RIE,
        My ref to Dragons was in respect of this: “We have presented supporting evidence for the concept that meaningful outliers (called “dragon-kings”) coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings.” From Dragon-Kings, Black Swans and the Prediction of Crises
        Didier Sornette

        Tks for the info re Devon’s first farmers. 5500BP was at first a difficult date to find/locate in the chronology of events, until it began to show up in various proxies and other data. As one can see from Iceland’s ice-cap C/N ratio (link in my post above) it is a spike (a better representation would be a step change/impulse and exponential decay) indicating a fast and abrupt event, but leaving altogether greatly changed conditions.
        However it is another dragon’s return at 3200bce that brought great disaster (a 45deg change in equinox sunrise – link above). An odd proxy here, long read, (see end pg 13 of 35, not numbered) https://www.academia.edu/39950048/THE_ELITE_LONGHEADS_OF_MALTA

      • Well done. Dragon-kings are associated with – equivalently – bifurcations, tipping points, catastrophes (in the sense of Rene Thom) or phase transitions.

        Oddly enough – the longfaces of Malta put me in mind of the Dogon of what is now Mali. Before the disaster they ranged across northern Africa and were visited by amphibious creatures from Sirius B in the Canis Major constellation. Tom Robbins has a wonderfully amusing novel. Well worth a read if possibly not utterly reliable.

        https://www.penguinrandomhouse.com/books/155515/half-asleep-in-frog-pajamas-by-tom-robbins/

      • Yes the Dogon have an interesting heritage. (But look out for the historical muddying caused by missionaries).
        Not related to the primary point of this thread,- but–. This is an excerpt from a book (mine) “According to Nicholas Grimal the myth of Osiris contains elements that are very ancient, remanent recollections of the earliest thoughts of the ancient civilisations. Grimal says that the Osiris myth recalls the Lebe, where the resurrection of Osiris represents the re-growth of the millet. The Lebe is one of the four early gods of the Dogon people of West Africa. It was the agricultural deity that controlled the cycles of the seasons.” The earliest myths are in fact ancient agrarian science told in metaphor. However the science of agriculture did not stop there; it required a good solar calendar. The remnants of those calendars, like the ancient myths, are everywhere.

  38. Geoff Sherrington

    The official global temperature record has two periods of some 30 years each, starting roughly 1915 and 1975, which others have described as being indistinguishable in pattern. The valid question is whether the increases were caused by the same effects. To answer this, one might look into data from individual sites, related assemblages of sites, regional groups of sites and so on to see if textural differences are found that might lead to mechanisms.
    As an Australian, I work mainly with official BOM data. In the beginning I looked for sites that plausibly had little effect from the Hand of Man. The first site was remote Macquarie Island, where there has been no significant increase in Tmax or Tmin over the last 50 years. All the graphs show is a pattern that would be called noise or, for some people, uncertainty.
    I have tried for several years to obtain a BOM estimate of uncertainty for routine daily historic temperatures. I have failed to be rewarded with anything more than obfuscation. I conclude by extension that graphs from HadCRUT4 and earlier that show rote grey areas for uncertainty are no more than window dressing. I cannot provide realistic estimates of uncertainty because AFAIK, the required experiments to provide definitive answers have not been done. The closest I have come is to state that the official Australian warming estimate for the century from 1910 is some 0.9 degrees C, when the estimates of self and colleagues, using realistic uncertainties, are more like 0.4 degrees C. If uncertainty analysis leads to figures as different as these, then I cannot take seriously the mathematical exercise of Vaughan Pratt that seems to rely uncritically on HadCRUT4 being accurate and with low uncertainty.
    It gets worse when ocean temperatures are added in. The uncertainty in these should be greater because their coverage is much less. At the start of Vaughan’s study period, there were very few sea surface temperatures available from the Southern Hemisphere. To rely on invented figures is not scientific – it is childish. No amount of clever math can compensate for data guesswork.
    I also concur with others who have noted that subtraction of diverse perturbations is hardly valid absent useful knowledge of the unknowns as well as the knowns, and particularly the magnitude of the knowns over time. Look at the way that TOA energy fluxes have been estimated by the data from several satellites whose displacements from each other are huge compared to the tiny differences in the final result that researchers regard as significant. Again, it is a matter of uncertainty, where the real TOA flux uncertainty must be so large as to be not useable for the classical measurement scientist. Geoff S

    • Geoff

      I think this piece from Vaughan is very interesting and well done but the data used is highly flawed. You say;

      “It gets worse when ocean temperatures are added in. The uncertainty in these should be greater because their coverage is much less. At the start of Vaughan’s study period, there were very few sea surface temperatures available from the Southern Hemisphere. To rely on invented figures is not scientific – it is childish. No amount of clever math can compensate for data guesswork.”

      Ocean temperatures are a chimera. They are not reliable and are often so sparse that ‘interpolation’ would be a gross exaggeration, they are invented.

      HMS Challenger in the 1870s made a worthy effort, covering a tiny fraction of 1% of the ocean surface. The notion that a single reading in a year in a grid square far removed from the target can be used as a scientific measure is somewhat counter intuitive.

      When I wrote my piece on SST’s some years ago, the only conclusion that could be arrived at was that the vast majority of SST’s had no proper scientific merit and were too few anyway to create a global database. The exceptions were a very few narrow, well travelled ocean trading or naval routes that were properly measured in a rigorous fashion.

      The SST’s started to acquire something approaching some sort of fairly vague but useable global guestimate around the 1960’s. They should not be used in a scientific paper prior to that.

      I think Judith said something similar a few years ago but perhaps has changed her mind.

      tonyb

    • Rather than debate whether CO2 is good for plants, how about considering that it can only be good for them provided they consume it?

      Ever since the PETM 56 million years ago, plants may have been reducing CO2 by consuming it. Eventually, very gradually, over millions of years, it dropped to 180 ppm.

      At that point they actually are starving.

      This naturally reduces their population, hence easing up on their consumption. CO2 rises back up again, reviving the plants to the point where they draw it back down.

      Even if this feedback isn’t the primary trigger for Milankovitch’s cycles, it might at least have amplified them.

      If at some point during this century CO2 emissions decrease to the point where these thriving CO2-gorging plants finally get caught up, they’re now back in the business of drawing down CO2 by consuming it.

      The last time the atmosphere was above 400 ppm, plants and CO2 were very gently shrinking down over millions of years.

      This time the plants are more like Audrey 2: FEED ME!

      I can see it now: CO2 further below 180 than at any time in the last 400 million years.

      With the surviving plants still whimpering, feed me.

      And with the CO2 still decreasing.

      (Sorry, sometimes my imagination gets away from me.)

    • @GS, @tonyb: “At the start of Vaughan’s study period, there were very few sea surface temperatures available from the Southern Hemisphere. To rely on invented figures is not scientific – it is childish. No amount of clever math can compensate for data guesswork.”

      Geoff and Tony, it’s all guesswork. Some guesses are more accurate than others. Here’s a graph that shows that the folk who curate HadSST3 and CRUTEM4 have guessed more accurately about the end of my study period than the start.

      The blue curve labeled GMST-SF is mainly HadCRUT4; the solar forcing correction SF is very small by comparison. Notice how closely HadCRUT4 tracks CO2 forcing on the right.

      On the left the blue curve is more wobbly, consistent with your point that it is supported by much less data in the Southern Hemisphere.

      Yet it still managed by some miracle to remain between the 1.7 and 2.0 curves. While there’s not enough data to support 1.85 spot on, I find it remarkable that the blue curve doesn’t just wander all over the place, given the lack of data that you complain about without making any attempt to quantify the resulting uncertainty.

      When I wrote “The proof is in the pudding” in my post, what I had in mind there was the mere fact that *any* relationship existed between early HadCRUT4 and CO2 as reconstructed from the ice cores from the Australian Antarctic Territory. As Figure 6 shows, that relationship is actually remarkably good. One could quantify how good in terms of how much the blue curve wobbles (standard deviation) on the left vs. on the right, relative to ARF = 1.85*log2(CO2).
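
The wobble comparison suggested here is straightforward to compute. A sketch, with a tiny placeholder series standing in for HadCRUT4 and the Law Dome/Mauna Loa CO2 record (the numbers below are illustrative, not the actual data):

```python
import math
import statistics

def arf(co2_ppm, tcr=1.85):
    """The post's forcing proxy: ARF = tcr * log2(CO2)."""
    return tcr * math.log2(co2_ppm)

def residual_sd(temps, co2_series, tcr=1.85):
    """Standard deviation of temperature residuals about the ARF curve
    (pstdev centres on the mean, so any constant baseline offset drops out)."""
    residuals = [t - arf(c, tcr) for t, c in zip(temps, co2_series)]
    return statistics.pstdev(residuals)

# Illustrative only: small wobbles around the ARF curve.
co2 = [285, 290, 300, 330, 370, 410]
temp = [arf(c) + e for c, e in zip(co2, [0.05, -0.08, 0.03, -0.02, 0.01, 0.01])]
print(f"residual sd: {residual_sd(temp, co2):.3f} C")
```

Computing this separately for the early and late halves of the record would quantify the left/right difference in wobble described above.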

  39. Re: “Lastly I propose 1.85 °C per doubling of CO2 as a proxy for HadCRUT4’s immediate transient climate response to all anthropogenic radiative forcings, ARFs, since 1850. … The figure of 1.85 for TCR holds not only on the right and left but the middle as well. … Both fits are achieved with TCR fixed at 1.85. … I would be very interested in any software providing comparably transparent and compelling evidence for a substantially different TCR from 1.85,…”

    Vaughan, your own work finds a TCR much less than 1.85 °C, because TCR is defined to be the response to increasing CO2 level alone. It is not the response to CO2 + all the other forcings which happened to have increased at the same time.

    If 100% of the warming is attributed to anthropogenic causes, and 3/4 of it is attributed to CO2, and 1/4 to other anthropogenic GHGs (which I think is pretty conventional), then TCR becomes 0.75 * 1.85 = 1.39°C. That is almost identical to the 1.41°C that I calculated for TCR, using the 1960-2014 interval, when assuming 100% anthropogenic attribution.
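
The scaling in this comment is worth spelling out, since the disagreement reduces to one multiplication (the 3/4 attribution fraction is the comment's stated assumption, not a measured quantity):

```python
all_forcing_response = 1.85  # deg C per doubling, fitted against all anthropogenic forcings
co2_share = 0.75             # assumed fraction of anthropogenic warming due to CO2 alone

# TCR is the response to CO2 (plus its feedbacks) alone, so the
# all-forcings fit must be scaled down by CO2's share of the forcing.
tcr = co2_share * all_forcing_response
print(f"implied TCR: {tcr:.2f} C per doubling of CO2")  # close to the 1.41 cited
```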

    • Dave

      As water is by far the largest greenhouse gas contributor-and overall its most potent- according to the IPCC, it would be useful to consider that aspect, as it may be contributing to the warming effect attributed to co2.

      I don’t know by how much, if any, water vapour amounts have changed over the decades due to man’s extraction or greater evaporation due to irrigation etc. I also don’t know if it has a diminishing logarithmic effect in a similar manner to CO2, which is surely at its most potent in its first 150ppm.

      Vaughan needs to be congratulated for his paper and the manner in which he responds to queries and comments

      tonyb

    • Tony, the TCR is defined as the impact on the GMST due to a doubling of CO2 when the CO2 ramps. All feedbacks are included, also the WV feedback. There are still other anthropogenic forcings (other GHGs, tropospheric and stratospheric O3, etc.) which are not considered in the measure of the TCR.

      • Yes, thank you, Frank. Sorry I was unclear. I meant that TCR is defined to be the response to increasing CO2 level, and its feedbacks, alone. That is, it does not include response to other anthropogenic forcings, like CH4, O3, CFCs, particulate/aerosol pollution abatement, etc. which incidentally happen to change at the same time that CO2 levels change.

      • Dave, this is correct. The mixing up of the TCR with the response to every anthropogenic forcing implies one more pitfall: the T-response in the future is much more CO2 dependent than in the period 1850…2018, due to the long lifetime of the (cumulative) CO2 forcing. Therefore this mixing of the terms implies a big failure when it comes to the doubling of CO2 according to the real definition of the TCR.
        For all readers: The true TCR found by VP is 1.23 for the HadCRUT4 dataset, NOT 1.85, which is the estimated TCR of the CMIP5 mean.

    • This appears to me to be an excellent point and may cast Vaughan’s post as something of a less rigorous confirmation of Lewis and Curry.

      • dpy: Dave and I tried hard to point out the mixing up of the TCR with the response to every ERFanthro… no response up to now. Thanks for stepping in!

      • To make the problem with the estimated TCR clearer:

        The author deduced the “TCR” from the effective radiative forcing (ERF) for the total anthropogenic forcing (red). The definition of the TCR implies the T-response to the CO2-related forcing (blue). There are other anthropogenic forcing agents at work (green), leading to an increasing overestimation of the “TCR” deduced from ERF anthro tot. after about 1970.

  40. Dave, very true! The ratio of the CO2 forcing to the total anthropogenic forcing is indeed 2/3 when looking at the timespan 1850…2018. Source: https://www.nicholaslewis.org/wp-content/uploads/2018/05/LC18-AR5_Forc.new_.csv . This gives a TCR of 1.23 for HadCRUT4 from the calculations of the main post, which is well in the ballpark of the L/C18 value (for this dataset) of 1.2, see Tab. 3 of this paper.
    Nic and Judy will be glad to see the confirmation of their values; however, the method used in L/C18 is much more stringent IMO, and peer reviewed. If one writes a longer blogpost at this place, I would suggest accounting for some fundamental definitions first.

    • Frank, my initial response was the same as yours and a couple of other posters in this thread. I wanted to hear the details from Dr. Pratt on how he derived his TCR values. As someone speaking on the subject matter as authoritatively as he has in this thread it was difficult for me to assume that he got the TCR derivation that wrong. I will be interested to hear his reply not only to that question but others that I posed in my initial post concerning his analysis.

  41. “Well, it seems highly unlikely that the vegetable kingdom has been responding to rising CO2 anywhere near as fast as we have been able to raise it. While plants may well be trying to catch up with us, their contribution to drawdown is hardly likely to have kept pace.”

    Au contraire, there is a huge and continual turnover of CO2 from plants that sustains the CO2 cycle. More CO2, more plants may sound trite, but the extra CO2 should be commensurate with the overall biomass sustaining it, hence they should be keeping pace quite well.

  42. “At the end of the day, the difference between VP’s and my analysis hinges on the amount of CO2 by 2100 and the value of TCR, both of which are associated with substantial uncertainty.”
    – Yes.

    “This proxy is reconstructed from ice cores at the Law Dome site in the Australian Antarctic Territory up to 1960 and as measured more directly at Charles Keeling’s CO2 observatory on Mauna Loa thereafter, giving the formula ARF = 1.85*log ₂(CO2) for all anthropogenic radiative forcing.”

    Would have helped to show the actual CO2 figures since 1850, if any. Despite good mixing, CO2 levels at Law Dome are not the same as at Mauna Loa and require quite a lot of imagination to equate them to each other to make up a graph. Still, we can only work with what we have.
    It is the temperature graph however that gives the 1.85 figure.
    Amazing that it crosses at just the middle of the graph – or is this because the ARF formula is worked purely off the temperature graph?
    In other words, is it only one figure and an idealized formula from that figure?
    What would happen if you had decided to use 1.2 or 2.6 for your ARF?
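
The 'what if 1.2 or 2.6' question has a direct numerical answer: in a formula of the form ARF = k*log2(CO2), the implied warming between any two CO2 levels scales linearly with k. A sketch, using illustrative endpoints of 285 ppm and 410 ppm rather than the post's actual series:

```python
import math

def implied_warming(co2_start, co2_end, k):
    """Warming implied by ARF = k * log2(CO2) between two CO2 levels."""
    return k * math.log2(co2_end / co2_start)

for k in (1.2, 1.85, 2.6):
    print(f"k = {k}: {implied_warming(285, 410, k):.2f} C")
```

So choosing 1.2 or 2.6 instead of 1.85 would simply rescale the fitted curve; the question is which value makes it track the temperature record.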

    “The figure of 1.85 for TCR holds not only on the right and left but the middle as well. CO2 is a good proxy for all centennial anthropogenic radiative forcing including aerosols.”

    CO2 levels, IMNHO, are a very murky field to enter into.
    CO2 is obviously a GHG and capable of having some warming effect on the atmosphere; otherwise we should all pack up our bags and do poetry and painting.
    We have only one main resource for this, Mauna Loa. Forget the Dome.
    Not very much corroboration anywhere else.
    “Lucky it mixes so well”.
    CO2 is derived for the main part from the breakdown of the biomass of sea and land vegetation.
    An enormous amount that turns over each year.
    For the last 40 years it has gone up and down each year like a metronome, with an upward trend of about 2 ppm annually.
    Wars, famines, droughts, rains, El Ninos, oil shortages and oversupply, etc.
    Nothing fazes it.
    When something chaotic behaves in such a man-made fashion, robotically increasing despite chaos, questions need to be asked and answered about the science on that peak.
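    That metronome-like regularity is easy to caricature: a linear trend plus an annual sinusoid already reproduces the sawtooth shape of the Mauna Loa record. A toy Python sketch with invented coefficients (not a fit to the actual record):

```python
import math

def toy_co2(year):
    """Toy Mauna-Loa-like curve: a linear trend plus an annual seasonal cycle.
    All coefficients are illustrative, not fitted to real data."""
    trend = 340.0 + 2.0 * (year - 1980.0)            # ~2 ppm/yr secular rise
    seasonal = 3.0 * math.sin(2.0 * math.pi * year)  # ~6 ppm peak-to-trough swing
    return trend + seasonal
```

    Sampled at the same phase each year, the seasonal term cancels and only the steady rise remains, which is essentially the commenter’s point: the seasonal swing repeats while the annual mean marches upward regardless of wars, El Ninos, and oil shocks.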

  43. From angech above:
    CO2 is derived for the main part from the biomass of sea and land vegetation breakdown.
    CO2 is controlled by the surface area of the oceans. The higher the ocean, the more radiant heat is reflected to the BLACK SKY. The higher the ocean, the less land area there is, so there is less green foliage to make oxygen.
    The daily heat gained from the sun is controlled by the height of the oceans. The higher the oceans, the greater the surface area to reflect radiant heat back to the BLACK SKY. At present Mother Nature is reflecting more radiant heat back to the BLACK SKY daily than the earth retains daily. Mother Nature is taking heat from the oceans to maintain a relatively constant surface temperature.
    THAT IS THE SIMPLICITY OF MOTHER NATURE’S DESIGN!!!!!

  44. Ireneusz Palmowski

    El Niño effect is very visible in the Central Arctic.

  45. Ireneusz Palmowski

    The magnetic activity of the Sun is still very low.

  46. Vaughan

    Thanks for this excellent article.

    Figure 2a and even more so figure 2b show a remarkable alignment of warming residual with solar irradiance (TSI). Almost enough to make a believer out of me. Leif Svalgaard hit the bullseye 🎯 .

    1.85 would be a good compromise climate sensitivity. Too high for skeptics, too low for alarmists.

    • ps: “1.85 would be a good compromise climate sensitivity. Too high for skeptics, too low for alarmists.”
      It’s NOT a compromise; it’s the result of some mixing up in the main post. The real TCR (according to its definition) in the main post is 1.23 for HadCRUT4.

  47. Well, it seems highly unlikely that the vegetable kingdom has been responding to rising CO2 anywhere near as fast as we have been able to raise it. While plants may well be trying to catch up with us, their contribution to drawdown is hardly likely to have kept pace.

    Within the paradigm of climate modelling, the language used to describe plants’ response to increased CO2 gives an impression that they are being cruelly force-fed something “against their will”, like a battery of geese being force-fed corn to make tasty pâté from their bloated livers. Those poor beleaguered plants, being force-fed carbon till they explode from excess.

    This is of course false. The plant kingdom has for all of the Pleistocene and even before, been carbon starved. Earth’s vegetation has existed in excellent health with atmospheric concentrations up to many thousands of ppm CO2. It allows them to photosynthesise better with less water loss. Recent CO2 starvation has resulted in the evolution of the C4 alternative type of photosynthesis that uses CO2 more efficiently. This is a stress response. The earth right now needs more, not less, CO2.

    CO2 increase is an unmitigated good for plants on earth. No force feeding is involved. If CO2 continues increasing, within a century the Sahara desert will start greening over.

    • phil salmon wrote:
      The plant kingdom has for all of the Pleistocene and even before, been carbon starved.

      I see this claim a lot. What’s the evidence for it?

      • “I see this claim a lot. What’s the evidence for it?”

        The greening of the Earth suggests plants are gorging on CO2 like there’s no tomorrow; but if they had your views I suppose they’d be more inclined to relax and soak up the good times.

        A scientific study published in Nature Climate Change and highlighted by NASA reveals that rising carbon dioxide levels are having a tremendously positive impact on the re-greening of planet Earth over the last three decades, with some regions experiencing over a 50% increase in plant life.

        The study, entitled, “Greening of the Earth and its drivers,” used satellite data to track and map the expansion of green plant growth across the globe from 1982 – 2015. Published in 2016, this study found that rising atmospheric carbon dioxide causes “fertilization” of plant life, resulting in a remarkable acceleration of increased “greening” across every Earth continent. As the study abstract explains:

        We show a persistent and widespread increase of growing season integrated LAI (greening) over 25% to 50% of the global vegetated area… Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend…

        https://www.climate.news/2019-04-26-nasa-declares-carbon-dioxide-is-greening-the-earth.html

      • jungle: Thanks. I know about those studies.

        But what’s the evidence plants were *starving*? What does “starving” even mean in this context?

      • BTW, more plants means more water use by plants — when humans are already facing water shortages around the world.

        Plants grown under elevated CO2 are less nutritious and have fewer minerals.

        More plants = more weeds, relative to the crops we want to grow. It means more insects. More CO2 = warmer temperatures, which affects plant growth and changes the hydrological cycle.

        It’s not at all clear to me that more CO2 is better just because plants like it. It’s a very simplistic notion. If CO2 was so great for plants, Venus would be absolutely overrun with plants, since its atmo is 96% CO2.

      • Must one state the obvious? Since the Earth is greening in areas where before it hadn’t, it must be that those areas weren’t conducive to growth; they were starved for something. Comprende?

      • More CO2 = More plants = more weeds, relative to the crops we want to grow. It means more insects.

        Indeed, more life in general.

        So hard to argue that increased life on earth is a problem for life on earth.

      • Are you claiming there were insufficient plants during the Vikings’ settlement of Greenland? Isn’t the greening there supposed to tell us what warm, prosperous times the Middle Ages were? But now you’re telling me plants were starving then??

      • Turbulent Eddie wrote:
        So hard to argue that increased life on earth is a problem for life on earth.

        So where is the life on Venus? Or Mars? Where atmo CO2 for both is 96%?

      • Turbulent Eddie wrote:
        So hard to argue that increased life on earth is a problem for life on earth.

        So more weeds and more insects and more water usage isn’t a problem for agricultural crops?

      • Peak water use in the U.S. occurred circa 1980. I’m not suggesting there aren’t aquifer issues, but for the sake of this argument it will do.

        And I’ve heard the “nutrition argument” from excess CO2, it’s generally couched within the context of a lab discussion. I’d venture that the 1.5 degree warming over the last 150 years has had next to zero impact on the nutrition of plants globally. And they’re certainly more nutritious where there were little to none before.

        https://thebreakthrough.org/journal/issue-5/the-return-of-nature

      • jungletrunks commented:
        Peak water use in the U.S. occurred circa 1980, though I’m not suggesting there aren’t aquifer issues, but for the sake of this argument it will do.

        And the rest of the world? (USA = 2% of the planet)

        And I’ve heard the “nutrition argument” from excess CO2, it’s generally couched within the context of a lab discussion.

        No, it comes from field studies.

        “Nitrate assimilation is inhibited by elevated CO2 in field-grown wheat,” Arnold J. Bloom et al, Nature Climate Change, April 6 2014.
        http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate2183.html

        “Higher CO2 tends to inhibit the ability of plants to make protein… And this explains why food quality seems to have been declining and will continue to decline as CO2 rises — because of this inhibition of nitrate conversion into protein…. “It’s going to be fairly universal that we’ll be struggling with trying to sustain food quality and it’s not just protein… it’s also micronutrients such as zinc and iron that suffer as well as protein.”
        – University of California at Davis Professor Arnold J. Bloom, on Yale Climate Connections 10/7/14
        http://www.yaleclimateconnections.org/2014/10/crop-nutrition/2014

      • “During a 20-year field experiment in Minnesota, a widespread group of plants that initially grew faster when fed more CO2 stopped doing so after 12 years, researchers reported in Science in 2018.”

        https://www.sciencenews.org/article/rising-co2-levels-might-not-be-good-plants-we-thought

      • “Negative impacts of global warming on agriculture, health & environment far outweigh any supposed positives.” Smith et al. PNAS (2009)
        http://www.pnas.org/content/106/11/4133.full.pdf

      • “Ask the Experts: Does Rising CO2 Benefit Plants?” Annie Sneed, Scientific American 1/23/18
        https://www.scientificamerican.com/article/ask-the-experts-does-rising-co2-benefit-plants1/

        From this article:

        “Even with the benefit of CO2 fertilization, when you start getting up to 1 to 2 degrees of warming, you see negative effects,” she [Frances Moore, an assistant professor of environmental science and policy at the University of California, Davis] says. “There are a lot of different pathways by which temperature can negatively affect crop yield: soil moisture deficit [or] heat directly damaging the plants and interfering with their reproductive process.” On top of all that, Moore points out increased CO2 also benefits weeds that compete with farm plants.

        “We know unequivocally that when you grow food at elevated CO2 levels in fields, it becomes less nutritious,” notes Samuel Myers, principal research scientist in environmental health at Harvard University. “[Food crops] lose significant amounts of iron and zinc—and grains [also] lose protein.”

      • “During a 20-year field experiment in Minnesota, a widespread group of plants that initially grew faster when fed”

        The operative word is “fed”.

        Didn’t I already tell you the analysis of nutrition from excess CO2 is based on lab results? BTW, how much CO2 were these plants fed? Do you realize a human can drown from drinking too much water?

      • BTW, how much CO2 were these plants fed?

        Read the article and paper.

        Same thing happened with the FACE experiments at Duke, by the way.

      • So more weeds and more insects and more water usage isn’t a problem for agricultural crops?

        I think you are in love with a disaster scenario.

        Evidently, there’s a global diabetes epidemic, from, you know,
        people having too much to eat:

        https://ourworldindata.org/exports/food-supply-by-region-in-kilocalories-per-person-per-day-1961-2013_v3_850x600.svg

      • Crop yields depend on many factors that are all happening at the same time. (I don’t know why people can’t understand this.) Climate change is a negative factor:

        “For wheat, maize and barley, there is a clearly negative response of global yields to increased temperatures. Based on these sensitivities and observed climate trends, we estimate that warming since 1981 has resulted in annual combined losses of these three crops representing roughly 40 Mt or $5 billion per year, as of 2002.”
        — “Global scale climate–crop yield relationships and the impacts of recent warming,” David B Lobell and Christopher B Field 2007 Environ. Res. Lett. 2 014002 doi:10.1088/1748-9326/2/1/014002
        http://iopscience.iop.org/1748-9326/2/1/014002

        “With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions.”
        – B. Liu et al, “Similar estimates of temperature impacts on global wheat yields by three independent methods, Nature Climate Change (2016) doi:10.1038/nclimate3115, http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3115.html

        “We also find that the overall effect of warming on yields is negative, even after accounting for the benefits of reduced exposure to freezing temperatures.”
        — “Effect of warming temperatures on US wheat yields,” Jesse Tack et al, PNAS 4/20/15
        http://www.pnas.org/content/early/2015/05/06/1415181112

        “Crop Pests Spreading North with Global Warming: Fungi and insects migrate toward the poles at up to 7 kilometers per year,”
        — Eliot Barford and Nature magazine, September 2, 2013
        http://www.scientificamerican.com/article/crop-pests-spreading-north-climate-change/

        “Suitable Days for Plant Growth Disappear under Projected Climate Change: Potential Human and Biotic Vulnerability,”
        — Camilo Mora et al, PLOS Biology, June 10, 2015
        http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002167

      • BTW, get back to us when US crop yields for 2019 come in….

      • “Read the article and paper.”

        You read it, then tell me. My reply will be; “so?”.

        The amount of CO2 increase we’ve had since the 1850s hasn’t had any appreciable effect, if any at all to date, on nutrition. Engineered crops and certain types of farming do lessen nutrition, but that’s not what we’re talking about. It’s dubious that the current projections for CO2 will have any appreciable effect on nutrition in plants that approaches what’s being done in the lab.

        But ponder that even less nutritious grasses will have more nutrition than what was barren before.

      • The amount of CO2 increase we’ve had since the 1850s hasn’t had any appreciable effect, if any at all to date, on nutrition.

        A peer reviewed study says you’re wrong, and yet you still deny it.

        Face it — you don’t care about the evidence whatsoever. Your standard is ideological, not scientific.

      • Appell, Your very first link is just a flat out lie. ““For wheat, maize and barley, there is a clearly negative response of global yields to increased temperatures.”

        Record yields have been occurring for many food crops around the world.

        One of key reasons commodity prices crashed earlier this decade, and have not appreciably recovered, is because of the commodities glut: for corn, wheat, maize, soy, etc., etc.; the result of record yields.

        I repost this link because the citations demonstrably indicate what’s going on in agriculture, not to mention it’s a pithy essay.

      • JT: Like most deniers, you are incapable of understanding that crop yields depend on more than one variable.

        Some inputs may be aiding yields, causing them to increase, while other factors are causing yields to decline.

        Did you ever study the calculus of functions of more than one variable?

      • jungletrunks wrote:
        TE “I think you are in love with a disaster scenario.”

        So tell us why there are no plants on Venus, where the atmosphere is 96% CO2? By your logic it should be overrun by plants….

      • “Like most deniers, you are incapable of understanding that crop yields depend on more than one variable.”

        First of all, I’m not a denier, though for you I’m sure it means one who hesitates to believe the earth will turn to ash in 10 years.

        “Some inputs may be aiding yields, causing them to increase, while other factors are causing yields to decline.”

        Inputs are generally decreasing, per the citation in the last paper I posted.

        Appell, again: yields are NOT declining. The data are easy to acquire; try looking outside your sphere of Huffpost/WaPost compost and like assorted ephemera.

        “TE “I think you are in love with a disaster scenario.”

        No, TE wrote it, it was accurate and pithy.

        “So tell us why there are no plants on Venus, where the atmosphere is 96% CO2?”

        The plants would have to have too much iron content to survive the crushing pressures of the Venusian atmosphere, for starters. I’ll let you fret about the other possibilities.

      • Rather than debate whether CO2 is good for plants, how about considering that it can only be good for them provided they consume it?

        Ever since the PETM 56 million years ago, plants may have been reducing CO2 by consuming it. Eventually, very gradually, over millions of years, it drops to 180 ppm.

        At that point they actually are starving.

        This naturally reduces their population, hence easing up on their consumption. CO2 rises back up again, reviving the plants to the point where they draw it back down.

        Even if this feedback isn’t the primary trigger for the Milankovitch cycles, it might at least have amplified them.

        If at some point during this century CO2 emissions decrease to the point where these thriving CO2-gorging plants finally get caught up, they’re now back in the business of drawing down CO2 by consuming it.

        The last time the atmosphere was above 400 ppm, plants and CO2 were very gently shrinking down over millions of years.

        This time the plants are more like Audrey 2: FEED ME!

        I can see it now: CO2 further below 180 than at any time in the last 400 million years.

        With the surviving plants still whimpering, feed me.

        And with the CO2 still decreasing.

        (Sorry, sometimes my imagination gets away from me.)

      • “What does “starving” mean for a plant?”

        Starvation: a shortage of the food necessary for continued metabolism, or of the amount needed to survive in a given environment.

        CO2 benefits:
        1) Expansion in crop productivity.
        2) Greening of dry deserts.
        3) Warming mainly in the form of taming frigid temperate and polar region winters, causing greening and more human and wild habitat.

        CO2 cons:
        1) Acceleration of glacial melt, leading to acceleration in sea level rise.
        2) Theoretical possibility in increase in tropical cyclone winds and precipitation.

        If SLR can be mitigated directly, CO2 is completely beneficial. If SLR can be mitigated indirectly by geoengineering an increased albedo, then we still have at least benefit #1 remaining. If we draw down atmospheric CO2 then we have zero benefit and are still logically left with the 20th-century rate of 2 mm/yr SLR.

        If we put all our technology eggs into diffusing the formation of tropical cyclones, the coastal SLR problem goes away (for a century), as well as wind and precipitation damage. We need to diffuse tropical storms.

      • David Appell: BTW, get back to us when US crop yields for 2019 come in….

        Is it not true that recent reviews (some presented here) have shown that the combination of increased temp, increased rainfall, and increased CO2 has been beneficial for crop yields and net primary productivity of land that has been undisturbed during those increases?

        CO2-induced warming does not happen in the absence of increased CO2, and the CO2 increase itself produces faster growth and greater resistance to drought — isn’t that well-established by research? (again, publications have been presented here at Climate Etc.)

        In the US, recent harvests may have been reduced compared to their potential by later planting (due to longer-lasting winters) and earlier frosts.

      • Soybeans are down in US. Perhaps because of the trade war.

        The longer winter/earlier frost: lol.

        It was spring and summer flooding that shortened the growing season. And, it’s snowing like crazy, which means the Big Mo and the Big Muddy could be flooding big again next spring.

      • RG: “What does “starving” mean for a plant?”

        The gross primary production of plants is carbohydrates, manufactured primarily from the carbon in the CO2 they take in from the atmosphere and the hydrogen in the H2O they get from wherever they can. To make amino acids they also need nitrogen, while to make ATP they need phosphorus.

        Carbohydrate synthesis proceeds via photosynthesis (the Calvin cycle). Just as with animals, part of a plant’s metabolism also entails respiration via the Krebs cycle, or citric acid cycle as it’s called these days: inspiring oxygen, expiring CO2. Net primary production is gross primary production net of expired CO2.

        By “starving” for a plant I had in mind any shortage of the above stunting its normal growth or making it more vulnerable to disease and pests, with the long term effect of reducing plant biomass as well as CO2 drawdown via photosynthesis.

      • Where I stated “Record yields have been occurring for many food crops around the world.” Obviously my intent is not to imply there are record yields for all crops year/year globally, every single year; I’m referring to a demonstrable trend that’s been ongoing since the early 20th century.

        Narrower in focus, below are 30-year U.S. yield trends. Natural variability of weather, pests, and supply-and-demand variables necessitates crop rotation to maximize profit (e.g., the ongoing trend of the last couple of decades has been that less wheat is planted, etc.). There are numerous reasons for year-over-year seasonal blips in various yields, yet increasing yield trends have not waned for over a century. Though they don’t set records every single year, they continually, demonstrably, set records while using less land and, for many crops, fewer inputs to do it.

        Corn:
        https://www.nass.usda.gov/Charts_and_Maps/graphics/cornyld.pdf
        Rice
        https://www.nass.usda.gov/Charts_and_Maps/graphics/riceyld.pdf
        Soybean
        https://www.nass.usda.gov/Charts_and_Maps/graphics/soyyld.pdf
        Cotton
        https://www.nass.usda.gov/Charts_and_Maps/graphics/cotnyld.pdf
        Winter wheat
        https://www.nass.usda.gov/Charts_and_Maps/graphics/wwyld.pdf

      • David

        I see this claim a lot. What’s the evidence for it?

        https://royalsocietypublishing.org/doi/abs/10.1098/rstb.1998.0198

        The decline of atmospheric carbon dioxide over the last 65 million years (Ma) resulted in the ‘carbon dioxide–starvation’ of terrestrial ecosystems and led to the widespread distribution of C4 plants, which are less sensitive to carbon dioxide levels than are C3 plants. Global expansion of C4 biomass is recorded in the diets of mammals from Asia, Africa, North America, and South America during the interval from about 8 to 5 Ma. This was accompanied by the most significant Cenozoic faunal turnover on each of these continents, indicating that ecological changes at this time were an important factor in mammalian extinction. Further expansion of tropical C4 biomass in Africa also occurred during the last glacial interval confirming the link between atmospheric carbon dioxide levels and C4 biomass response. Changes in fauna and flora at the end of the Miocene, and between the last glacial and interglacial, have previously been attributed to changes in aridity; however, an alternative explanation for a global expansion of C4 biomass is carbon dioxide starvation of C3 plants when atmospheric carbon dioxide levels dropped below a threshold significant to C3 plants. Aridity may also have been a factor in the expansion of C4 ecosystems but one that was secondary to, and perhaps because of, gradually decreasing carbon dioxide concentrations in the atmosphere. Mammalian evolution in the late Neogene, then, may be related to the carbon dioxide starvation of C3 ecosystems.

        https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/j.1365-2745.2011.01905.x

        Our simulations indicate a reduction in the capacity of the terrestrial biosphere to weather continental silicate rocks by a factor of four in response to successively decreasing [CO2]a values (400, 280, 180 and 100 p.p.m.) and associated late Miocene (11.6–5.3 Ma) cooling. Marked reductions in terrestrial weathering could effectively limit biologically mediated long‐term carbon sequestration in marine sediments.

      • Vaughan Pratt wrote on December 30, 2019 at 10:47 pm, “…plants may have been reducing CO2 by consuming it. Eventually, very gradually, over millions of years, it drops to 180 ppm. At that point they actually are starving. This naturally reduces their population, hence easing up on their consumption. CO2 rises back up again, reviving the plants to the point where they draw it back down. Even if this feedback isn’t the primary trigger for Milankovitch’s cycles, it might at least have amplified them…”

        You’re describing a negative feedback loop (CO2 Fertilization Feedback), but it cannot oscillate like that. The only way to get a system to oscillate like that is with a long delay in the feedback path, or with integral feedback.

        In the case of CO2 Fertilization Feedback, the feedback is immediate. Plants’ photosynthesis rates respond to changing CO2 levels instantly (or, at most, in a matter of hours — if the CO2 level changes at night, you won’t see a response until morning). Global atmospheric CO2 level changes are necessarily much, much slower than that (several orders of magnitude). So there’s no possibility of “overshoot,” in which plants continue to drive CO2 levels ever lower, even though the CO2 concentration is already down to starvation levels.
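        The role of delay is easy to demonstrate numerically: the same negative feedback that relaxes monotonically when applied instantly will overshoot and ring when applied to a sufficiently lagged value. A toy Python sketch, with all constants invented for illustration:

```python
def simulate(lag, gain=0.25, steps=300):
    """Toy drawdown loop: each step, the level is reduced in proportion to
    its value `lag` steps earlier. Returns the deviation from equilibrium."""
    x = [1.0] * (lag + 1)                      # start displaced from equilibrium
    for _ in range(steps):
        x.append(x[-1] - gain * x[-1 - lag])   # negative feedback on a delayed value
    return x

immediate = simulate(lag=0)   # geometric decay toward zero, no overshoot
delayed = simulate(lag=5)     # same gain, but the lag produces damped oscillation
```

        With lag=0 the level just decays; with the lag it crosses zero and oscillates before settling, which is the overshoot the comment says an instantaneous CO2-fertilization response cannot produce.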

        Vaughan Pratt continued, “If at some point during this century CO2 emissions decrease to the point where these thriving CO2-gorging plants finally get caught up, they’re now back in the business of drawing down CO2 by consuming it. … I can see it now: CO2 further below 180 than at any time in the last 400 million years. … With the surviving plants still whimpering, feed me. … And with the CO2 still decreasing. (Sorry, sometimes my imagination gets away from me.)”

        The only way that could happen is if some nitwit decided to engineer C4 trees.

        C4 plants have the ability to draw down atmospheric CO2 much further than most other plants. Thankfully, all current C4 plants are short-lived (mostly grasses). So they do not sequester CO2 for very long. They draw down CO2 as they grow in the spring and summer, but they release it as they die and rot in the fall and winter.

        But trees have the ability to sequester CO2 for much longer, often for centuries. That means C4 trees would be dangerous.

        In a nuclear & battery-powered world, with no appreciable anthropogenic CO2 emissions, CO2 levels would plummet, until CO2 fertilization feedback slowed the decline. With current flora, it would slowly bottom out in the neighborhood of 300 ppmv, as the growing shortage of atmospheric CO2 slowed plant growth.

        But C4 trees could cause the atmospheric CO2 level to bottom out much lower, perhaps low enough to extinguish most C3 plant life. So, if someone proposes engineering C4 redwoods, then, please, for the sake of life on Earth, stop him.

  48. Interesting analysis and post here, Dr. Pratt. I am not sure that I understand exactly how your TCR was derived. Could you provide more details? How does your proxy data correlate with that from AR5? You also gave criteria for selecting your filter after much experimenting. It is not clear whether the criteria were determined ex post facto or a priori. Did you do any sensitivity analysis with other filters and other data sources?

    • ” I am not sure that I understand exactly how your TCR was derived.”

      I started with HadCRUT4 as the gold standard for global mean surface temperature since 1850. I observed that CO2’s influence varies much more slowly than almost all other climate influences visible in GMST.

      So I tuned them all out, using the same sort of tuning technology used to tune out unwanted broadcast stations. For this case I designed a filter for that purpose, namely convolution with the trapezoid I mentioned, with a 75-year base and a 55-year flat top. I call this the centennial filter because it has the effect of filtering out all natural influences on GMST that vary with periods shorter than a century.
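      For anyone who wants to play with the idea without MATLAB, the kernel shape is easy to construct: a trapezoid on a 75-year base with a 55-year flat top is the convolution of two boxcars of widths 65 and 11 (65 + 11 - 1 = 75 support, 65 - 11 + 1 = 55 flat samples). A Python/NumPy sketch of just the kernel, not of the full script:

```python
import numpy as np

def centennial_kernel():
    """Trapezoid on a 75-sample base with a 55-sample flat top,
    built as the convolution of two boxcars (65 and 11 samples),
    normalized to unit area so a constant series passes through unchanged."""
    k = np.convolve(np.ones(65), np.ones(11))
    return k / k.sum()

def centennial_filter(annual_series):
    """Smooth an annual series; 'valid' mode keeps only fully-overlapped years."""
    return np.convolve(np.asarray(annual_series, dtype=float),
                       centennial_kernel(), mode="valid")
```

      A pleasant side effect of the 11-sample factor is that it exactly annihilates any 11-year sinusoid, such as a solar-cycle signal, since an 11-point running sum of a period-11 sine is zero for any phase.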

      This filter attenuated these sub-centennial fluctuations to an almost invisible total background noise of 2 mK.

      It also attenuated a 130-year cycle in total solar irradiance, but only to 60% of its contribution to GMST. To get rid of the remainder of that unwanted signal, instead of using a filter I simply subtracted it. This yielded the following picture showing GMST with all natural influences removed by filtering out the fast ones and subtracting the only slow one, alongside log₂(CO2).

      Obviously log2(CO2) is not rising anywhere near as fast as GMST, indicating that we need to scale it up more. Here are two plots showing the result of scaling up log2(CO2) by respectively 1.7 and 2.0.

      The last decade is where we have the most reliable data, and also where CO2 is rising fastest. Since GMST sits exactly between 1.7 and 2.0, their mean of 1.85 is a perfect choice of TCR for a great fit to GMST.
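      Rather than eyeballing 1.7 versus 2.0, the scale factor can be pinned down by ordinary least squares on the filtered series. A hypothetical Python sketch with synthetic stand-in arrays (the real inputs would be the filtered HadCRUT4 and log2(CO2) series from the script):

```python
import numpy as np

def fit_scale(log2_co2, gmst):
    """Least-squares slope and intercept for gmst ~ k*log2_co2 + c."""
    k, c = np.polyfit(log2_co2, gmst, 1)
    return k, c

# Synthetic stand-in: a series warming at 1.85 K per doubling, plus
# millikelvin-scale residual noise like the 2 mK background quoted above.
rng = np.random.default_rng(0)
x = np.log2(np.linspace(290.0, 405.0, 100))      # ppm range, illustrative
y = 1.85 * x + 0.3 + rng.normal(0.0, 0.002, x.size)
k_hat, c_hat = fit_scale(x, y)
```

      On real data the interesting question is whether the recovered slope stays stable as the fit window moves from the well-observed recent decades back into the sparse 19th-century record.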

      Using this middle value for the first few decades isn’t quite as perfect. However the GMST data is a lot sparser there, and also the CO2 signal is a lot weaker. Even so, it’s not clear whether either of 1.7 or 2.0 is better than the other, making 1.85 still a good choice even as far back as the 19th century.

      Which is quite surprising given that the conventional wisdom is that CO2 was so weak back then as to be undetectable.

      Which shows how good a job the script has done tuning out the natural signals that had been drowning out the CO2 signal prior to 1940.

      The MATLAB script computing GMST, SF, and ARF (= 1.85*log2(CO2)) is freely available at http://clim8.stanford.edu/MATLAB/ClimEtc for inspection and trying out in a MATLAB environment. There is no climate data at that site: the script downloads it from three reliably sourced sites.

      • @DA: “Why is [HadCRUT4] the gold standard”.

        1. The Met Office in London and the Climatic Research Unit in East Anglia have done a bang-up job curating something like a billion temperature readings.
        2. They provide 30 more years of coverage than GISTEMP.
        3. GISTEMP rises disconcertingly fast, presumably because they weight polar data more heavily than HadCRUT4 does.
        4. BEST only offers land temperature; for sea surface temperature they borrow HadSST so why not go to the source?

        But if you have a preferred land-sea surface temperature data extending at least as far back as 1850 I’d be very interested to see it.

        @DA: “Who says [log2(CO2)] should [rise as fast as GMST]?”

        Are you asking about log2(CO2) itself, or TCR*log2(CO2) where TCR is the transient climate response as understood in this sense?

        Assuming the latter, whoever is proposing it as the explanation for rising GMST.

        If you have a better explanation let’s have it and we can ask around to see who prefers it. Science advances via competition between competing hypotheses judged by peer review.

      • VP: Your only logic is “MORE READINGS = BETTER DATA,” which is not how science works.

        part 2: I was asking why log base 2 instead of base something else, like e, which every sane person uses. Who needs the extra constant?

      • PS: Everyone thinks that ΔT = λ*ΔF = λ*α*ln(CO2/CO2_0) where “ln” is hopefully what you think it is.

      • Thanks for this disclosure of the calculation behind your estimated “TCR”. It’s now quite clear that the method is fatally flawed. Your calculated value is something like an “AFR”, for Anthropogenic Forcing Response. It’s not the TCR by its definition: “The transient climate response (TCR) is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year 60 to 80 years following initiation of the increase in CO2.”
        However, you are able to get the real TCR by looking at the ERF data; I linked it in a comment above. HNY to you and yours.

      • David

        Surely more readings DO mean better data? That IS scientific. As an example take oceanic SSTs in the 19th century and much of the 20th, especially in the polar regions.

        Data is extremely sparse. Is taking one reading on one day in one grid box and then interpolating that across numerous grid cells in the general vicinity as good as taking daily actual readings in every grid cell?

        Of course it isn’t. In the former case the data does not merit the words ‘scientific’ or even ‘worthwhile’.

        Tonyb

      • VP, I am not convinced that your 1.85 coefficient, that you have called TCR and PCR, can even be connected to an estimate of TCR. It appears as a number that has come out of your unorthodox modeling of forcing and global mean surface temperature (GMST) in matching the forcing and temperature curves with a little extra tinkering to make the mid part of the curves match better. You do make some interesting discussion points unconnected to your modeling.
        There are methods available for decomposing the GMST series that would appear to be more straightforward than your filter method.

        Singular Spectrum Analysis (SSA) and Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) are two such methods that decompose time series into noise, periodic variations and secular trends. I prefer CEEMDAN because it is empirical and does not require any parameters, nor linear or stationary series. I have used both in obtaining the type of trends that you are attempting to obtain. I have compared results using those methods with the end-point averaging method that Nic Lewis has used in estimating observed TCR values. The end results were very much the same. Nic’s method does require a judicious choice of end points to avoid potential biasing from multi-decadal natural variations and volcanic eruptions. I use CEEMDAN and SSA in R but both methods are available in MATLAB.

        Forcing data are available for the historical period for the CMIP5 models. That data should match the estimated series for the observed forcing. The following link will get you to that data:
        http://www.pik-potsdam.de/~mmalte/rcps/

        Your assumption of using a term like TCR would come, in my mind, from the consideration that your filtered GMST trend is due exclusively to response to CO2. I think you should go back and consider all the forcing in the period under analysis. You might also want to use the Cowtan-Way infilled version of HadCRUT4.

      • @DA: “I was asking why log base 2 instead of base something else, like e, which every sane person uses. Who needs the extra constant?”

        Hopefully you’ve since been able to figure this out for yourself. If not you’re in the wrong line of work.

      • VP, I do not have MATLAB – I use R. I was thinking that it would have been an easier task for you to merely derive what I requested and put the results in a post. I wanted to compare your CO2 forcings and concentrations to those in the link that I provided in a post upstream.

        In the meantime I will use your formula on the linked CO2 concentration and determine how well that forcing compares with that from the link. AR5 I believe used the formula I have listed below. There have since been modifications to this formula that I believe I have seen in Nic Lewis’ articles and/or papers.

        ΔF = 5.35*ln(C/C0) in watts/m^2,
        where C is the CO2 concentration in parts per million by volume and C0 is the reference concentration.
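        As a quick self-check of the formula, a doubling of CO2 should recover the canonical ~3.7 W/m^2. A minimal Python sketch (the concentrations are illustrative, not taken from the linked data):

```python
import math

def delta_F(C, C0):
    """AR5 simplified CO2 forcing expression, in W/m^2."""
    return 5.35 * math.log(C / C0)

# A doubling recovers the familiar value: 5.35*ln(2) ~= 3.71 W/m^2
F2x = delta_F(560.0, 280.0)
```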

        By the way, the data I have for forcings shows that the CO2 forcing is approximately 90% of the net total forcing during the historical period from 1861-2005 or even 1861-2016. The portion of the CO2 forcing to the total forcing without the direct and indirect aerosol forcings is approximately 2/3. I do not know at the moment how well the net total forcings will be tracked by the CO2 forcing over an extended period of time, and thus how much your curve shapes would be affected.

        I do not understand the need to filter the CO2 forcing data if that data is on an annual basis, aside from a little smoothing. I tend to lose track of what you have done here, but I have to ask: did you use the same filter for both the GMST and forcing series?

      • VP, using the AR5 formula on the CO2 concentrations and the CO2 forcings from the link below I obtain a forcing change of 1.88 watts per meter squared for the period 1850-2017. Using your formula and the CO2 concentrations from the link below I get a forcing change for the same period of 0.97 watts per meter squared. Why?

        http://www.pik-potsdam.de/~mmalte/rcps/

      • @kf: ” Using your formula and the CO2 concentrations from the link below I get a forcing change for the same period of 0.97 watts per meter squared. Why?”

        Because JC’s post and mine were both primarily interested in projecting temperature: she projected 1 more degree, I projected two more. Since I was using temperature to calibrate PCR, and since I didn’t know (or more precisely didn’t believe) the consensus conversion between degrees K for Earth’s surface temperature and irradiance in W/m2 at top of atmosphere, I made no attempt to estimate radiative forcing in units of W/m2 and gave it in units of degrees.

        If you have a conversion factor between RF at top of atmosphere and degrees at the surface I’d be more than happy to use your factor to convert my factor of 1.85 degrees per doubling of CO2 to W/m2 of radiative forcing per doubling of CO2.
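        Incidentally, the factor-of-two gap in kf’s comparison can be seen without any concentration data at all: the two formulas differ only by a constant, since 5.35*ln(r) / (1.85*log2(r)) = 5.35*ln(2)/1.85 ~= 2.0 for any ratio r. A Python sketch (endpoint concentrations illustrative only):

```python
import math

r = 405.0 / 285.0  # illustrative 1850->2017 CO2 concentration ratio

ar5_Wm2 = 5.35 * math.log(r)   # AR5 forcing change, W/m^2
vp_degC = 1.85 * math.log2(r)  # Pratt's formula, degrees C

# The ratio is the same for ANY r: 5.35*ln(2)/1.85 ~= 2.0 W/m^2 per degree C
implied_conversion = ar5_Wm2 / vp_degC
```

        So the two numbers are not in conflict; they implicitly assume a conversion of about 2 W/m2 per degree C.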

        In the past I’ve been using 3.7 W/m2 per degree K as that conversion factor. This is the value of the derivative of the Stefan-Boltzmann relation F = σT⁴ at Earth’s effective temperature of 254 K. It is the value I used in my poster on December 12 at AGU19 in session A41M. The convenor of that session, Mark Richardson, stopped by my poster and told me that more recent estimates of the factor were well below 3.7. I need to look into this further.
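        That derivative is a one-liner to verify, since d/dT of F = σT⁴ is 4σT³:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 254.0           # Earth's effective temperature, K

# Derivative of F = sigma*T^4 evaluated at T_EFF
dF_dT = 4 * SIGMA * T_EFF**3  # ~3.7 W/m^2 per K
```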

        However W/m2 is an intermediate step between measurements of CO2 in ppm that we have equipment to make, and measurements of surface temperature in degrees K that we also have equipment to make. Measuring radiative forcing in W/m2 is much harder, and deciding how much of that forcing should be attributed to CO2 is even harder–presumably one would need a lot of spectrally sorted data at ToA over many years in order to get sufficient confidence. Given that, and also since it’s an intermediate step that isn’t needed for anything useful I haven’t taken it as seriously as L/C have.
        I see the main advantage of W/m2 as breaking a one-step relation between CO2 concentration and temperature into two steps via an intermediate quantity more uncertain than either CO2 level or temperature. The advantage is that this gives more room to fudge the data in whatever direction one wishes. A symptom of such fudging is arriving at a very different value from that of the consensus.

        Anyway, as I said, if you have a conversion factor between ToA radiative forcing in W/m2 and degrees K I’d be more than happy to use yours in order to answer your question. If you don’t, then I’ll need to research the recent literature Mark Richardson promised to point me to, to see how far below the old value of 3.7 W/m2/K I need to go. (I believe that value originated decades ago with Jule Charney’s committee.)

      • VP, I am one who does not mind seeing unconventional approaches to analyzing issues like the one you are presenting here, and further I do not expect a layperson discussing these issues in a specialized area of science to have all the jargon and specialized language of that field down. But your references here to TCR (now PCR) and to ARF (conventionally in watts/m^2) now in degrees C are more than confusing to a reader of your analysis who is attempting to understand exactly what you have done. The quotes I have from your opening thread post explaining ARF surely appear to mean you are obtaining a value that is conventionally expressed in watts per meter squared.

        “Applying our centennial filter to HadCRUT4 yields the blue curve in Figure 1, while applying it to ARF (anthropogenic radiative forcing as estimated by our proxy) yields the red curve.”
        “..giving the formula ARF = 1.85*log ₂(CO2) for all anthropogenic radiative forcing..”

        To determine how that forcing change would translate to a GMST change requires using a sensitivity response factor, but first you would have to put the forcing into the correct dimensions and be assured that the forcing estimate comes from a legitimate source.

        If TCR= F2XCO2*(ΔT/ΔF) then ΔT=TCR*ΔF/F2XCO2 . Lewis and Curry’s 2018 paper linked below is a good reference and also has literature references in that paper that can be helpful.

        https://www.nicholaslewis.org/wp-content/uploads/2018/07/LewisCurry_The-impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity_JCli2018.pdf

        Another relationship that can be used with models is ΔT=ΔF/ρ where ρ is the climate resistance and is equal to λ + κ where λ is the feedback parameter and κ is the ocean heat uptake efficiency. The relationship assumes that the forcing is increasing monotonically and comes from Forster et al. (2013).
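        The rearrangement above is easy to sanity-check numerically. A sketch with illustrative values drawn from the thread (Nic Lewis-style TCR of ~1.3 C, the AR5 F2XCO2 of ~3.7 W/m2, and the 1.88 W/m2 CO2 forcing change quoted upthread; all assumed, illustration only):

```python
# Energy-budget rearrangement: TCR = F2xCO2*(dT/dF)  =>  dT = TCR*dF/F2xCO2
F_2XCO2 = 3.7  # W/m^2 per doubling (AR5-style value, assumed)
TCR = 1.3      # degrees C (observational estimate cited in the thread)
dF = 1.88      # W/m^2 CO2 forcing change 1850-2017 (figure quoted upthread)

dT = TCR * dF / F_2XCO2  # implied CO2-only warming over the period, degrees C
```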

      • “Mark Richardson promised to point me to to see how low I need to reduce the old value of 3.7 W/m2/K to. (I believe that value originated decades ago with Jules Charney’s committee.)”

        VP, that sounds more like the value for F2XCO2, where WITHOUT FEEDBACKS the radiative forcing of approximately 3.7 W/m2, due to doubling of the CO2 level from the pre-industrial 280 ppm, would eventually result in roughly 1 °C global warming. The key proviso there is without feedbacks. The equilibrium global warming with feedbacks would be significantly higher than 1 degree. The mean of an ensemble of CMIP5 climate models yields a value of approximately 3.4 W/m2 for F2XCO2 and an average ensemble ECS of 3.2 degrees C. Your experiment is dealing with a transient condition where a good deal of the extra heat generated by the forcing is going into the ocean, and thus is more applicable to TCR and the equation I listed in the previous post.

        I remain unclear what it is that your 1.85*log2(CO2) means in terms of forcing in watts/ m2 or in degrees C. To get something in degrees C I would think you will need another factor not shown in your equation. In watts/m2 it obviously gives the wrong values.

    • VP, could you list or table the CO2 forcings for each year that you derive from your CO2 concentrations using your formula? Including the CO2 concentrations would be helpful also.

      • @kf: “…list or table the CO2 forcings for each year that you derive from your CO2 concentrations using your formula” and “the CO2 concentrations”

        The forcings only exist in filtered form, namely as ARF during the run, and the concentrations only exist at the website they’re downloaded from (but I’ve added an extra line at the end that gives them, unfiltered, and you can get the unfiltered unit forcings as log2(CO2), or 1.85*log2(CO2) assuming PCR = 1.85).

        My entire script is at http://clim8.stanford.edu/MATLAB/ClimEtc as the file curry.m. For CO2 it downloads the concentrations from scrippsco2 at UCSD, converts them to unit forcings (equivalent to 1 °C per doubling) by applying the MATLAB function log2, filters out the subcentennial fluctuations, and centers them.

        Here’s the portion of curry.m that does this. It’s self-contained so you should be able to run it as it stands if you have MATLAB, let me know if you don’t. I added a line at the end to give you the concentrations for 1850-2017 in the variable CO2 since they don’t otherwise exist in one place in memory. You can then get the unfiltered forcings relative to 280 ppm as 1.85*log2(CO2/280). (David Appell would presumably insist on 2.67*log(CO2/280) where log is MATLAB’s name for natural log.)

        % PRELIMINARIES
        box = @(w)(ones(w, 1)/w);      % Box shape, unit area
        K = conv(box(11), box(65));    % K = Centennial convolution kernel
        f = @(d)conv(d, K, 'valid');   % f = Centennial filter
        c = @(d)(d - mean(d));         % c = Centering function

        Y = (1850:2017)';              % Time span for "recent" climate
        wo = weboptions('Timeout', 15); % The default of 5 seconds is too short

        % DOWNLOAD
        CO2cell = webread( ...
        ['https://scrippsco2.ucsd.edu/assets/data/atmospheric' ...
        '/merged_ice_core_mlo_spo/spline_merged_ice_core' ...
        '_yearly.csv'], wo);

        % EXTRACT, FILTER, CENTER
        ARF = c(f(1.85*log2(CO2cell{Y, 2})));

        % (following not in curry.m)

        % CONCENTRATIONS (for kf)
        CO2 = CO2cell{Y, 2};

      • @kf: Oops, when I hit the “Post Comment” button it broke two lines. MATLAB will find them for you by complaining about them. Join them back together.

      • @kf: Oh, and if translating to R, you can define f(d) in one line as
        ma(ma(d, 11), 65). (MATLAB doesn’t have ma.)
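        For readers working in neither MATLAB nor R, the filter portion translates line for line to NumPy. A sketch (not the author’s code, but functionally equivalent to box, K, f, and c above):

```python
import numpy as np

def box(w):
    """Box kernel of width w with unit area."""
    return np.ones(w) / w

# Trapezoidal centennial kernel: 11-wide box convolved with 65-wide box
K = np.convolve(box(11), box(65))  # length 11 + 65 - 1 = 75, unit area

def f(d):
    """Centennial filter: keep only the fully overlapped ('valid') part."""
    return np.convolve(d, K, mode='valid')

def c(d):
    """Center a series on its mean."""
    return d - np.mean(d)
```

        Applied to the 168 annual values for 1850-2017, `f` returns 168 - 75 + 1 = 94 filtered values, the same as MATLAB’s `conv(..., 'valid')`.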

  49. Climate sensitivity is a thought experiment – gone horribly wrong IMHO. 😊

    CS = ΔT @ 2xCO2

    ΔT integrates all changes in forcings and feedbacks – as well as any internal variability. Changes in CO2, the largest greenhouse gas forcing, are a simple proxy for changes in forcings and feedbacks. So let’s get rid of those pesky long tails and wonder what this means. The central estimate of the ‘constrained’ TCR result below is consistent – within reasonable limits – with both Nic Lewis and Vaughan Pratt. My initial reaction is ho-hum.


    “New study determines Earth’s climate sensitivity from recent global warming”

    The Earth’s energy imbalance is neither constant nor consistently positive. The difference between energy in and energy out cannot be directly measured from satellites – because of an intractable intercalibration problem. Measurement of outgoing radiation change is, however, relatively precise – despite some teething issues – and changes there are an order of magnitude larger than changes in effective insolation. Let’s go back to tropical ERBE data – itself consistent with independent ISCCP satellite and in-situ ocean heat data.


    https://journals.ametsoc.org/doi/pdf/10.1175/JCLI3838.1

    Net radiant flux up is planetary warming by convention.

    Net = -SW - IR

    It shows a decrease in reflected SW of 2.1 W/m2 in the period of coverage and an increase in IR emissions to space of 0.7 W/m2. It shows a pattern of changes in cloud cover consistent with an emerging understanding of MBL stratocumulus cloud dynamics. A positive feedback to AGW – with perhaps a ‘tipping point’ in the foreseeable future as CO2 concentrations double and then double again. But with an internal component seen as SST changes in the upwelling region of the eastern Pacific Ocean especially. Diverse observations in the region show low cloud cover and warming both in the last decades of the 20th century and post ‘hiatus’. We know that the latter shifts episodically at decadal to millennial scales.

    The question in my mind is what implications a world cooling in IR and warming in SW has for a greenhouse gas induced radiative imbalance and thus ECS?

    • ECS doesn’t include any natural factors — it’s CO2e only.

      • The ECS notion extends TCR over millennia through exceedingly slow isopycnal mixing of atmospheric heat into oceans until a new energy equilibrium is established at TOA.

      • Again, ECS doesn’t include natural factors.

      • As it is model derived – one would again have to doubt your veracity. My question went to the underlying theory.

      • David Appell | December 30, 2019 at 5:16 pm | Reply
        “ECS doesn’t include any natural factors — it’s CO2e only.“
        Wrong, naturally.
        “A component of climate sensitivity is directly due to radiative forcing, for instance by CO2, and a further contribution arises from climate feedback, both positive and negative.[7] Without feedbacks the radiative forcing of approximately 3.7 W/m2, due to doubling CO2 from the pre-industrial 280 ppm, would eventually result in roughly 1 °C global warming. This is easy to calculate[note 2][8] and undisputed.[9] The uncertainty is due entirely to feedbacks in the system: the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback.[9] Due to climate inertia, the climate sensitivity”

  50. Pingback: Two more degrees by 2100! – Weather Brat Weather around the world plus

  51. I agree with VP that TCR is the relevant metric for 21st century policy, however, at the same time I feel that such analysis gives one too much license to ignore the ocean’s effects of inertia. There are two very powerful variables that I have not seen analyzed on this post yet: ocean uptake of CO2 and ocean uptake of energy. Both will undoubtedly be in logarithmic growth with a growth in ACO2 emissions as they get stretched further and further from their respective equilibria.

    Also, my understanding is that Lewis and Curry’s and others’ projections of GMST for 2100 are of the mean trend, not an actual number to land on in an actual year. So I see no benefit in spreading the target years to a 2063-2137 range. It’s only where the trend line passes that matters.

    Thanks for your post.

    • “So I see no benefit in spreading the target years to a 2063-2137 range.”

      The benefit is much higher confidence in temperature than can be achieved by consideration of “where the trend line passes”. When you forecast that way you need to put large error bars around the intercept with the vertical 2100-year line. If instead you average the same way the filter does, you can use the 2 mK standard deviation of the filtered residual as an estimate of the error bars for the average over 2065-2137.

    • DA: “Let’s see all the math, please.”

      The beauty of my approach is that you don’t need any math to see that if you have a black box with an output that has been fluctuating about a fixed mean for a century with a standard deviation of 2.3 mK, and you let it continue to run for another century or more while the input to the black box continues to follow whatever laws had governed it during the previous century, then the most likely behavior of the output is continued fluctuation about the same mean, albeit with a gradually increasing uncertainty relative to the 2.3 mK standard deviation. I make no claim about the rate of that increase, which *would* require math–quite serious math I expect, or perhaps a Monte Carlo simulation.

      In this case the black box is the computation GMST – ARF – SF. Very simple, with a mean of zero when all three data are centered. Even if not centered the mean is still constant with σ = 2.3 mK.

      If something happens that is not typical of the past century, such as an asteroid hitting Earth, or a supervolcano with decades of darkening, or some parameter hitting a previously unexperienced limit such as the ocean no longer absorbing CO2, then again one does not need math to see that in that event all bets are off.

      I’m only projecting two degrees if the conditions of the first paragraph continue to hold. I have no projection for when they don’t.

    • @RG: “such analysis gives one too much license to ignore the ocean’s effects of inertia.”

      Agreed. However when projecting only to a neighborhood of 2100, how would you propose taking the thermal inertia of the ocean into account?

      Much of the heat accumulating in the “deep ocean”, which I would take to mean below the oceanic mixed layer, is likely to find its way to Earth’s ice caps and undermine them from below. While it’s doing that job, it is using that heat to melt ice instead of raising temperature anywhere. I therefore don’t see its influence on climate being felt significantly this century, and perhaps not much in the 22nd century either.

  52. Ireneusz Palmowski

    In winter, gravity waves propagate from the mesosphere down to the lower atmospheric layers.

    Ozone sinks because the O3 ozone molecule is very heavy compared to O2 and N2. This is clearly seen as excess ozone falls into the troposphere.

    https://www.cpc.ncep.noaa.gov/products/stratosphere/strat_int/

  53. Pingback: Weekly Local weather and Power Information Roundup #392 – Daily News

  54. 3 C of global warming = 4 C of warming on land = 7 F.

    7 F is quite a bit of warming.

    • So buying Greenland WOULD be a prudent real estate investment.

      • Isn’t this so much easier, ignoring data we don’t like?

        David, I appreciate your seriousness and only make light of the spectacle of your and others’ lack of confidence in the future. You see the problems mankind has stumbled into and I see the amazing resilience of the collective through technological and cultural evolution.

        I am very serious when I propose that mankind should be heavily encouraging technological solutions rather than blocking them in fear that a solution would “let us off the hook.”

    • Why yes, it is easier to ignore uncomfortable data.

  55. Waiting for a reply from Dr. Pratt – so far crickets.

    • @kenfrisch: “so far crickets”

      Unlike Santa I don’t have to deliver to every household in a single day, just to 269 comments (so far). Last time I posted here I got the crickets complaint then too from impatient commenters wanting to cut the line.

      I finally got to your comment this evening, at
      https://judithcurry.com/2019/12/27/two-more-degrees-by-2100/#comment-906457

      Regarding your “Did you do any sensitivity analysis with other filters and other data sources?”, this filter was designed to tune out the subcentennial fluctuations in HadCRUT4 (mainly the 65-year AMO), CO2 (nothing worth filtering out), and TSI (mainly the 11-year solar cycle). I used Ingrid Daubechies’ method of wavelet discovery to see if I could converge to a better filter, but found the simple trapezoid impossible to beat with respect to three of the four criteria I gave in my post. Regarding the fourth criterion, length, a shorter filter was possible, but it was no longer clear what it was doing, so in the end I stuck to the simple trapezoid shape which did the job (tuning out the extant subcentennial “noise”) just fine.

      I didn’t test the filter on different data because I was tuning it to eliminate the subcentennial data at hand. For example had the solar cycle been 9 years instead of 11 my filter would not have worked as well, but this would be easy to fix: just make the sides a bit steeper.
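      The tuning-out claim is easy to check numerically: a whole number of 11-year cycles fits exactly inside the 11-wide box (and likewise 65 inside the 65-wide box), so the trapezoid nulls both cycles to machine precision. A Python sketch with synthetic sinusoids (illustration only):

```python
import numpy as np

# The trapezoidal kernel from curry.m: 11-wide box convolved with 65-wide box
K = np.convolve(np.ones(11) / 11, np.ones(65) / 65)

years = np.arange(300)
residual = {}
for period in (11, 65):
    wave = np.sin(2 * np.pi * years / period)  # pure cycle, unit amplitude
    # A whole number of cycles fits in the matching box, so the sum vanishes
    residual[period] = np.max(np.abs(np.convolve(wave, K, mode='valid')))
```

      A 9-year cycle, by contrast, would leak through this kernel, which is the point about needing steeper sides had the solar cycle been 9 years.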

      Regarding CO2 as a proxy for all anthropogenic forcings, had aerosols shown up in the centennial climate data I’d have had to deal more explicitly with them. Since they didn’t show up, either they aren’t sufficiently significant or they’re inverse to CO2: whenever CO2 goes up aerosols go down. Or maybe there’s yet another explanation for why there’s no sign of them, but I don’t currently have one.

      Hopefully that answers at least some of your questions. Happy to field more if you don’t mind the inevitable delay as I deal with Rudolph’s sore nose and food poisoning from milk left out too long. ;)

  56. Ireneusz Palmowski

    It’s interesting that people pretend they don’t know about changes in solar activity.

  57. Ireneusz Palmowski

    Carbon dioxide is important in warming the lower stratosphere, where it interacts with ozone, during periods of galactic radiation increase. An increase in the temperature of the lower stratosphere leads to surface cooling.
    “Carbon-14 is produced in the upper layers of the troposphere and the stratosphere by thermal neutrons absorbed by nitrogen atoms. When cosmic rays enter the atmosphere, they undergo various transformations, including the production of neutrons. The resulting neutrons (1n) participate in the following n-p reaction:

    n + 14/7N→ 14/6C+ p
    The highest rate of carbon-14 production takes place at altitudes of 9 to 15 km (30,000 to 49,000 ft) and at high geomagnetic latitudes.”
    “After production in the upper atmosphere, the carbon-14 atoms react rapidly to form mostly (about 93%) 14CO (carbon monoxide), which subsequently oxidizes at a slower rate to form 14CO2, radioactive carbon dioxide. The gas mixes rapidly and becomes evenly distributed throughout the atmosphere (the mixing timescale in the order of weeks). Carbon dioxide also dissolves in water and thus permeates the oceans, but at a slower rate.[17] The atmospheric half-life for removal of 14CO2 has been estimated to be roughly 12 to 16 years in the northern hemisphere.”
    https://en.wikipedia.org/wiki/Carbon-14

  58. Rather than respond individually to comments about my use of the term “TCR”, let me address them all as follows.

    In the past I’ve always used “prevailing climate response”, PCR, for the notion I called TCR in this post, so as not to confuse it with the IPCC’s notion, which specifies the case where CO2 rises at 1% a year. And I’ve always been aware that at some point CO2 will drift out of sync with whatever other anthropogenic forcings have helped to raise GMST about a degree thus far.

    Since people seemed to be ignoring the 1% CAGR aspect of the IPCC’s notion of TCR, I decided not to waste time making the case for a separate notion of climate response and just conflate PCR with TCR.

    What I failed to foresee was that people would insist that I’ve proved something about how they understood TCR.

    I claim no such thing. I only claim that climate has the *appearance* of rising 1.85 C with every doubling of CO2.

    I see two ways this can happen.

    1. Other anthropogenic forcings to date have tracked CO2 so well as to make their independent fluctuations invisible.

    2. Other anthropogenic forcings have less influence on rising GMST than people seem to expect.

    In the former case, when the other forcings cease to track CO2 we should be able to see this in the climate record as a decrease in HadCRUT4’s slope below the slope of 1.85*log2(CO2).

    In the latter case my figure of 1.85 applies to both PCR and TCR. Changing ratios won’t change this.

    So far I see two things people have been looking forward to in the near future: the CAGR of ACO2 dropping below 2%, and the other anthropogenic forcings besides CO2 ceasing to grow as fast as CO2.

    I will be thrilled to see any evidence of either. Monitoring log(ACO2) for any significant departure from straightness is straightforward. But how to detect any shift in attribution of climate change to CO2 vs. other influences?

    Let me propose the following. Using my software or its functional equivalent, monitor whether HadCRUT4 continues to rise at the rate 1.85*log_2(CO2). If it does so all the way up to 2100, the answer is that the proxy is working perfectly, making it irrelevant how to apportion attributions of climate change to the various influences on climate. That situation is empirically indistinguishable from CO2 being the only influence.
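    The proposed monitoring is simple enough to sketch. Here with synthetic stand-ins for the downloaded series (illustration only, not the real data; the “drift” statistic is one crude way to flag a departure):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2018)

# Synthetic stand-ins for the downloads: CO2 with an exponentially growing
# anthropogenic excess, and GMST responding at 1.85 C per doubling plus noise.
co2 = 280 + 2 * np.exp((years - 1850) / 40.0)
gmst = 1.85 * np.log2(co2) + rng.normal(0, 0.1, years.size)

def centered(d):
    return d - d.mean()

# The monitored quantity: if the proxy keeps working, this stays flat
residual = centered(gmst) - centered(1.85 * np.log2(co2))
drift = abs(residual[-20:].mean() - residual[:20].mean())
```

    With real HadCRUT4 and Scripps CO2 in place of the synthetic series, a persistent growth in this drift would be exactly the slope drop described above.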

    This is just like log(ACO2) bending down. So far it hasn’t, leaving open the possibility that it will stay straight through the rest of the century.

    If either log(ACO2) bends down or the slope of HadCRUT4 drops below the slope of 1.85*log_2(CO2), I will be thrilled.

    Until then I don’t trust the claimed evidence for a rise of less than two degrees by 2100.

    Someone claimed here that the IPCC had pegged TCR in a similar range to Nic, or something to that effect (can’t find it just now). Here’s the exact statement from Section D.2 of AR5 WG1’s Executive Summary for Policy Makers.
    “The transient climate response is likely in the range of 1.0°C to 2.5°C (high confidence) and extremely unlikely greater than 3°C.”
    Even though my notion of PCR is not the same as the IPCC’s notion of TCR, my figure of 1.85 is only 0.1 degrees above the mean of 1.0 and 2.5.

    • “What I failed to foresee was that people would insist that I’ve proved something about how they understood TCR.”
      There is no way for any individual to “understand” the phrase TCR in different ways, because it’s defined.
      “I only claim that climate has the *appearance* of rising 1.85 C with every doubling of CO2.”
      This is also not the case. From your own calculations you get a TCR of 1.23 (it seems that one can’t give any uncertainty with your approach) when considering the ERF data. Your approach of mixing up all anthropogenic forcing agents will certainly fail in the future, because the warming will be much more CO2-related than at present. Therefore your 1.85 “PCR” or whatever will give too much warming for the future. I don’t know why you don’t consider the forcing data and correct your estimated “PCR” to the only possible (defined) TCR value. A small step for the author, a giant leap for the audience (with apologies to N. Armstrong).

      • “I only claim that climate has the *appearance* of rising 1.85 C with every doubling of CO2.”

        Duh, the oceans are huge carbonated drinks. The vapor pressure of CO2 in the atmosphere is a simple function of the temperature of the water. Open a cold and a warm carbonated drink and see the difference. More CO2 is a natural result from warmer oceans.

    • Judy, I have a comment in moderation and I swear that it doesn’t conflict with the rules of this site. :-)

    • Vaughan Pratt: In the past I’ve always used “prevailing climate response”, PCR, for the notion I called TCR in this post, so as not to confuse it with the IPCC’s notion which specifies that it is for the case when CO2 rises at 1% a year.

      Thank you for your responses to comments, even when indirect like this one.

      • Vaughan Pratt: Right. However as far as “comparables” are concerned, what prompted my post was not L/C’s TCR but JC’s 12/23 post claiming just one more degree by 2100.

        thank you again.

    • VP, your PCR value is close to the TCR values derived from the CMIP5 models – which is approximately 1.8 degrees C. Using the energy budget model, as Lewis and Curry have, with observed climate data, and as you have in your analysis here, a number of investigators have estimated a TCR significantly lower than that from the models. There have been a number of papers attempting to show why this difference is a matter of methodology and unaccounted for effects. Nic Lewis has presented his analyses and criticisms of these papers at this blog and in published articles, and in my view, stands by an observed TCR of around 1.3 degrees C.
      I continue to be unsure of your definition of what you are now calling PCR. In simple language, are you saying that it represents the expected rise in the observed global mean surface temperature (GMST) of 1.85 degrees C per doubling of the observed atmospheric concentration of CO2, ignoring responses from all other forcings? If that is the case it would be a simple matter to look at published data that included the total radiative forcings and that attributed to CO2, and from there scale your PCR to a TCR. To keep this simple I have not noted here the effects to be considered with regard to multi-decadal natural variations in GMST and large volcano eruptions.

      • @kf: “are you saying that it represents the expected rise of the observed global mean surface temperature (GMST) of 1.85 degrees C per the expected doubling of the observed atmospheric concentration of CO2 and ignoring responses from all other forcings.”

        To be clear, I am not *ignoring* other forcings. Rather, I’m assuming that GMST has been responding to all anthropogenic sources at a rate that can be reliably estimated, at least to date, by using CO2 as a proxy for all such sources.

        I have no idea whether CO2 by itself is contributing 10% or 90% of all anthropogenic forcings. However if it were as low as 10%, it would be astonishing if the remaining 90% of non-CO2 forcings had been tracking CO2 so precisely as to give rise to my Figure 6.

        If it were 75% as some seem to be suggesting here, the remaining 25% still needs to track the CO2 sufficiently precisely throughout the period 1900-1980 to account for the precise placement of the blue curve in Figure 6 within the region bounded by 1.7*log₂(CO2) and 2.0*log₂(CO2).

        I would be more than happy to look at the data for whatever you believe that remaining 25% to consist of, to see whether it tracks CO2 with sufficient precision to account for that placement of the blue curve.
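        The kind of tracking check being discussed here can be sketched as a one-parameter least-squares fit of temperature to log₂(CO2). Below is a Python illustration with synthetic, assumed numbers (a 0.3%/yr CO2 rise and a noise-free 1.85 °C-per-doubling response, both invented for the demo; the post itself used MATLAB with real HadCRUT4 and CO2 data):

```python
import numpy as np

# Synthetic demo (assumed numbers): recover the per-doubling coefficient k
# in T = k * log2(CO2 / CO2_ref) by one-parameter least squares.
years = np.arange(1900, 1981)
co2 = 280.0 * 1.003 ** (years - 1900)   # invented smooth exponential rise, ppm
k_true = 1.85                           # degrees C per doubling, for the demo
temp = k_true * np.log2(co2 / 280.0)    # synthetic "observed" warming

x = np.log2(co2 / 280.0)
k_est = np.dot(x, temp) / np.dot(x, x)  # least-squares slope through the origin

print(round(k_est, 2))                  # → 1.85 for this noise-free demo
```

        On real data one would also inspect the residual of such a fit, as the post does, to judge how precisely the remaining forcings track CO2.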

      • Vaughan, the point is that to compare your “TCR” to Lewis and Curry you need to realize that you are using all forcings. You need to scale your value down by about 30% to get a comparable number. TCR is defined in terms of CO2 only.

      • Vaughan, we needn’t guess about other GHGs, like CH4, CFCs, etc. We have the numbers. For recent times, we have direct measurements at places like Mauna Loa, and for earlier times we have them from ice cores.

        The ice core measurements are a nice match to your method, because they’re naturally “smoothed” over similar time windows.

        For instance, consider methane (CH4). Here’s a graph:


        Interactive version: http://www.sealevel.info/co2_and_ch4.html

        In that graph, I wanted the CH4 and CO2 grid lines to line up, so I used the nice, round ratio of 25:1, but that understates the CH4 forcing. 40:1 or 45:1 would have been more accurate.

        The bottom line is that your work confirms Lewis & Curry.

        (BTW, this is timed to be the last comment of the year, here, in the Eastern time zone!)

      • VP: It should come as no surprise that your Fig. 6 gives a good fit with your 1.85, because the GMST rise is the result of all anthropogenic forcing. And it would take you only a few minutes to fill your gap when you state: “I have no idea whether CO2 by itself is contributing 10% or 90% of all anthropogenic forcings.” It’s a pity that you aren’t aware of the well known data (ERF data). It’s my final comment relating to this post.

      • @db: “The bottom line is that your work confirms Lewis & Curry”

        If you mean inferring their TCR from my PCR, fine. However (a) I do not understand how they’re able to estimate TCR from climate data, and (b) regardless of what they think TCR is, in order to project only one more degree of rise after we’ve already had one degree since 1750, their projection needs to depend on at least one of the following:
        1. My Figure 4 “bending down”, i.e. pretty soon ACO2 needs to stop rising at 2% a year; and/or
        2. The other anthropogenic forcings that have helped CO2 raise the temperature by one degree need to advance more slowly from now on than CO2.

        With neither 1 nor 2, we’re in for two more degrees by 2100. So far not a single one of the 332 comments to date has addressed this point; they’ve all depended on 1 or 2 or both.

      • 1. Depends on the emission scenarios shown somewhere.
        2. Depends on aerosols, feedbacks and something other than TSI.

      • Oh and I showed a graph from a recent thesis showing TCR as a pdf.
        Epistemically equivalent to your 65 year window.

      • VP wrote, “in order to project only one more degree of rise after we’ve already had one degree since 1750, their projection needs to depend on at least one of the following:
        1. My Figure 4 “bending down”, i.e. pretty soon ACO2 needs to stop rising at 2% a year; and/or…”

        CO2 gives a logarithmically diminishing climate forcing, so ACO2 rising at 2%/year would get you a CO2 forcing trend which asymptotically approaches linear, even if there weren’t negative feedbacks removing CO2. But they are removing CO2, and they’re doing so at an accelerating pace. Because of those feedbacks, the CO2 forcing trend is already almost perfectly linear, and has been for several decades. You can see that in the log-scale CO2 level plot: notice how straight the trend line has become over the last forty years:

        So, unless ACO2 rises even faster than +2%/year, which would be very surprising, or unless some other forcing accelerates, you should not expect to see an acceleration of the current warming trend.
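        The shape of that forcing trend is easy to check numerically. Here is a hedged Python sketch using the standard simplified expression F = 5.35·ln(C/C0), with assumed numbers (a 280 ppm background plus an anthropogenic excess growing at 2%/yr from roughly today’s level):

```python
import numpy as np

# Assumed model: total CO2 = 280 ppm background + anthropogenic excess
# growing at 2%/yr; forcing from the standard simplified expression.
t = np.arange(0, 201)                      # years from "now"
co2 = 280.0 + 130.0 * 1.02 ** t            # ~410 ppm at t = 0 (assumed)
forcing = 5.35 * np.log(co2 / 280.0)       # W/m^2

# The yearly forcing increment rises toward, but never exceeds, the
# constant 5.35*ln(1.02): the trend asymptotically approaches linear.
increments = np.diff(forcing)
print(round(increments[0], 3), round(increments[-1], 3),
      round(5.35 * np.log(1.02), 3))
```

        The printed increments grow toward the 5.35·ln(1.02) ceiling, which is the “asymptotically approaches linear” behavior described above.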

        One more degree by 2100 would require an average of +0.125 °C per decade. Two more degrees by 2100 would require +0.250 °C per decade. Not even GISS shows a current rate of warming that high.

        Since the 1958 start of the Mauna Loa CO2 measurements, temperatures have risen between 0.4 and 0.9 °C over sixty years, depending on whose temperature index you use. That’s +0.067 to +0.15 °C per decade:

        A continuation of that trend for another eighty years would yield +0.53 °C (UAH) to +1.20 °C (GISS) by 2100.

        Obviously, that gets you nowhere near two degrees, even using GISS.
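        The extrapolation arithmetic above is straightforward to verify in Python (using the stated 0.4 to 0.9 °C rise over the six decades since 1958, projected over the eight remaining decades to 2100):

```python
# Arithmetic check of the extrapolations above.
for label, rise_since_1958 in [("UAH", 0.4), ("GISS", 0.9)]:
    per_decade = rise_since_1958 / 6.0   # six decades, 1958-2018
    by_2100 = per_decade * 8.0           # eight decades, 2020-2100
    print(label, round(per_decade, 3), round(by_2100, 2))
# UAH gives +0.067 per decade and +0.53 by 2100; GISS +0.15 and +1.2,
# matching the figures quoted in this comment.
```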

        However, since temperatures didn’t rise at all during the first 20 years of that 60 year period, you can get 50% higher warming rates simply by cherry-picking a starting date of 1978, instead of 1958. That’s a pretty blatant cherry-pick, but if you do that, it gets the trend up to +0.10 to +0.225 °C per decade.

        A continuation of that trend for another eighty years would yield +0.80 to +1.80 °C by 2100.

        That gets you close with GISS, but still nowhere near 2°C with UAH.

        I also think it’s highly unlikely that ACO2 can continue to increase at +2%/year for another seventy years (i.e., quadrupling by 2090), let alone accelerate even faster than that.

        So, how could we possibly get to +2°C of anthropogenic warming by 2100, even with GISS? It would require an acceleration of some other forcing. I can think of two plausible candidates: CH4 (due to leaks as natural gas use expands), and/or Asian air pollution mitigation.

        Summing up, to get +2°C of anthropogenic warming by 2100 would require:

        1. An improbable continuation of the +2%/yr ACO2 trend for another seventy years or so (i.e., a quadrupling of fossil fuel production). And,

        2. An acceleration of some other forcing, like CH4 and/or air pollution mitigation. And,

        3. Use of a temperature index calculated by climate activists, who are dedicated to “finding” the maximum defensible warming trend.

        #2 is possible. #1 seems unlikely. But #3 might be the most difficult of all. Better measurements, e.g., from the USCRN, make it more challenging to “find” corrections to increase warming, and I expect that sometime this century the current climate hysteria will fizzle out.

      • @dpy6629: ” You need to scale your value down by about 30% to get a comparable number.”

        Right. However as far as “comparables” are concerned, what prompted my post was not L/C’s TCR but JC’s 12/23 post claiming just one more degree by 2100.

        Now if the 30% responsible for the additional temperature is due to anthropogenic forcings that are likely to start dying down soon, but for some reason the CAGR of ACO2 remains stubbornly at 2%, then we’re only in for 2.3*0.7 = 1.6 more degrees.

        If on the other hand those other anthropogenic forcings keep going, then I stick to the title of my post.

  59. Ireneusz Palmowski

    Drought in southern Australia is due to blockage of latitudinal circulation in the south.

    • Please, please, label the axes *before* posting. No excuses about “it’s obvious”, or “just google” or … etc

      This drought is killing people. We need accuracy.

      • Ireneusz Palmowski

        The reason is the rise in temperature in the lower stratosphere above the southern polar circle and the increase in pressure. Now the temperature is back to normal.

      • Ireneusz Palmowski

        Around January 3 a low should be operating over southern Australia.

  60. Ireneusz Palmowski

    Let’s look at the distribution of ozone in the central stratosphere in the north.

  61. Vaughan Pratt ==> It worries me that such calculations are being made as if the GAST numbers before the 1920s (pre-WWI) are accurate to any scientific standard. Personally, I have serious doubts about pre-WWII global numbers.

    I think we can reliably say that the climate has generally (not uniformly) warmed since the end of the LIA (whatever date one wishes to assign) — and we might be able to distinguish and maybe quantify some warming since the middle of the 1970s — but only really from the satellite data.

    • @KH: “I have serious doubts about pre-WWII global numbers.”

      Very understandable. However if those numbers were as unreliable as you claim then how do you account for the fact that the start of the blue curve in Figure 6 manages to stay so reliably between the 1.7 and 2.0 TCR hypotheses for CO2?

      Just because there are *relatively* few numbers compared to today is not by itself evidence that there are *too* few numbers from that time. Figure 6 would appear to support the claim that the data since 1850, however sparse, is still sufficient to support a meaningful analysis.

      • Vaughan ==> Personally, I think that this might be a bit of circular mathematics. Temperature records (reliable or not) were used to derive the TCR estimates, and now the TCR estimate is used to support the temperature record?

        I have no particular opinion on the TCR issue except that climateers have generally pegged it way too high.

        Mine is just a general comment on the reliability of the global temperature record that early. Don’t mean to pick a fight.

      • Kip
        The curve fits between CO2 warming residual and solar influence are most striking at the beginning and the end of the studied period of just over a century.

        However both curves are strongly smoothed with a 75 year kernel that – presumably – extends both forward and back in time from any given time point.

        Therefore, is it possible that the apparent fits between the CO2-derived and the solar-derived curves are artefacts resulting from both smoothed curves being subject to the same truncation artefact at both the start and the end of the analysed time period?

      • @KH: “Personally, I think that this might be a bit of circular mathematics. Temperature records (reliable or not) were used to derive the TCR estimates, and now the TCR estimate is used to support the temperature record?”

        Circular mathematics. Now where have I heard that complaint before?

        Clearly CE is where to post if you want to get that complaint. Mike Jonas lodged it on CE way back on 12/04/2012 [1]. He was so deeply offended that he made an entire WUWT guest post about it a few days later [2]. He dismissed my protest that there was nothing circular about least squares fitting as “a long rambling point-avoiding response” and “an exercise in obdurate stupidity”, adding that “nonsense dressed up in complicated technical language is still nonsense.” He demanded “a formal retraction of [my AGU12] poster.” (Searching for the word “retract” in the 2,430 comments on my post turned up 67 hits further to Mike’s initial “demand”.)

        The following graph refutes this claim of circularity for my Figure 6 above in a different way than merely pointing out that curve fitting isn’t “circular”.

        It appeared in my comment [6] as part of my post [5] supporting JC’s post [4] criticizing Shaun Lovejoy’s “Climate Closure” article in EOS on 20/10/2015, which fitted a trend line to estimate what Lovejoy called “climate sensitivity” as 2.33°C per CO2 doubling [3].

        My graph estimated PCR based on CO2 and HadCRUT4 since 1944, on the ground that there is more data after 1944 and also a better signal-to-noise ratio when CO2 is the signal. (This is the methodology suggested this past week by some commenters on the present post.)

        Initially I was puzzled as to why HadCRUT4 rose faster than log2(CO2) prior to 1944, though the turn-up in HadCRUT4 at far left was “clearly” nothing but noise due to the sparseness of the data back then. Had I tried to fit log2(CO2) to the whole of HadCRUT4 (a) I would have got a PCR closer to Lovejoy’s 2.33, and (b) the R2 (goodness of fit) would have been less.

        (The figure of 1.67 for the fit in that graph is lower than 1.85 because the x-axis is logarithmic in CO2, following Lovejoy’s approach, which I liked at first. Unfortunately, as I later realized, this creates an artificial sparseness on the right that exaggerates the effect of ignoring the Sun’s contribution on the left, which accounts for a significant part of Lovejoy’s 2.33. These days I plot the x-axis linearly with time to avoid that artifact.)

        But then it occurred to me that the first half of the 20th century was when the Sun had warmed. Detrending HadCRUT4 by TSI improved the fit considerably. It also removed that “turn-up” at the far left, which would appear to be due not to sparseness noise as I had previously thought but instead to the grand solar minimum at around 1905.

        Figure 6’s blue curve is still slightly more wobbly on the left than on the right, but nowhere near as wobbly as it was before TSI was brought in to explain its previously much greater variation. It still seems reasonable to attribute the remaining wobble to sparseness, but the TSI explanation has greatly reduced it.

        I don’t see any circularity here.
        ———————————————————–
        REFERENCES.

        Who on earth has the patience to follow all the links people love posting on CE?

        But if you really care, keep http://clim8.stanford.edu/refs/ open in a separate window and click on [1], [2], [3] etc when they come up in this comment. This has the advantage that clicking on a reference won’t make my comment disappear while reading the reference.

    • Funny you have doubts you can’t quantify. Not exactly science.

      • Steven Mosher: Funny you have doubts you can’t quantify. Not exactly science.

        ……

        illogical.

        It’s a matter of the total amount of evidence. It is quite reasonable to think that increased tropospheric CO2 will raise tropospheric temperature, and that increased stratospheric CO2 will lower stratospheric temperature, while having doubts about the estimates of the magnitudes of those effects.

        The “consensus” is like that: Earth has warmed since the late 1800s, some of the warming has been caused by humans, some of the human effect is due to anthropogenic CO2, but the 95% confidence interval on the climate sensitivity (either transient or immediate or equilibrium — take your pick) is very wide, representing realistic and scientific doubt as to its value.

    • “I think we can reliably say that the climate has generally (not uniformly) warmed since the end of the LIA (whatever date one wishes to assign) ”

      X is the average global temperature at the end of the LIA.
      Y is the average global temperature today.

      Your claim is that we can reliably say Y>X

      How much? and how did you estimate X?

      you have doubts about prior to WWII. hint the LIA ended prior to WWII
      so whatever X is, you have doubts about it. But no doubts about Y being greater than X.

      illogical.

      Start with your evidence for what you think X was?

      Instruments? Proxies? why do you accept these
      Locations? where do you have these estimates? how many? is that enough?

      • Mosher ==> I think you understand this perfectly well.

        After all, I am just restating your famous quip on the topic — with which I agree.

        It is the quantification that is the subject of the doubt — the uncertainty.

        We are, generally, fairly confident of the SIGN of the change since the end of the LIA. Masses of evidence for that.

        What we don’t have good solid scientific evidence for is the quantification — on the scale of single digit degrees C — as the early records are spatially and temporally spotty and known to be imprecise (accuracy of early thermometers).

        You, of all people, are very aware of these issues.

        It is pretense to attempt to use the early thermometer record as if it were accurate to single degrees C — and folly to use it at fractional degrees.

      • Steven Mosher

        Kip, you avoided my simple questions.
        Next, you are wrong about precision, like most skeptics.

        1. You claim it is spatially spotty. Please quantify HOW many are required, and how you determined this. For example, Kevin Cowtan has done this with math; you wave your arms.
        2. Temporal lacunae. You think it’s a problem. It’s not.
        3. Precision.

        The easiest way to show precision is not a problem is to add a randomly distributed 6 C of error to all the measurements and calculate the trend again. The trend doesn’t change.
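        That claim can be demonstrated with synthetic data. Here is a Python sketch, with all numbers invented for the demo: a 0.01 °C/yr trend, 1000 hypothetical stations, and uniform random error spanning 6 °C:

```python
import numpy as np

# Synthetic demo: a known trend plus large independent measurement error.
rng = np.random.default_rng(0)
years = np.arange(1900, 2020)
clean = 0.01 * (years - years[0])        # assumed 0.01 C/yr trend

# 1000 hypothetical stations, each with uniform error in [-3, 3] C
# (a 6 C spread), averaged into one global series.
noisy = clean + rng.uniform(-3.0, 3.0, size=(1000, years.size))
mean_series = noisy.mean(axis=0)

slope_clean = np.polyfit(years, clean, 1)[0]
slope_noisy = np.polyfit(years, mean_series, 1)[0]
print(round(slope_clean, 4), round(slope_noisy, 4))
# The two slopes agree closely: unbiased random error averages out
# and leaves the fitted trend essentially unchanged.
```

        Note this demonstrates precision, not accuracy: a systematic (biased) error would not average out, which is where station metadata and homogenization come in.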

  62. If I am correct, we have actual readings of the Vostok ice core from the beginning of the new ice age. We also have the depth of the oceans below present levels for the same period. This gives CO2, the height of the new ice, and the ocean height below present, all for the same period. It would be interesting to see these 3 graphed on the same timeline graph.

    • @RC: ” It would be interesting to see these 3 graphed on the same timeline graph.”

      Very interesting as it would give some idea of what’s been going on at multimillennial time scales.

      However at centennial time scales covering recent centuries, with subcentennial fluctuations carefully tuned out, I see no reason why centennial climate responses should be similar to multimillennial responses, any more than the physics of the FM band resembles that of the AM band two decades (6.6 octaves) lower down.

  63. Ireneusz Palmowski

    In periods of low solar activity, ionization of the lower stratosphere increases, which leads to an increase in the temperature of the lower stratosphere in high latitudes and cooling of the surface in winter at medium latitudes.
    “Carbon-14 is produced in the upper layers of the troposphere and the stratosphere by thermal neutrons absorbed by nitrogen atoms. When cosmic rays enter the atmosphere, they undergo various transformations, including the production of neutrons.
    The highest rate of carbon-14 production takes place at altitudes of 9 to 15 km (30,000 to 49,000 ft) and at high geomagnetic latitudes.
    Production rates vary because of changes to the cosmic ray flux caused by the heliospheric modulation (solar wind and solar magnetic field), and due to variations in the Earth’s magnetic field.
    The atmospheric half-life for removal of 14CO2 has been estimated to be roughly 12 to 16 years in the northern hemisphere. ”
    https://en.wikipedia.org/wiki/Carbon-14

  64. VP
    I love your description of the AMO as “bothersome” 😂

    It’s not surprising that the ocean is bothersome to efforts to analyse climate based on atmospheric and solar factors only, ignoring the said billion or so cubic km of seawater, in constant flow.

    “Will someone rid me of this turbulent beast?”

    • @ps: “Will someone rid me of this turbulent beast?”

      Indeed.

      I assigned that task to the 65-year component of my 11-65 year trapezoidal filter. Had the AMO been a perfect sine wave of period 65 years, and had the period under analysis been an integer multiple of 65 years, the filter would have removed 100% of the AMO.
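      The box-filter half of that claim is easy to illustrate. A Python sketch with an idealized 65-year sine standing in for the AMO (the actual filter in the post was an 11-65 year trapezoidal one applied to real data):

```python
import numpy as np

# A moving-average (box) filter whose width equals the period of a sine
# removes it entirely: every window averages over exactly one full cycle.
period = 65
t = np.arange(10 * period)                   # ten full "AMO" cycles
amo_like = np.sin(2 * np.pi * t / period)    # idealized oscillation

kernel = np.ones(period) / period            # 65-point box filter
filtered = np.convolve(amo_like, kernel, mode="valid")

print(float(np.max(np.abs(filtered))) < 1e-9)   # True: nothing survives
```

      A decaying oscillation, by contrast, is not balanced over any single window, so a residue survives the filter — the caveat noted below for the CMB hypothesis.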

      I’ve had several AGU posters about the AMO, none even mentioning “stadium waves”. Some people think the AMO is a decaying oscillation initially triggered by a massive slip at the CMB around 1870. A 65-year box filter only removes most of a decaying oscillation, but the part not removed isn’t clearly visible in my Figure 6.

    • Incidentally if the origin of the AMO is below the ocean floor, one might ask, why isn’t it also seen on land?

      The continental crust is both thicker and less dense than the oceanic crust, making the former a better thermal insulator than the latter. One would therefore expect thermal fluctuations originating from deeper down to be more visible in the ocean than on land. Evidence for this was part of my AGU14 talk in the “400 ppm” SWIRL session.

    • Much more likely to originate in patterns of globally coupled spatio-temporal ocean and atmospheric circulation.


      https://www.ocean-sci.net/10/29/2014

      Perhaps modulated by solar variability.

      Stadium wave not mentioned by AGU posters? Is that the best that you can do? You need to get out more Vaughan.

  65. Would all the papers recounting the deleterious effects of temperature rise then also be viewed through this 75 year window, rather than through momentary events like the summer fires in Australia and droughts in California? Not sure what trends would be left other than people losing jobs, since no deleterious effect could be shown.

    • debbee

      you should know by now that one day of warm weather or extremes is climate, whereas it takes 30 years of cold weather to achieve that same status.

      tonyb

  66. Those of you who are using a 2/3 ratio of CO2 to total forcing are not considering the negative forcing of the aerosol direct and indirect effects.

    • @kf: “Those of you who are using a 2/3 ratio of CO2 to total forcing are not considering the negative forcing of the aerosol direct and indirect effects.”

      Excellent point.

      My centennial filter was designed to eliminate all subcentennial fluctuations, aerosols included. I have no idea what ratio the IPCC or anyone else would use when considering only centennial fluctuations in anthropogenic forcings. However the residual in my Figure 2 would appear to indicate that, whatever the ratio might have been in 1880, it had not changed significantly by 1980.

  67. “At the transition from the 19th to the 20th century, a sharp decline in annual Nile runoff occurred, which initially was attributed erroneously to the impact of the building of the old Aswan Dam on the discharge rating curve. However, Todini and O’Connell (1979) showed that this decline was linked with a sudden widespread failure of the monsoon in the tropical regions (Kraus 1956). This may reflect some sort of tipping point in the climate system whereby a sudden and largely unexplained transition to a new climatic regime occurs, which can be taken to correspond to a stochastic shifting mean representation of change.” https://www.tandfonline.com/doi/full/10.1080/02626667.2015.1125998

    From the sublime to the ridiculous. The CERES products – btw – I have not been able to update since the Trump government shutdown.

    “This study examines changes in Earth’s energy budget during and after the global warming “pause” (or “hiatus”) using observations from the Clouds and the Earth’s Radiant Energy System. We find a marked 0.83 ± 0.41 Wm−2 reduction in global mean reflected shortwave (SW) top-of-atmosphere (TOA) flux during the three years following the hiatus that results in an increase in net energy into the climate system. A partial radiative perturbation analysis reveals that decreases in low cloud cover are the primary driver of the decrease in SW TOA flux. The regional distribution of the SW TOA flux changes associated with the decreases in low cloud cover closely matches that of sea-surface temperature warming, which shows a pattern typical of the positive phase of the Pacific Decadal Oscillation.” https://www.mdpi.com/2225-1154/6/3/62


    The CERES product shows an energy dynamic at TOA dominated by changes in low level cloud cover. It is a robust power flux forcing of the system, involving marine boundary layer stratocumulus cloud that cumulatively adds more or less energy to the oceans. And that warmed the planet in the 20th century. How long the state of reduced upwelling – a departure from the millennial mean – in the eastern Pacific will persist is the question.

    • “The Earth’s albedo (or reflectance) is defined as the fraction of solar radiation that is reflected back to space through the top of the atmosphere (TOA). The global albedo value is ∼0.3 in the visible range, and its evolution is controlled by changes in the type and amount of clouds [Cess and Udelhofen, 2003; Ramanathan et al., 2001], the ice/snow cover, and by any changes in continental surface reflectance [Randall et al., 1994]. Thus, it is a fundamental regulator of the energy budget of the planet [Stephens et al., 2015].” https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016GL068025

      As changes in incident TSI are negligible – one must look for indirect solar effects to explain apparent correlations with the MWP and the LIA. Or indeed with early 20th century warming.

      • Robert I Ellison January 1, 2020 at 10:12 pm :

        Thanks for the link.

        Here’s more.

        Abstract The Earth’s albedo is a fundamental climate parameter for understanding the radiation budget of the atmosphere. It has been traditionally measured not only from space platforms but also from the ground for 16 years from Big Bear Solar Observatory by observing the Moon. The photometric ratio of the dark (earthshine) to the bright (moonshine) sides of the Moon is used to determine nightly anomalies in the terrestrial albedo, with the aim of quantifying sustained monthly, annual, and/or decadal changes. We find two modest decadal scale cycles in the albedo, but with no significant net change over the 16 years of accumulated data. Within the evolution of the two cycles, we find periods of sustained annual increases, followed by comparable sustained decreases in albedo. The evolution of the earthshine albedo is in remarkable agreement with that from the Clouds and the Earth’s Radiant Energy System instruments, although each method measures different slices of the Earth’s Bond albedo.

        Oscillations in albedo look like a story worth following.

      • Have a purpose for quotes. The purpose of cutting and pasting the abstract of the paper I linked to escapes me.

        An albedo of 0.3 allows quantification in SW energy terms of δp*.

      • Robert I Ellison: The purpose of cutting and pasting the abstract of the paper I linked to escapes me.

        So it did.

      • icymi,

        Oscillations in albedo look like a story worth following.

      • My reaction to that the first time was ho hum. Anything else of such great interest?

      • Robert I Ellison: My reaction to that the first time was ho hum.

        OK, you are not interested.

        There are some obvious followup questions:

        (1) can effects of the oscillation in albedo be detected in changes in Earth temperature? Other weather measurements?

        (2) what accounts for their quasi-periodicity?

        (3) are they spatially and temporally heterogeneous?

        (4) are the authors writing proposals for grants to study these and other related questions? If so, will the proposals receive funding?

        I expect the answers to (4) to be Yes, and Yes.

      • Robert I Ellison: I quite obviously had an interannual to centennial context.

        What you wrote was that you had no interest in the oscillation, only in the mean value over that study interval.

      • “Since irradiance variations are apparently minimal, changes in the Earth’s climate that seem to be associated with changes in the level of solar activity—the Maunder Minimum and the Little Ice age for example—
        would then seem to be due to terrestrial responses to more subtle changes in the Sun’s spectrum of radiative output. This leads naturally to a linkage with terrestrial reflectance, the second component of the net sunlight, as the carrier of the terrestrial amplification of the Sun’s varying output.”
        http://bbso.njit.edu/Research/EarthShine/literature/Goode_Palle_2007_JASTP.pdf

  68. It would be interesting to apply VP’s method to the parallel temperature and CO2 data below from Greenland over the Holocene, and see what TCR emerges.

    • That plot of CO2 and Greenland Temperature is all we need to know. CO2 change followed Temperature change for thousands of years, but seven thousand years ago CO2 started going up without a Temperature increase, and recently the much faster CO2 increase has not had a corresponding Temperature increase. They make a big deal about a few decades of correlation and ignore thousands of years without correlation.

    • Checking the data (mainly out of curiosity), I compared it to earlier comparisons I had made from various sources for very different reasons. What had been noticed from very early on was that data from Gisp2 was out of sync chronologically with data from Vostok (Vostok and Kilimanjaro are in sync). My reason for the checks was to find alignment to events, which events seem to influence trends in both temperature and CO2 production. The latter may be linked to the C14 trend based on dendrochronology (IntCal13).
      Link here: https://melitamegalithic.wordpress.com/2017/12/15/comparing-proxies/
      The lower figure is confusing (sorry for that) but is intended mainly to show how the added data, when shifted chronologically, matched the rest, a sort of proof that the chronological datums of the various proxies out there need to be checked.

    • RIE linked paper above has interesting/noteworthy snippets:
      — millennial-scale temperature changes of the north and south Polar Regions were coupled and synchronized.
      — new insights into the dynamic processes that link Greenland’s Dansgaard-Oeschger (DO) abrupt temperature fluctuations to Antarctic temperature variability.
      — Detection of synchronization among paleoclimate time series can likewise help explain hitherto incompletely understood processes involving the occurrence and timing of abrupt climatic change.
      — These records exhibit large, sudden warming events — the transition to the high temperature state in the DO occurs in human timescales (years or decades), and hence the urgency in understanding their origin and dynamics.

      Another paper here https://www.ncbi.nlm.nih.gov/pmc/articles/PMC34297/pdf/pq001331.pdf With ref to chronological sync: “– Byrd and Vostok also contain indications of events that may be correlative to nearly all of the Greenland events . However, the ice isotopes indicate an anti-phase behavior, with Byrd warm during the major events when Greenland was cold; dating control is not good enough to determine the phase of the smaller events —- To further complicate the issue, the Taylor Dome core from a near-coastal site in East Antarctica appears to be in-phase with Greenland and out-of-phase with Byrd during the de-glacial interval centered on the Younger Dryas — Some Southern Hemisphere sites also exhibit the Greenland pattern during the deglaciation,— However, southern sites near and downwind of the south Atlantic show an anti-Greenland pattern ”

      Some confusion there. My aim was to detect evidence of antiphase of Polar versus Equatorial. Vostok and Kilimanjaro show that antiphase at the critical dates. Gisp2 corresponds to Vostok – if it is chronologically sync’ed with the other two at the end of the YD.

    • @PS: “It would be interesting to apply VP’s method to the parallel temperature and CO2 data below from Greenland over the Holocene”

      This is like suggesting using a receiver tuned to the 5 GHz band to analyze AM signals at 540 kHz.

      It wouldn’t work. When making such a dramatic move across any spectrum you have to use completely different hardware.

      Or software, when using https://en.wikipedia.org/wiki/Software-defined_radio, which is basically what my filter corresponds to. You need a completely different filter tuned to a completely different purpose.

  69. “These reanalyses assimilate only surface observations of synoptic pressure into NOAA’s Global Forecast System and prescribe sea surface temperature and sea ice distribution in order to estimate e.g., temperature, pressure, winds, moisture, solar radiation and clouds, from the surface to the top of the atmosphere throughout the 19th and 20th centuries.” https://www.esrl.noaa.gov/psd/data/20thC_Rean/

    It gives a remarkably detailed – and ground-truthed – picture of the evolution of the global stadium wave since the 19th century. It can be compared with tuned models, which are found to lack verisimilitude. Curve fitting using overly simple assumptions seems well beyond the pale.

    “The global-mean temperature trends associated with GSW are as large as 0.3 °C per 40 years, and so are capable of doubling, nullifying or even reversing the forced global warming trends on that timescale.” https://www.nature.com/articles/s41612-018-0044-6

    Unless a case can be made that this cancelled out over the 20th century, there is unexplained warming. This has been traced to positive SST/cloud-cover feedback in which the upwelling region of the Pacific Ocean features. That changes over decades to millennia, with a relevant transition to less upwelling and warmer SST at the beginning of the 20th century.

  70. Direct solar forcing of climate as an explanation for early 20th century warming is impossible. Likewise the quantified effects of early industrialization on surface temperature since the LIA, or sulphate cooling as an explanation of cooling in the middle of the last century. Collectively, a grasping at straws.

    “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” 😊

    It is evident that ENSO varies at decadal to millennial scales in sync with the state of the Pacific more generally.

    https://www.nature.com/articles/nature01194

    “ENSO causes climate extremes across and beyond the Pacific basin; however, evidence of ENSO at high southern latitudes is generally restricted to the South Pacific and West Antarctica. Here, the authors report a statistically significant link between ENSO and sea salt deposition during summer from the Law Dome (LD) ice core in East Antarctica. ENSO-related atmospheric anomalies from the central-western equatorial Pacific (CWEP) propagate to the South Pacific and the circumpolar high latitudes. These anomalies modulate high-latitude zonal winds, with El Niño (La Niña) conditions causing reduced (enhanced) zonal wind speeds and subsequent reduced (enhanced) summer sea salt deposition at LD. Over the last 1010 yr, the LD summer sea salt (LDSSS) record has exhibited two below-average (El Niño–like) epochs, 1000–1260 ad and 1920–2009 ad, and a longer above-average (La Niña–like) epoch from 1260 to 1860 ad. Spectral analysis shows the below-average epochs are associated with enhanced ENSO-like variability around 2–5 yr, while the above-average epoch is associated more with variability around 6–7 yr. The LDSSS record is also significantly correlated with annual rainfall in eastern mainland Australia. While the correlation displays decadal-scale variability similar to changes in the interdecadal Pacific oscillation (IPO), the LDSSS record suggests rainfall in the modern instrumental era (1910–2009 ad) is below the long-term average. In addition, recent rainfall declines in some regions of eastern and southeastern Australia appear to be mirrored by a downward trend in the LDSSS record, suggesting current rainfall regimes are unusual though not unknown over the last millennium.” https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00003.1

  71. Other work calls, obliging me to sign off at this point. I will try to check back here on a weekly basis, but if you want to draw my attention to a comment more quickly, email me at the address on my home page.

  72. I made some time to test the Pratt filter method used in this thread against two trend decomposition methods: Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Singular Spectrum Analysis (SSA).

    I used the HadCRUT temperature set that Pratt used in his filter method and that Nic Lewis used in Lewis and Curry (2018) for his determination of TCR and ECS with updated forcing and ocean heat data. Forcing data used in my comparisons were those used in Lewis and Curry (2018). I used the ma(ma(x,11),65) Pratt filter in R with centering. The trends were taken from the 1850-2016 time period.
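    The cascaded moving-average filter named here can be sketched in Python (a sketch under assumptions: NumPy in place of R's ma() with centering, and synthetic data in place of HadCRUT):

```python
import numpy as np

def centered_ma(x, w):
    # Centered moving average of odd width w; as with R's ma(x, w)
    # with centering, the (w - 1)/2 samples at each end are lost.
    return np.convolve(x, np.ones(w) / w, mode="valid")

def pratt_filter(x):
    # ma(ma(x, 11), 65): an 11-point pass followed by a 65-point pass.
    # Annual data for 1850-2016 loses (11-1)/2 + (65-1)/2 = 37 years
    # at each end, which is why the trend runs 1887-1979.
    return centered_ma(centered_ma(np.asarray(x, dtype=float), 11), 65)

# Synthetic check: a linear trend passes through unchanged (up to the
# trimmed ends), while a short-period wiggle is almost entirely removed.
years = np.arange(1850, 2017)                       # 167 annual values
trend = 0.005 * (years - 1850)                      # 0.5 K/century ramp
wiggle = 0.1 * np.sin(2 * np.pi * (years - 1850) / 3.5)  # ENSO-ish
smooth = pratt_filter(trend + wiggle)               # 93 values, 1887-1979
```

    On this synthetic input the smoothed output starts at 0.005 × 37 = 0.185 K (the trend value at 1887) and ends at 0.645 K (1979), with the 3.5-year wiggle attenuated by roughly three orders of magnitude.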

    To assure that I was using the same data and methods as in the Lewis paper, I also verified the following TCR values from the Lewis paper with the end-point averaging periods noted below. I included PCR values, where PCR is:

    PCR = F2XCO2*ΔT/ΔF, where ΔT is the change in global mean surface temperature, F2XCO2 = 3.7 as used in the Lewis paper, and ΔF for PCR is the change in CO2 forcing (for TCR, ΔF is the change in total forcing).
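    As a numeric illustration of the two ratios (a Python sketch; the ΔT and ΔF values are hypothetical round numbers, not taken from Lewis and Curry (2018), chosen only so the ratios land near the Lewis-method rows below):

```python
# PCR = F2XCO2 * dT / dF_co2; TCR uses total forcing in the denominator.
F2XCO2 = 3.7      # W/m^2, as in the Lewis paper
dT = 0.8          # K, hypothetical change in GMST between periods
dF_total = 2.47   # W/m^2, hypothetical change in total forcing
dF_co2 = 1.64     # W/m^2, hypothetical change in CO2-only forcing

TCR = F2XCO2 * dT / dF_total   # ~1.20 K
PCR = F2XCO2 * dT / dF_co2     # ~1.80 K
# PCR exceeds TCR exactly when the CO2-only forcing change is smaller
# than the total forcing change.
```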

    For Lewis method:
    Start=1869-1882, End=2007-2016 TCR=1.20, PCR=1.80
    Start=1869-1882, End=1995-2016 TCR=1.22, PCR=1.83
    Start=1850-1900, End=1980-2016 TCR=1.23, PCR=1.89
    Start=1930-1950, End=2007-2016 TCR=1.20, PCR=1.79

    For CEEMDAN method:
    Start=1850, End=2016 TCR=1.33, PCR=2.00

    For SSA method:
    Start=1850, End=2016 TCR=1.21, PCR=1.85

    For Pratt method:
    Start=1887, End=1979 TCR=1.33, PCR=2.08

    • Hi Ken. Did you take into account that my filter removes essentially all natural effects save those attributable to the Sun? If you ignore the influence of the Sun, you’ll be including solar fluctuations in what you’re calling PCR; that is why you’re seeing PCR=2.08 with my method, which is too high.

      Ignoring the Sun accounts for half of Shaun Lovejoy’s overestimate of what he called climate sensitivity. The other half of his overestimate is an artifact of his taking the x-axis to consist of the log of the annual CO2 concentrations, which results in the left half of his graph having a lot more than just half of his datapoints. This further increases his overestimate.
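      The point-density artifact can be illustrated with synthetic numbers (a Python sketch; the quadratic CO2 path below is a hypothetical stand-in, not Lovejoy's series):

```python
import numpy as np

# Hypothetical accelerating CO2 path for 1850-2015: ~285 -> ~394 ppm.
years = np.arange(1850, 2016)
co2 = 285.0 + 0.004 * (years - 1850) ** 2

x = np.log(co2)                       # annual points on a log-CO2 axis
midpoint = 0.5 * (x.min() + x.max())  # middle of the plotted log range
frac_left = np.mean(x < midpoint)     # share of points in the left half
# Because CO2 grew slowly at first, about two thirds of the annual
# points crowd into the left half of the axis, weighting any fit toward
# the early data.
```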

      • Vaughan, I have to assume you are referring to the GMST response to forcing and not the forcing itself, as the forcing is straightforward. I would suggest that you run some simulations with known inputs and noise with your filter. I have done that with the CEEMDAN, SSA and end-point averaging methods. While no method of this kind is going to be exact in extracting the secular trend of interest, some can come close with narrow distributions.

    • Regarding Lovejoy, I should add that the two effects I mentioned gave him a PCR (although he called it “climate sensitivity”) of 2.33. Not taking the Sun into account with his trend-fitting method added about 0.22 to his result, which the logarithmic x-axis then doubled to about 0.44. (Had he taken the Sun into account, the log x-axis would have had little if anything to double and he’d have gotten a PCR of around 1.9.)

    • Hi Ken, IMO you should take an ERF change of 3.8 W/m² for a doubling of CO2 if you use the forcing data of J/C 18. It was slightly corrected upward when the latest ERF CH4 data were taken into account.

      • Frank, then all values of TCR and PCR go up by a factor of 1.027 and the relative differences of the methods stay much the same.

        Did you mean Lewis and Curry 2018 by your reference to J/C 18? The Lewis paper used 3.7 for F2XCO2 or at least that value was consistent with my calculations. They also used the updated conversions of CO2, CH4 and N2O concentrations to forcings.

      • Frank, I used the formulas as referenced in Lewis and Curry (2018) that appear in Table 1 of the paper Etminan et al. (2016). The CO2 forcing depends on a change to the 5.36 coefficient used in the conversion equation 5.36*ln(C/Co). Those changes are by my calculations quite small relative to 5.36. The coefficient depends on the mean concentration of N2O in ppb, which subtracts from 5.36, and on two further terms in the increase of the CO2 concentration in ppm: one in the square of that increase (subtracts) and one in the increase itself (adds). The coefficients of these terms are very small.

        Since the forcing from a doubling of CO2 depends on the N2O concentration, I assumed that N2O doubles along with CO2. From that assumption I arrived at 3.71 watts per square meter for the value of F2XCO2.
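        The Table 1 expression described above can be sketched as follows (a Python sketch; the coefficients are the values commonly cited for Etminan et al. (2016) and the concentrations are illustrative, so the result differs slightly from the 3.71 above):

```python
import math

def co2_forcing(C, C0, n2o_mean):
    # Etminan et al. (2016), Table 1: the fixed 5.36 of the old
    # 5.36*ln(C/C0) expression becomes concentration-dependent.
    # Coefficients as commonly cited; verify against the paper.
    a1 = -2.4e-7   # W m-2 ppm-2, square of the CO2 increase
    b1 = 7.2e-4    # W m-2 ppm-1, the CO2 increase itself
    c1 = -2.1e-4   # W m-2 ppb-1, mean N2O concentration
    coeff = a1 * (C - C0) ** 2 + b1 * abs(C - C0) + c1 * n2o_mean + 5.36
    return coeff * math.log(C / C0)

# Illustrative doubling from 278 ppm, with N2O also doubling from
# ~273 ppb so that its mean is ~409 ppb, per the assumption above.
F2x = co2_forcing(556.0, 278.0, 409.0)
# The corrections to 5.36 are small, so F2x stays near 5.36*ln(2) ~ 3.7.
```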

  73. My problem is that neither he nor Dr. Curry addresses the important issue at hand: CAGW, Catastrophic Anthropogenic Global Warming. The alarmists have been screeching at us for a couple of decades that we are all doomed if the average global temperature rises more than 2 degrees centigrade above what it was in pre-industrial times. That is what is important: is the earth and mankind doomed to catastrophe if the average global temperature rises 0.5 or 0.9 degrees centigrade in the next eight decades? If we are not all doomed, then what is all the fuss about?
