What’s the worst case? Emissions/concentration scenarios

by Judith Curry

Is the RCP8.5 scenario plausible?

This post is Part II in the possibility series (for an explanation of the possibilistic approach, see previous post link).  This post also follows up on a recent series of posts about RCP8.5 [link].

3. Scenarios of emissions/concentration

Most worst-case climate outcomes are associated with climate model simulations that are driven by the RCP8.5 representative concentration pathway (or equivalent scenarios in terms of radiative forcing). No attempt has been made to assign probabilities or likelihoods to the various emissions/concentration pathways (e.g. van Vuuren et al. 2011), based on the argument that the pathways are related to future policy decisions and technological possibilities that are considered to be currently unknown.

The RCP8.5 scenario was designed to be a baseline scenario that assumes no greenhouse gas mitigation and no impacts of climate change on society. This scenario family targets a radiative forcing of 8.5 W m-2 from anthropogenic drivers by 2100, which is nominally associated with an atmospheric CO2 concentration of 936 ppm (Riahi et al. 2007). Since the scenario outcome is already specified (8.5 W m-2), the salient issue is whether plausible storylines can be formulated to produce the specified outcome associated with RCP8.5.
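
For rough orientation, the CO2-only share of that target can be checked with the widely used simplified forcing expression of Myhre et al. (1998), dF = 5.35 ln(C/C0); the remainder of the 8.5 W m-2 comes from non-CO2 anthropogenic drivers. A minimal sketch (the 278 ppm pre-industrial baseline here is an assumption for illustration):

```python
import math

def co2_forcing(ppm, ppm_preindustrial=278.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W m-2."""
    return 5.35 * math.log(ppm / ppm_preindustrial)

print(round(co2_forcing(936.0), 2))  # CO2-only forcing at 936 ppm, roughly 6.5 W m-2
```

At 936 ppm this gives roughly 6.5 W m-2 from CO2 alone, which is why the scenario also relies on substantial non-CO2 forcing to reach the 8.5 W m-2 target.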

A number of different pathways can be formulated to reach RCP8.5, using different combinations of economic, technological, demographic, policy, and institutional futures. These scenarios generally include very high population growth, very high energy intensity of the economy, low technology development, and a very high level of coal in the energy mix. Van Vuuren et al. (2011) report that RCP8.5 leads to a forcing level near the 90th percentile for the baseline scenarios, but a literature review at that time was still able to identify around 40 storylines with a similar forcing level.

Storylines for the RCP8.5 scenario and its equivalents have been revised with time as our background knowledge changes. To account for lower estimates of future world population growth and much lower outlooks for emissions of non-CO2 gases, more CO2 must be released to the atmosphere to reach 8.5 W m-2 by 2100 (Riahi et al., 2017). For the forthcoming IPCC AR6, the comparable SSP5-8.5 scenario is associated with an atmospheric CO2 concentration of almost 1100 ppm by 2100 (O’Neill et al. 2016), which is a substantial increase relative to the 936 ppm reported by Riahi et al. (2007).

As summarized by O’Neill et al. (2016) and Kriegler et al. (2017), the SSP5-8.5 baseline scenarios exhibit rapid re-carbonization, with very high levels of fossil fuel use (particularly coal). The plausibility of the RCP8.5-SSP5 family of scenarios is increasingly being questioned. Ritchie and Dowlatabadi (2018) challenge the bullish expectations for coal in the SSP5-8.5 scenarios, which are counter to recent global energy outlooks. They argue that the ‘return to coal’ scenarios exceed today’s knowledge of conventional reserves. Wang et al. (2017) have also argued against the plausibility of the existence of extensive reserves of coal and other easily-recoverable fossil fuels sufficient to support such a scenario.

Most importantly, Riahi et al. (2017) found that only one baseline scenario of the full set (SSP5) reaches radiative forcing levels as high as RCP8.5 (compared with the 40 cited by van Vuuren et al. 2011). This finding suggests that 8.5 W m-2 can only emerge under a very narrow range of circumstances. Ritchie and Dowlatabadi (2018) note that further research is needed to determine whether plausible high-emission reference cases consistent with RCP8.5 could be developed with storylines that do not lead to re-carbonization.

Given the socio-economic nature of most of the assumptions entering into the SSP-RCP storylines, it is difficult to argue that the SSP5-RCP8.5 scenarios are impossible. However, numerous issues have been raised about the plausibility of this scenario family. Given the implausibility of re-carbonization scenarios, current fertility (e.g. Samir and Lutz, 2014) and technology trends, as well as constraints on conventional coal reserves, a categorization of RCP8.5 as ‘borderline impossible’ is justified based on our current background knowledge.

Based on this evidence, Ritchie and Dowlatabadi (2017) conclude that RCP8.5 should not be used as a benchmark for future scientific research or policy studies. Nevertheless, the RCP8.5 family of scenarios continues to be widely used, and features prominently in climate change assessments (e.g. CSSR, 2017).

JC note:  next installment is climate sensitivity

98 responses to “What’s the worst case? Emissions/concentration scenarios”

  1. The worst case for high emissions of CO2 is a lack of CO2.

    A best case is the highest level we can reach, because it helps grow everything: people, plants, animals, trees, grass, everything.

    There is no proof of any harm from more CO2 and there is plenty of proof for good from more CO2.

    • There is plenty of proof that when CO2 gets too low, everything green dies.
      But then again, this is just data, observations, real science and common sense. It does disagree with alarmist theory, luke warm theory and the climate model output they seem to trust, none of which ever worries about CO2 getting too low. Now that is a crazy obsession that I will never understand.

    • Based on this evidence, Ritchie and Dowlatabadi (2017) conclude that RCP8.5 should not be used as a benchmark for future scientific research or policy studies.

      Based on actual real evidence, from every real trusted source, CO2 sensitivity should not be used for any benchmark for future scientific research or policy studies.

  2. The big lie is the identification of a highly implausible worst case scenario, RCP 8.5, with a business as usual scenario. BAU scenarios by definition arise from simple extrapolation of long term trends. RCP 8.5 is no such case.

    • In defense of Riahi et al. (2007) they only referred to it as a conservative BAU scenario, making it clear that a number of the assumptions were at the 90th percentile. Shortening to BAU only came later.

      • That’s a wrong use of the word conservative. In forecasting, conservative means it deviates less from established trends, not more.

        Over 900 ppm CO2 by 2100 is a huge unjustified deviation from established trends.

        Under no circumstances can RCP8.5 be considered BAU; whoever says so is lying, willingly or not. Since we are talking about scientists I assume they know what they are doing and should know RCP8.5 is not BAU.

  3. While that all sounds so logical, in fact the RCP 8.5 scenario is quite implausible when examined from a supply-driven rather than a demand-driven perspective. I discuss this in some detail here, looking at fourteen supply-driven estimates.

    TL;DR? None of the fourteen get over 650 ppmv by 2100 …

    w.

  4. double sixsixman

    Since no human knows the exact effect
    of CO2 on the average temperature,
    it is meaningless to add a second wild guess
    about the future CO2 level.

    There have been predictions of a
    coming global warming catastrophe
    since the late 1950’s, starting with
    Roger Revelle.

    How many more decades will it take
    to realize the predictions have been
    60 years of wrong wild guesses?

    The strangest thing about climate change
    is that PAST actual climate change,
    since Detroit and Chicago were
    covered by glaciers 20,000 years ago,
    has been 100% GOOD NEWS.

    But predicted FUTURE climate change
    is always 100% BAD NEWS.

    The predictions get more hysterical
    as the actual climate gets better !

    This is the biggest science hoax in history
    (the false claim that man made CO2,
    which is beneficial for our planet,
    will cause a climate change catastrophe,
    including runaway warming).

    The future climate is:
    No one knows.

    The ECS of CO2 is:
    No one knows.

    The evidence of an existential threat
    = NONE !

    The evidence of good news from
    adding more CO2 to the atmosphere
    = STRONG !

    Temperature has been rising
    less than +0.1 degrees C.
    per decade since the era
    of “man made CO2”
    after the Great Depression
    ended.

    That’s mild, harmless warming,
    mainly in the six coldest months
    of the year, at night, and in the
    higher, colder latitudes, where the
    few people living there are thrilled
    by warmer winter nights.

    Why speculate about the future
    climate, when we have 300+ years
    of global warming to study, starting
    from the coldest period during the
    Maunder Minimum in the late 1600s,
    probably up at least +2 degrees C.
    of global warming since then ?

    Past climate change was good news.

    There’s no logical reason to believe
    future climate change will not be
    additional good news !

    My climate science blog:
    http://www.elOnionBloggle.Blogspot.com

  5. Gerald Ratzer

    The chart below shows two approximations to the radiative forcing of CO2.
    The black dots are from Arrhenius and the IPCC with measured data to the present and projections into the future. The measured values are not in dispute, but those outside the measured range are.
    https://www.dropbox.com/s/isl14631g20apkq/CO2%20radiative%20forcing%20approx.pdf?dl=0

    The new approximation is based on the simple physics point that, when you plot CO2 forcing against concentration, where there is no CO2 there can be no forcing from CO2. There may be forcing from other GHGs, but this chart is meant to just show the effect of CO2 and none of the other GHGs.

    The equation of the new approximation goes through the origin (0,0) and the other 7 measured data points. The best fit is a quadratic as shown in the chart.
    The important point of this approximation is a new curve with an asymptote at about 655 ppm.
    There is a small shift in the new curve relative to the logarithmic curve, so the message is that increasing the CO2 concentration from (say) 700 to 800 or more will add no extra radiative forcing from CO2. The other GHGs may continue to add heat to the atmosphere, but it will not be CO2.
    This is further reason why RCP8.5 is most unlikely.

  6. Reblogged this on Climate Collections.

  7. The worst case is snowball earth.
    Glacial – interglacial flicker is a transitional stage prior to descent into deep, possibly global glaciation (“snowball earth”) lasting some tens of millions of years.

    Humanity huddled around the equator – if we’re lucky.

    But yes – you’re right of course – the worst case is warming and CO2 fertilisation.

    The colder the better.

    • I dissent, Phil.

      What I see is that at the Mid-Pleistocene transition, with the start of the 100-kyr ice cycle, glacial periods became colder and interglacials became warmer, and the result was that the cooling trend of the Plio-Pleistocene glaciation ended.

      https://i.imgur.com/vmPIBsx.png

      The flickering is actually stabilizing the system. How about that?

    • There is a broad range of complex and dynamic interactions between components.

      “The global climate system is composed of a number of subsystems — atmosphere, biosphere, cryosphere, hydrosphere and lithosphere — each of which has distinct characteristic times, from days and weeks to
      centuries and millennia. Each subsystem, moreover, has its own internal variability, all other things being constant, over a fairly broad range of time scales. These ranges overlap between one subsystem and another. The interactions between the subsystems thus give rise to climate variability on all time scales.”

      At the level of paradigm there is a distinction to be made between simplistic mechanistic explanations governing entire epochs – which leave far too many open questions, cannot be reconciled with complex dynamics, and are expressed dogmatically as a revealed truth of science – and explanations grounded in complex system dynamics.

      “Evidence is presented supporting the hypothesis of polar synchronization, which states that during the last ice age, and likely in earlier times, millennial-scale temperature changes of the north and south Polar Regions were coupled and synchronized. The term synchronization as used here describes how two or more coupled nonlinear oscillators adjust their (initially different) natural rhythms to a common frequency and constant relative phase. In the case of the Polar Regions heat and mass transfer through the intervening ocean and atmosphere provided the coupling. As a working hypothesis, polar synchronization brings new insights into the dynamic processes that link Greenland’s Dansgaard-Oeschger (DO) abrupt temperature fluctuations to Antarctic temperature variability. It is shown that, consistent with the presence of polar synchronization, the time series of the most representative abrupt climate events of the last glaciation recorded in Greenland and Antarctica can be transformed into one another by a π/2 phase shift, with Antarctica temperature variations leading Greenland’s. This, plus the fact that remarkable close simulations of the time series are obtained with a model consisting of a few nonlinear differential equations suggest the intriguing possibility that there are simple rules governing the complex behavior of global paleoclimate.” J. A. Rial 2014 – Synchronization of polar climate variability over the last ice age: in search of simple rules at the heart of climate’s complexity – http://www.ajsonline.org/content/312/4/417.short

      I much prefer Phil’s approach as a more correct paradigm of system behavior – and far more open to the possibility of ‘simple rules of climate’ that are properties of complex and dynamic systems.

      “Climate is ultimately complex. Complexity begs for reductionism. With reductionism, a puzzle is studied by way of its pieces. While this approach illuminates the climate system’s components, climate’s full picture remains elusive. Understanding the pieces does not ensure understanding the collection of pieces. This conundrum motivates our study.” Marcia Wyatt

      Until we can see climate through this lens of dynamical complexity – the full picture will remain elusive.

  8. I asked if possibilities warrant action. But it is rather that realities offer opportunities for social, environmental and economic progress. Reductions in CFCs, nitrous oxides and methane emissions – for instance – reduce atmospheric pollution and its health and environmental effects. Short term thinking may be advantageous here.

    https://www.nature.com/articles/s41558-018-0328-1

    Mixed black carbon and sulfate emissions are especially pernicious.

    https://www.pnas.org/content/pnas/113/16/4243/F2.medium.gif
    https://www.pnas.org/content/113/16/4243

    With a warming potential enhanced by sulfate lensing – and dire health effects.

    In the mid term most new global electricity generation will be HELE coal plants that are 10% more efficient and don’t have health damaging emissions.

    https://watertechbyrie.files.wordpress.com/2017/06/hele-e1550170804379.jpg

    In the longer term – it is all supply and demand as energy demand doubles and triples – and costs of alternatives decline. But it is still just a mere 25% of CO2-eq emissions.

    https://watertechbyrie.files.wordpress.com/2015/10/em-break-even.jpg

    Increasing efficiency and declining carbon intensity across sectors remains necessary to retain economic competitiveness.

    https://iopscience.iop.org/article/10.1088/1748-9326/aad965/pdf

    In ecosystems and soils? For a plethora of reasons that have little to do with AGW.

    https://www.youtube.com/watch?v=5mbSzIojsRQ

    In the end emissions scenarios are more or less improbable assertions put into theoretically impossible projections. None of it matters a damn in the real world.

  9. Robert Clark

    The Earth has an average surface temperature of around 63°F. Absolute zero is -459.67°F. Every 24 hours the Earth radiates heat to the black sky, which is considered absolute zero. All of this heat is lost every 24 hours, never to return. Every 24 hours the Sun radiates more heat to the Earth than the Earth loses. The area of the Earth’s surface covered by water varies as the oceans rise and fall. Water reflects radiant heat. At present the Earth is losing more heat than it gains.
    As I said earlier, if Ms. Curry thinks I am wrong I would say goodbye. I will monitor.
    GOODBYE

  10. To account for lower estimates of future world population growth and much lower outlooks for emissions of non-CO2 gases, more CO2 must be released to the atmosphere to reach 8.5 W m-2 by 2100 (Riahi et al., 2017).

    This isn’t true. Population estimates have actually been revised higher in recent years. The 2010 UN median estimate for 2100 was just above 10 billion. The latest revision has it at 11.2 billion. The IIASA’s 2007 median estimate was for 2100 population of just over 8 billion. Their 2014 revision pushed that up to 9.3 billion. But of course there is uncertainty. The IIASA’s low and high variants (which are actually SSP1 and SSP3 – the same base socio-economic scenarios used by Riahi 2017) hit 7.2 billion and 13.6 billion respectively at 2100. Nor is SSP5 anything new as a low population growth, high emissions scenario. A1FI from SRES is about 20 years old and was very similar.

    I’m not sure where you’re getting “lower outlooks for emissions of non-CO2 gases” from? SSP5 does have methane growth stalling, but this appears to be due to the particulars of that socio-economic storyline. SSP3 (RCP7.0) has stronger methane growth tracking just below Riahi 2011 RCP8.5 levels (probably due to slightly lower GDP per capita growth).

    Most importantly, Riahi et al. (2017) found only one single baseline scenario of the full set (SSP5) reaches radiative forcing levels as high as the one from RCP8.5 (compared with 40 cited by van Vuuren et al. 2011).

    This is not a valid comparison. van Vuuren et al. were looking at hundreds of scenarios. Riahi et al. were looking at five, meaning 20% of their scenarios hit RCP8.5 level.

    • “Population estimates have actually been revised higher in recent years.”

      I don’t believe you have a good source for this, because:

      Fertility rates are trending slightly lower than the UN “medium variant”,
      and will soon be less than replacement rate:
      https://climatewatcher.webs.com/TFR2017.png
      ( data from UN and CIA )

      Most of the worlds emissions are from countries with below replacement fertility rates:
      https://climatewatcher.webs.com/2018_TFR_CO2.png

      And, fertility rates are falling for ALL countries.
      https://ourworldindata.org/uploads/2014/02/World-population-by-level-of-fertility-without-projections-750×525.png

      Compound fertility with the fact that per capita emissions continue to fall for most countries.

      Also compound with the fact that both carbon efficiency and fertility improve with economic development, and together, 8.5 is not plausible.

      And the future is bright 😎

      • And the future is bright 😎

        (cool shades, TE, really cool)…

      • The population projections assume huge increases in Africa and the Middle East, in nations unable to carry such growth. Their dynamic system models seem to ignore the Rwanda-type wars and genocide which take place as excess population drives migration waves and violence to reduce stress. This is already seen in Honduras and Haiti, and will be seen more in future decades.

      • I don’t believe you have a good source for this

        My sources, as explained, are the UN and IIASA population projections. About as good as you’re going to get.

        According to your extrapolation the birth rate will be negative by mid-22nd Century. Doesn’t sound like a bright future. Or maybe naive linear extrapolation just isn’t a bright idea.

      • fernandoleanme,

        That’s true of course, though it’s not clear that wars significantly stall long-term population growth. You tend to get a post-war boom. For example, the Rwandan population curve is largely back on the pre-civil war growth trend.

        But the projections also produce things like a 40-70% reduction in Chinese and Japanese working age population by 2100, with similar (though less dramatic) demographic shortfalls in many other countries. It’s quite fanciful to believe that these countries would allow such a thing to transpire. And we can see from policies being introduced there and elsewhere, which aim to increase fertility rates, that they won’t.

      • “My sources, as explained, are the UN and IIASA population projections. About as good as you’re going to get.

        According to your extrapolation the birth rate will be negative by mid-22nd Century.”

        Clearly fertility has a zero lower bound.

        Also clearly, the observed fertility rates are lower than what’s being modeled.

        “According to your extrapolation the birth rate will be negative by mid-22nd Century.”

        Hmmm…

        You are the one extrapolating to mid-22nd Century, not me.

        The extrapolations I indicated graphically are much nearer term – just a few decades – which is a reflection of just how close we are to replacement fertility rates.

        Persistence is the best starting forecast, then modify by prediction of understood dependent factors.

        Why would you believe these trends to reverse?

      • But the projections also produce things like a 40-70% reduction in Chinese and Japanese working age population by 2100, with similar (though less dramatic) demographic shortfalls in many other countries. It’s quite fanciful to believe that these countries would allow such a thing to transpire. And we can see from policies being introduced there and elsewhere, which aim to increase fertility rates, that they won’t.

        Governments don’t have babies, women have babies.

        Lots of Western governments have economic incentives for having children, including cash payments, which have been in place for decades.

        None have stopped fertility rates from falling.

      • TE,

        Persistence is the best starting forecast, then modify by prediction of understood dependent factors.

        But we do have predictions based on understanding of dependent factors – those produced by the IIASA and UN, so why would anyone want to devolve to naive linear extrapolation?

        Also clearly, the observed fertility rates are lower than what’s being modeled.

        The current CIA figure for 2017 is 2.42. That’s below the UN median expectation but about the same as the IIASA figure (2.41 for 2015-2020).

        Observations have uncertainties and near-term estimates in demography are likely to be quite uncertain, being based on small samples and reports (not all events are reported, particularly in poorer rural regions) rather than comprehensive census data.

        Censuses provide a useful point of observation for testing these projections. The most recent major census was in Pakistan in 2017 (a number of major censuses are due to happen over the next couple of years). The IIASA and UN median estimates suggest that the current fertility rate in Pakistan is about 3.3-3.5, with crude birth rates of 27-28 per 1000. By contrast the 2018 CIA estimate suggests current Pakistani TFR of 2.55 and birth rate of 21.6.

        The big problem for the CIA estimates is that the 2017 census returned a population of 207.8 million, which was significantly greater (by about 5%) than the median expectation of both UN and IIASA. This suggests that the CIA figures, at least for Pakistan, are way waaaay too low.

        Lots of Western governments have economic incentives for having children, including cash payments, which have been in place for decades.

        None have stopped fertility rates from falling.

        Fertility rates stopped falling in Western countries decades ago, and have been static since around the 1980s. Rates have actually risen in France over the past few decades, almost back up to replacement rate. Also a big rise in Russia, which has also made a big push to promote fertility, though much of that rise may just be a natural recovery from the post-Soviet fertility collapse.

        No guarantees of success, but it’s absolutely clear that the motivations for these countries to increase fertility are huge. No good reason to believe they won’t have some effect.

        Why would you believe these trends to reverse?

        No-one has suggested that trends will reverse. The median variants of both UN and IIASA projections feature declining global average TFR, going below replacement level at some point around mid-Century. Even the high end 13+ billion projection features largely static rather than growing TFR. What does appear to be the case is that the TFR decline has slowed, and is expected to slow in future. How much it slows will determine where we end up, in a probable range of about 8-12 billion.

      • TE,

        it’s absolutely clear that the motivations for these countries to increase fertility are huge.

        This is an interesting discussion, because it goes to the heart of the political divide about “climate change” and other political matters.

        Adam Smith observed the unit of action in economies is the transaction between individuals. As such, even though individuals are motivated by their own interests, the activity tends to benefit the national economy. And countries with respect for individual liberty are also those in which individual transactions produce wealthier national economies.

        Similarly, the unit of action in natural population is most commonly a negotiation between two individuals, primarily a mother.

        In both the case of economy as well as population, the general tendency of individual choice benefits the larger community.

        There are those that believe that governments should impose control over individuals to prevent some harm.

        There are those that believe, with the evidence of history detailing government tyranny in the name of some good, that government intrusion is the source of harm.

      • But we do have predictions based on understanding of dependent factors – those produced by the IIASA and UN, so why would anyone want to devolve to naive linear extrapolation?

        We also had the now notoriously failed predictions of Paul Ehrlich – why would anyone want to compare them to actual observation?

        “Fertility rates stopped falling in Western countries decades ago, been static since around the 1980s. Rates have actually risen in France over the past few decades, almost back up to replacement rate.”

        Yes, the number of countries in the world past peak population continues to grow, and Western countries continue to have lower than replacement fertility meaning natural population decline.

        “No guarantees of success, but it’s absolutely clear that the motivations for these countries to increase fertility are huge. No good reason to believe they won’t have some effect.”

        Look at this list of incentives.
        Lots of money from indebted nations, but still implied shrinking population.

        “What does appear to be the case is that the TFR decline has slowed, and is expected to slow in future. How much it slows will determine where we end up, in a probable range of about 8-12 billion.”

        Global TFR continues to fall.
        Global TFR is falling at a rate faster than the UN medium variant. And,
        Global TFR is close to replacement rate.

        https://climatewatcher.webs.com/TFR2017.png

      • TE,

        This is an interesting discussion…

        It may be, though I’m not sure who you’re having it with. Nothing in that comment addresses anything I’ve said.

        Though I will point out, as you seem to be unaware, that the decline in TFR has involved a huge amount of government intervention. Such as legalisation of abortion, promotion and free provision of contraception and family planning services, free/low-cost provision of medical services reducing infant mortality, free/low-cost education. Indeed, some believe the Pakistani TFR has failed to drop as fast as some were expecting because some government interventionist programs started to fall by the wayside a decade or so ago.

        We also had the now notoriously failed predictions of Paul Ehrlich – why would anyone want to compare them to actual observation?

        What? What’s Ehrlich got to do with anything? And if you don’t compare predictions with observations how would you know they failed?

        Global TFR continues to fall.

        It has fallen, and it’s reasonable to expect it will likely continue to fall to some extent.

        Global TFR is falling at a rate faster than the UN medium variant.

        As I’ve already pointed out, and you’ve failed to address (instead writing multiple irrelevant paragraphs), the most important recent observational test (the Pakistan census) indicates underestimation of growth by all the projections. Maybe that’s an aberration and the UN median estimate for the past few years will indeed prove to be too high for the global picture, but that remains to be seen.

        You’re also repeatedly ignoring that I cited two population projections which were revised upwards (UN and IIASA). And since you’ve taken things on a wild tangent here, it probably requires reminding that my entire point was that major population projections have been revised upwards in recent years, not downwards as claimed in the original article.

        Global TFR is close to replacement rate.

        Linguistically the word ‘close’ could be considered reasonable. But then it would also be true that TFR is close to replacement rate in the UN and IIASA population projections I’ve cited, so I can’t see what the point is in saying this.

  11. Dr C is putting up posts at breakneck speed, leaving denizens of Climate, etc. with whiplash in her wake. Seems the karate kickin’, cross between a lady & a tomboy isn’t about to give up fighting the good fight. The despondency of sitting up all night in an Arizona airport (wondering aloud why?) seems but a distant mirage in the desert now. And those of us standing on the sidelines of the great climate change debate can be heard cheering her on:
    YOU GO CLIMATE GIRL(!)

  12. David Albert

    “For the forthcoming IPCC AR6, the comparable SSP5-8.5 scenario is associated with an atmospheric CO2 concentration of almost 1100 ppm by 2100 (O’Neill et al. 2016), which is a substantial increase relative to the 936 ppm reported by Riahi et al. (2007).”

    The Mona Loa data show atmospheric CO2 increasing at about 2PPM per year independent of human emissions for the last 20 years. That rate gets us up to about 580 ppm by 2100. There is some disconnect between the real world and the estimates of future CO2 in these quoted works.

    • Dave, forgive my peeve, but it’s Mauna Loa. (i’s born in honolulu and i bristle every time i sees the misspelling… 😖) Agreed that it should at least get honorable mention in a post like this. If the growthrate is driven by temperature, for whatever the reason, as it appears so for 60 years and counting now, then the possibility of it should not be ignored. It would render RCPs moot. As i mentioned to you a couple posts back, it’s a very entrenched paradigm that emissions drive the growthrate. (and fightin’ it is about like spittin’ in the wind)…

    • The Mauna Loa data show atmospheric CO2 increasing at about 2PPM per year independent of human emissions for the last 20 years.

      It obviously does not show any such thing. CO2 levels have accelerated their rate of increase from less than 0.5 ppm/year in 1950 to 2.5 ppm/year now.

      https://i.imgur.com/H9doRBQ.png

      Gaussian smoothing of temperature from HadCRUT4 and CO2 rate of increase from Law Dome and Mauna Loa.

      The increase in CO2 levels matches well our emissions. What it does not match is temperature. We can reject that temperature changes are the main cause of the increase in CO2. We can equally reject the opposite, that changes in CO2 are the main cause of temperature changes.

      If you want to calculate BAU CO2 levels for 2100 you have to include the acceleration in its rate of increase. You simply do a linear regression of the CO2 rate of increase between 1959 and 2018 and then extrapolate to every year to 2100. Then you add the accumulated increase to 2018 levels. Five minutes in Excel, but then you have to know how to spell Mauna Loa to find the data.
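Javier’s five-minute exercise can be sketched in a few lines of Python. The growth-rate series below is a made-up linear stand-in for the 1959-2018 Mauna Loa record (roughly 0.8 rising to 2.4 ppm/yr), so the 2100 figure is only illustrative:

```python
import numpy as np

# Sketch of the extrapolation described above: regress the annual CO2
# growth rate on year, extrapolate the fitted rate to 2100, and add the
# accumulated increase to the 2018 level. The rates are hypothetical
# stand-ins for the real Mauna Loa data.
years = np.arange(1959, 2019)
rates = 0.8 + 0.027 * (years - 1959)   # ppm/yr, ~0.8 in 1959 rising to ~2.4 in 2018

slope, intercept = np.polyfit(years, rates, 1)  # linear regression of rate vs year

co2_2018 = 407.0                        # approximate 2018 annual mean, ppm
future = np.arange(2019, 2101)
co2_2100 = co2_2018 + np.sum(intercept + slope * future)
print(round(float(co2_2100)))           # ≈ 695 ppm with these stand-in numbers
```

Even with the acceleration included, these illustrative numbers land near 700 ppm by 2100: notably higher than the ~580 ppm implied by a constant 2 ppm/yr, but still far below the ~936 ppm associated with RCP8.5.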

      • David Albert

        My apologies for the misspelling to both Javier and afonzarelli. I am lost if spellcheck fails me. So is Munshi wrong about the responsiveness of the atmospheric CO2 to emissions (https://tambonthongchai.com/2018/12/19/co2responsiveness/)? Are there measurements that disprove Harde 2017 and Salby?

      • Murry Salby lost his way, has not published his claims, and has been refuted multiple times. See for example the comments:
        https://judithcurry.com/2016/08/10/murry-salbys-latest-presentation/

        I can’t follow Harde as I am not an expert on models and climate sensitivity, but he has failed to impress anybody else and he is only cited by a handful of skeptics, so I wouldn’t put much weight on him being right.

        That site I can’t access.

      • Regarding Harde 2017, see:
        http://dx.doi.org/10.1016/j.gloplacha.2017.09.015

        And then the open comment by the editors Grosjean et al. 2018.

        I would think Harde’s 2017 paper has been thoroughly demolished.

      • Javier, puh-lease, once again, the correlation is with hadsst3sh, not hadcrut4gl. (presumably the southern ocean is where all the action is) It’s very difficult to have a discussion when there is no agreement on the data to use. Secondly, there is no point in going back any further than 1958 because ice cores may well have a smoothing problem. Especially so will a presumed large change in the growthrate in the 1930s. Smoothing would be a non problem were there little change in the growthrate over time as was certainly a case in the 1800s. (and guess what? the temperature correlation shows a near perfect match with cores in the 19th century)…

      • Especially so will… should read Especially so with

      • Javier, i left the series up with the graph so that you can fiddle with it. Take the temperature series all the way back to 1850 and you’ll see just how good a fit with ice cores that it is. (generally greater than zero and less than .05 ppm/ month) Those numbers in the 1800s depend entirely on the amount of warming that we have had since 1958. If we had warmed just a mere .1C more or .1C less than we actually have, then there wouldn’t be that nice fit. It would be way off. (a remarkable co-inkidink and as far as i know, mine is the only soul who has ever noticed this)…

      • Afonzarelli, a few points:
        A correlation by itself does not point to the causal agent. Unless you have a strong theory well supported by evidence that indicates that one agent cannot be the cause of the other you are left with a correlation.

        Second, by working with a multi-month mean and calculating a derivative over it you discount the year-round stable increase due to our emissions. Your derivative shows changes of ± 0.15 ppm/time^2 that fit SH SST changes. Big deal, this is known since about 1976. The important change is the 1.5 ppm increase on average that we have been causing since 1958 and that disappears from your graph when you take the derivative.

        It is not that nobody but you notices this, it is that everybody else knows the wiggles in CO2 are due to the Southern Ocean, but the long-term trend is due to us.

      • …the wiggles in CO2 are due to the Southern Ocean, but the long-term trend is due to us.

        Then why is it that the variability features (wiggles) and the long term trend features of the southern ocean SSTs are a near perfect match with those of the carbon dioxide growthrate?

      • Furthermore, Javier, how long must a correlation go on before one can say that we have something more than just a correlation? This has been going on for well over half a century now. If it goes on for a whole century, should we take notice? (how long?) And if the southern SSTs should tank for an extended period of time and the carbon dioxide growthrate along with it, should we pay it any mind then? At what point would you say that we have something more than just a correlation?

      • why is it that the variability features (wiggles) and the long term trend features of the southern ocean SSTs are a near perfect match with those of the carbon dioxide growthrate?

        The wiggles in CO2 are caused by the Southern and Pacific oceans SST, that’s why they match.

        The long term trend doesn’t match. Between 1975-1998 CO2 increased and SST increased, so you can make them match. Between 1998-2018 they diverge: CO2 increased; SST didn’t, except for the 2016 El Niño.

        http://woodfortrees.org/plot/esrl-co2/mean:24/plot/hadsst3sh/from:1958/mean:24/scale:160/offset:330

        At what point would you say that we have something more than just a correlation?

        A correlation is just a correlation unless you can show causality. If you can’t, a good hypothesis with a mechanism that explains the relationship can be convincing if it does not contradict other evidence.

        A correlation between two things that are generally increasing (temp and CO2) or decreasing over time is not very impressive. It becomes more convincing if both change the sign of their trend following each other up and down. Then at least you can be more sure that they actually correlate.

      • Javier, you’ve got the wrong correlation(!) The southern SSTs correlate with the growthrate, not the rise in CO2. (look closely at my woodfortrees graph that i posted above) That’s why it’s important to at least understand the argument before we engage in discussion. Rud made the exact same mistake that you are making in his Salby hit job post over at wuwt (and even Ferdinand called him out on that). i remember telling Bart that it was like trying to play chess with someone who doesn’t know how the pieces move. Bart quipped that he felt like arthur dent teaching a bunch of cavemen how to play scrabble (☺️)

      • It is no mistake, Afonzarelli. At least on my part. When you take the derivative you calculate the instant rate of change, so the long term rate is no longer represented. See what happens when you take the derivative of CO2:
        http://woodfortrees.org/plot/esrl-co2/derivative
        Where is the long-term trend? Gone.

        Your comparison of the long-term trend with the derivative of one variable is an apples to oranges comparison. If SST changes in the SH are driving the increase in CO2, then their long-term trends (without taking derivatives) should match. They don’t.

      • Alright, Javier, never mind the countless graphs that show the correlation. Rate of change, derivative, whatever. Bottom line is that when the SSTs stall, then so does the growthrate. When they move, so also the growthrate. (the graphs are at least useful tools in demonstrating, quantifying this) No one out there is making the claim that the correlation is with SSTs & the keeling curve, except Beck and he’s dead. That is a mistake on your part. It’s either one or the other. And no one is making the claim with the keeling curve because, as you have noted, there is no match. But there is a match with temperature and rate of change, even if just on a yearly basis, for whatever the reason. And if that correlation continues, then sooner or later it will be undeniable. If SSTs tank and the growthrate tanks right along with it, will it then change your mind? (ferdinand has agreed to concede the point were that the case) There is no reason for this correlation to be happening at all, so don’t count on it being spurious after sixty plus years. And as i showed, there is even a match with cores when they exhibit little in the way of smoothing, the late 1800s. If and when this paradigm does find acceptance, then we’ll have to ask the question: why? That would indeed be an interesting discussion. (too bad it has to be put off until cooler heads prevail)…

      • Oops, something messed up with the bold there, Javier. (hate it when that happens… ☹️)

      • there is a match with temperature and rate of change even if just on a yearly basis, for whatever the reason.

        This is the reason:
        Bacastow, R.B., 1976. Modulation of atmospheric carbon dioxide by the Southern Oscillation. Nature, 261 (5556), p.116.
        Notice the date.

        Bottom line is that when the SSTs stall, then so does the growthrate.

        That is actually not correct. When SST stalls, if the CO2 growth rate is above zero (on the Y axis), CO2 is still increasing even if the curve is flat.

        if that correlation continues, then sooner or later it will be undeniable.

        As far as I know nobody is denying that.

        There is no reason for this correlation to be happening at all

        I guess you should say that you don’t know of any reason. There is a reason for everything, and a reasonable one has been proposed for this. I can’t say if it is true, but I will only exchange it for a better one.

      • There is no reason for this correlation to be happening at all

        Javier, poor articulation here on my part. If i were to rephrase it i’d say something like there is no reason for this correlation to be happening at all by chance. i really don’t like making comments using the cell. All that i have right now is my phone and it gives this tiny window for posting comments. Very difficult to proofread with. (probably had something to do with me failing to close the bold, too) Easter i plan on buying a tablet and maybe it will go better with that…

      • Javier, i think the problem is that we need a theory that fits all the data and not just the data that we cherry pick at our convenience. We’re not here to make things easier on ourselves nor to dismiss valid points of view that are inconvenient for us. Down a few comments below i believe that i have the definitive argument for the validity of ice cores. This runs counter to the notion that the rise in CO2 isn’t anthropogenic. When we have all the pieces accurately assembled then maybe we have a shot at uncovering the truth. (and that truth may well be something that none of us expected)…

      • Javier wrote: “The increase in CO2 levels matches well our emissions. What it does not match is temperature. We can reject that temperature changes are the main cause of the increase in CO2. We can equally reject the opposite, that changes in CO2 are the main cause of temperature changes.”

        You should be asking how temperature has varied with total forcing, not just CO2. No one ever plots this. Most of the change in forcing (about 0.4 W/m2/decade) and in temperature (about 0.2 K/decade) has occurred over the last few decades. Both changes are fairly linear over this period, and the ratio gives a TCR of about 1.7. Both are modestly perturbed by other factors: temperature is perturbed by chaotic phenomena like ENSO and probably slower oscillations; volcanic eruptions dramatically perturb forcing, changing too quickly for the mixed layer to fully respond. If you set aside chaos in the ocean and the large transient changes in forcing from volcanoes, the big picture is clear: increased forcing causes warming. Rising CO2 is a forcing.

        Now you can, if you wish, go back further in time. The change in forcing is much slower, as is the overall warming rate. The reliability of the data is worse. The influence of chaos (unforced/internal variability) is more important. The warming from 1920-1945 (0.4 K) appears to be about twice as big as expected from the forcing change. This discrepancy is only about 0.2 K. A major El Nino (a chaotic perturbation in upwelling and subsidence) can produce 0.3 K of warming in six months. Given the chaotic nature of our climate system, looking for perfection in the relationship between forcing and temperature is a fool’s errand. Plotting only changes with time obscures cause and effect.

        The laws of quantum mechanics and spectra of GHGs tell us that rising CO2 will produce a forcing, a reduction in the rate of radiative cooling to space. The law of conservation of energy tells us that our planet will warm until that radiative imbalance is corrected. Given the chaotic nature of climate and the difficulty of measuring small changes in temperature over several decades, it should be clear that our climate system is a lousy place to conduct experiments on this subject, but the big picture appears to be correct.
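The trend-ratio estimate of TCR above is easy to reproduce. The sketch below assumes the canonical 3.71 W/m2 per doubling of CO2 (the comment does not say which value it used):

```python
# Back-of-envelope transient climate response (TCR) from the trend ratio
# quoted above: warming trend divided by forcing trend, scaled by the
# forcing for doubled CO2.
f2x = 3.71            # W/m2 per doubling of CO2 (canonical value, an assumption here)
dT = 0.2              # K/decade, quoted warming trend
dF = 0.4              # W/m2/decade, quoted forcing trend
tcr = f2x * dT / dF   # K per doubling implied by the ratio
print(round(tcr, 3))  # ≈ 1.855 K, the same ballpark as the "about 1.7" above
```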

      • Franktoo,

        Forcings are theoretical calculations from an imperfect knowledge. Therefore they do not constitute evidence and cannot be given precedence over evidence.

        I agree that physics is very clear that an increase in GHGs should produce warming, and if it only depends on the Stefan-Boltzmann equation (it does not) it should produce ~ 1 °C of warming per doubling of CO2, ceteris paribus. But obviously the rest of the system responds to any change, and we don’t know how or by how much, so we don’t know how much warming an increase in CO2 produces, and thus considering only its theoretical forcing without knowing the response and feedbacks is the real fool’s errand.

        We have been told that between 1945-1975 the world cooled from a combination of increasing CO2 and increasing aerosols. Logic demands that the same combination should have produced cooling between 1910-1945. Solar activity was also lower in the 1910-45 period than in the 1945-75 period. The 1910-45 warming was of 0.5 °C versus 0.6 °C for the 1975-2000 period. Phil Jones acknowledged that they are not statistically significantly different. The 1910-45 warming period is unexplainable by your paradigm.

        A scientist only has the evidence to guide him/her. The evidence is very clear that CO2 cannot be the main cause of the warming of the world. Essentially it did not contribute to the early 20th century warming, and this indicates it cannot have contributed by more than one third to the late 20th century warming. The evidence will be the lack of warming for the first third of the 21st century.

      • Javier wrote: “Forcings are theoretical calculations from an imperfect knowledge. Therefore they do not constitute evidence and cannot be given precedence over evidence.”

        Nonsense! Radiative transfer calculations are derived from quantum mechanics and the heart of modern physics. The reliability of these calculations has been tested in the real atmosphere numerous times. See:

        https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer

        The exact composition of the atmosphere through which radiation transfer occurs has an impact on the calculations. After modeling changes to the atmosphere for future scenarios, the forcing for 2XCO2 ranges from 2.4 to 4.4 W/m2 because the clouds, humidity and lapse rates in these hypothetical future atmospheres vary significantly. However, there is much less uncertainty about the forcing for doubling CO2 in today’s atmosphere and that of the past half-century. Therefore, several W/m2 less heat has been escaping through our atmosphere than would have before GHGs began rising. That is a lot of energy. If all of the heat from an imbalance of 1 W/m2 remained in a 50 m mixed layer of the ocean, it would warm 0.2 degC/year.
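For what it’s worth, the mixed-layer figure can be checked with round numbers for seawater (the property values below are textbook approximations, not the commenter’s own inputs):

```python
# Warming rate of the ocean mixed layer if it retained all the heat from a
# 1 W/m2 radiative imbalance.
seconds_per_year = 3.156e7
depth, rho, cp = 50.0, 1025.0, 3990.0   # m, kg/m3, J/(kg K): rough seawater values
heat_capacity = depth * rho * cp        # J per m2 per K of a 50 m column
warming = 1.0 * seconds_per_year / heat_capacity
print(round(warming, 2))                # ≈ 0.15 K/yr, same order as the 0.2 quoted
```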

        Javier wrote: “I agree that physics is very clear that an increase in GHGs should produce warming, and if it only depends on the Stefan-Boltzmann equation (it does not) it should produce ~ 1 °C of warming per doubling of CO2, ceteris paribus. But obviously the rest of the system responds to any change, and we don’t know how or by how much, so we don’t know how much warming an increase in CO2 produces, and thus considering only its theoretical forcing without knowing the response and feedbacks is the real fool’s errand.”

        A gross oversimplification. It is possible to put some limits on how little warming anthropogenic radiative forcing could have produced. The climate feedback parameter for a graybody model or the IPCC’s models without feedbacks is -3.3 or -3.2 W/m2/K. There is observational evidence from space during the seasonal cycle that LWR feedback through clear skies (where only WV and LR feedbacks operate) and cloudy skies is about -2.2 W/m2/K, meaning WV+LR feedback is about +1 W/m2/K. AOGCMs make similar predictions for WV+LR feedback during both seasonal and global warming. The absence of amplified warming in the upper tropical troposphere means some modeled negative lapse rate feedback is missing. Feedback from changes in surface albedo (aka ice-albedo feedback, mostly seasonal snow and sea ice) is almost certainly positive, but hopefully as small as the IPCC projects. So, before accounting for changes in clouds, a reasonable estimate for the climate feedback parameter is about -2 W/m2/K (ECS of about 1.8 K/doubling).

        AOGCMs produce high climate sensitivity by adding a total of about 1 W/m2/K of positive cloud feedback in the LWR and SWR channels. In contrast to the predictions of AOGCMs, there is no positive cloud LWR feedback in response to seasonal warming. We might optimistically say the same is true for global warming. That leaves SWR cloud feedback. About 100 W/m2 of SWR is reflected back to space, but only 2/3rds is by clouds. So a +/-1%/K change in reflection of SWR by clouds is about a feedback of +/-0.7 W/m2/K. And given that the LGM was 6 K colder, changes dramatically bigger than +/-1%/K don’t seem reasonable. That would put the overall climate feedback parameter between -1.3 and -2.7 W/m2/K (ECS 1.3 to 2.7 K/doubling). With the estimated current forcing at 2/3rds of a doubling (or higher if the aerosol indirect effect is smaller than assumed), equilibrium warming would be about 0.9 K and the transient response 0.6 K.

        Since the climate feedback parameter is in the denominator when calculating the amount of warming from a forcing, a negative feedback doesn’t cause as big a change as a positive feedback of the same magnitude, which brings the climate feedback parameter closer to zero and a runaway GHE. No matter how I try (and you should try for yourself), I can’t convince myself that negative cloud feedbacks can bring ECS below 1 K/doubling. That means at least half of current warming was anthropogenically forced. The ECSs obtained from various EBMs assume that all observed warming was anthropogenically forced.
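The conversion running through this comment is ECS = F2x / |λ|. A quick table (assuming the canonical F2x = 3.71 W/m2) reproduces the quoted figures:

```python
# ECS implied by each climate feedback parameter discussed above,
# via ECS = F2x / |lambda|, with F2x = 3.71 W/m2 assumed.
f2x = 3.71
ecs = {lam: round(f2x / abs(lam), 2) for lam in (-3.2, -2.7, -2.0, -1.3)}  # W/m2/K
print(ecs)  # {-3.2: 1.16, -2.7: 1.37, -2.0: 1.85, -1.3: 2.85}
```

Note how -2.0 W/m2/K maps to the “about 1.8 K/doubling” quoted earlier, and the -1.3 to -2.7 range to “ECS 1.3 to 2.7 K/doubling”.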

        I can’t find convincing evidence in the proxy record for the last 70 centuries that natural variability has changed global temperature by 0.9 K in a half century. (Yes, there are several large warming events in Greenland ice cores, but they are regional proxies and subject to Arctic amplification.) If there are some large warming events I’ve missed, they certainly don’t happen very often. The odds of such an unforced event beginning exactly when GHGs began rising rapidly must be pretty low. Judy has compared the partially unforced warming of SSTs in 1920-1945 to that of 1975-2000, but global temperature is up another 0.3-0.4 K since 2000. Nor can I believe in a climate feedback parameter consistent with half or less of this warming being unforced. Above you suggest that almost anything is possible. That doesn’t make sense to me.

      • Nonsense! Radiative transfer calculations are derived from quantum mechanics and the heart of modern physics. The reliability of these calculations has been tested in the real atmosphere numerous times.

        Yes, but the problem is the underlying assumption that only radiative changes matter. Take solar forcing, for example: it only takes into account TSI. It does not take into account the different effects of UV on stratospheric ozone, the solar wind, magnetic coupling, electric coupling, or the solar particle rain. It is obvious to anybody unbiased that “Forcings are theoretical calculations from an imperfect knowledge. Therefore they do not constitute evidence and cannot be given precedence over evidence.”

        The rest of your estimates following the IPCC have the same problem. They rely on unproven assumptions, most of which you are not even aware of. For example, one big assumption is that loss of energy by the Earth is essentially uniform on average across the globe. However, in reality a big part of the energy lost by the Earth takes place at the winter dark pole. This loss is unrelated to albedo (which has a value of zero there), is very little affected by clouds (which are much reduced), and couldn’t care less about how much CO2 there is, as the energy is going away anyway. It essentially depends only on the amount of heat transported to the pole during winter, and that is not uniform. It depends on the relative strength of atmospheric meridional/zonal wind circulation, which is not random or averaged. Now show me where that is taken care of in your calculations.

        The problem is defining climate change exclusively in terms of the radiative budget, when it is clearly a lot more. From then on it is all fantasy, and the final value of ECS can perfectly well lie outside the range being considered. In fact the ECS concept might not be very relevant, as a doubling of CO2 might not produce a fixed amount of warming, depending on a lot of other things.

        Who is the one that is oversimplifying?

        The odds of such an unforced event beginning exactly when GHGs began rising rapidly must be pretty low.

        I don’t know if you are aware that a stretch of seven decades with above average solar activity between 1935-2005 defining the Modern Solar Maximum has not happened before for at least 600 years (we know the LIA had below average solar activity). What are the odds that a one in 600 years warming event and a one in 600 years high solar activity period should coincide? And then why are we ruling it out as an explanation without good evidence?

    • David Albert asked: “Are there measurements that disprove Harde 2017?”

      Harde has been rebutted: https://epic.awi.de/id/eprint/46881/1/revision_harde_comment.pdf

      His original paper is here:
      https://edberry.com/SiteDocs/PDF/Climate/HardeHermann17-March6-CarbonCycle-ResidenceTime.pdf

      I haven’t studied either paper thoroughly. It seems obvious to me that CO2 remaining essentially constant throughout almost 100 centuries of the Holocene and then rising 100 ppm in the past century must be due to anthropogenic emissions (which total enough to explain a rise of more than 200 ppm). Yes, there is some outgassing from the warming ocean, but that process is limited by the rate of overturning of the deep ocean, ca. a millennium. On a shorter time scale, we know that the mixed layer of the ocean (about the top 50 m) warms and cools by physical mixing by winds. During any year, CO2 (and heat) from the mixed layer can equilibrate with the atmosphere. The warmer ocean during the 1997/8 El Nino (+0.3 K) appears to have caused an enhanced annual rise in CO2 at Mauna Loa from 2 ppm to 3 ppm. The following La Nina (-0.3 K) resulted in a rise of only 1 ppm. IMO, this gives us a reasonable measure of how much outgassing immediately occurs from a 0.3 K increase in SST. The total increase in SST is about twice this amount.

      Now, the slow overturning of the ocean brings new water to the surface, where it can outgas CO2 because the surface is warmer. The same process is taking heat below the mixed layer. During the LIA, the drop in CO2 due to cooling was barely detectable, less than 10 ppm. I think that is a practical estimate for how much CO2 might have outgassed due to warming in the past century.

      • David Albert

        Franktoo
        Thanks for this explanation. I have tried to understand the argument from the IPCC that “all the increase in CO2 since the industrial revolution is anthropogenic” but find more problems than explanations. Harde 2017 builds on Murry Salby’s work from his textbook and video presentations. Kohler’s response to Harde misses the main points and, in my opinion, fails at rebuttal. Harde’s response to Kohler, with which I agree, was censored by the journal (https://hhgpc0.wixsite.com/harde-2017-censored). Munshi’s work I referenced above seems to corroborate Harde. Dr. Ed Berry (https://edberry.com/blog/climate-physics/agw-hypothesis/contradictions-to-ipccs-climate-change-theory/) shows more reasons to accept Harde and reject the IPCC statement I am testing. I have concluded that this IPCC statement is false and that human emissions are only a small part of the atmospheric CO2, which has a residence time of about 4 years rather than the 50 mentioned by the IPCC. That is why I don’t think RCPs are useful for projecting forcing or warming.

      • Dave, the problem that i have with ice cores being so readily dismissed is that there are a good number of cores with different accumulation rates and yet they all tell us essentially the same thing. If cores were not accurate, then cores with higher accumulation rates would show us significantly higher ppm than those with lower accumulation rates. What other reason could there be for this other than the cores themselves being accurate?

      • I have concluded that this IPCC statement is false and human emissions are only a small part of the atmospheric CO2 which has a residence time of about 4 years rather that the 50 mentioned by IPCC.

        Well then you are as wrong as Harde and Salby, and there is nothing further to discuss.

      • David Albert

        afonzarelli says: “What other reason could there be for this other than the cores themselves being accurate?”
        It has been several years since I thought about ice core accuracy but I remember this (http://www.co2web.info/ESEF3VO2.pdf) being important in my decision process. Salby talks about the trapping of CO2 in ice cores as a nonconservative process, probably referring to the formation of clathrates, and I question the ability of the ice cores to “catch” changes in atmospheric CO2 that last less than a century or so. I will spend some time considering your question. Thank you for your thoughtful response.

      • David: Thanks for the links. I’ve spent some time trying to understand the Bern model and its critics. As best I can tell, Harde isn’t using a multi-compartment model and is therefore “wrong”. There IS a mixed layer of the ocean where both heat and CO2 equilibrate with the atmosphere within months, and a vastly larger deep-ocean compartment that equilibrates over a millennium. And land compartments with intermediate time scales. The system of coupled differential equations that results stretches beyond my mathematical experience, but it is my understanding that their solution can be described by impulse-response functions, the series of exponential decays in the Bern model. However, the size and time constant for these various compartments are obtained by a best fit to the record of emissions and atmospheric level of CO2. Therefore the slowest processes – the ones most important to the future – have the greatest uncertainty. In the long run (according to my calculation), the airborne fraction is going to drop to below 20% as more CO2 is absorbed by the deep ocean, so the talk of saturating sinks is exaggerated. See Nic Lewis’s recent post here about TCRE and ECR.
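The impulse-response idea can be made concrete. The sketch below uses one widely cited Bern-type parameter set (coefficients of the kind reported around IPCC AR4; treat the exact values as illustrative, not as the particular fit discussed above):

```python
import math

# Bern-type impulse response: fraction of an emitted CO2 pulse still airborne
# t years later. The constant a[0] is the fraction that persists until very
# slow sinks act; the rest decays on three timescales.
a = (0.217, 0.259, 0.338, 0.186)   # weights (sum to 1)
tau = (172.9, 18.51, 1.186)        # e-folding times in years for a[1:]

def airborne_fraction(t):
    return a[0] + sum(ai * math.exp(-t / ti) for ai, ti in zip(a[1:], tau))

for t in (0, 20, 100, 500):
    print(t, round(airborne_fraction(t), 2))
```

With these coefficients roughly a fifth of a pulse stays airborne for many centuries, in line with the long-run airborne fraction of around 20% mentioned in the comment.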

        To the best of my knowledge, Harde hasn’t shown that the Bern model fails to explain the rate of disappearance of CO2 from the atmosphere after atomic bombs were tested; he has just applied a simpler model to that data. IMO, that simpler model is grossly inadequate.

        Worst of all, if one accepts Harde’s model, there is no explanation for the rise in CO2 – that happened to occur in parallel with rising anthropogenic emissions. Temperature changes the solubility of CO2, but rising temperature alone can’t possibly explain the rise in CO2 over the last half-century. If temperature were the correct explanation, ice cores would show a significant rise and fall in CO2 associated with the LIA, MWP, and other warm periods. IIRC, CO2 has varied less than 10 ppm since it reached a plateau early in the Holocene.

        It is too bad that ClimateDialogue never sponsored an online debate about the carbon cycle. My guess is that this is one case where the skeptical position would have been completely discredited. However, you never know what would happen.

    • Javier,
      “See what happens when you take the derivative of CO2:
      Where is the long-term trend? Gone.”

      http://woodfortrees.org/plot/esrl-co2/derivative/plot/esrl-co2/derivative/trend

      It is not gone. The y-axis shows it directly (in this case in ppm/month – long-term trend 0.13 ppm/month, or 1.56 ppm/year). I see this a lot: people claiming that taking the derivative removes the trend, when it actually highlights it, showing it directly rather than as the slope of the graph.
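This point is easy to demonstrate with a toy series: differencing a ramp-plus-wiggle record does not delete the long-term trend, it converts it into the mean level of the derivative (the 0.13 ppm/month slope is chosen to match the figure quoted above; the wiggle is arbitrary):

```python
import numpy as np

# Synthetic "CO2" record: linear growth of 0.13 ppm/month plus an
# ENSO-like oscillation. Differencing moves the trend into the mean level.
t = np.arange(600)                                       # months
co2 = 330 + 0.13 * t + 0.5 * np.sin(2 * np.pi * t / 44)  # made-up series
d = np.diff(co2)
print(round(float(d.mean()), 2))  # ≈ 0.13 ppm/month: the trend, shown directly
```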

      Furthermore, the correlation of the CO2 rate of change is with all temperature indices, not only SH SST. This should be obvious, since all temperature indices correlate with each other.
      http://woodfortrees.org/plot/esrl-co2/mean:12/derivative/normalise/plot/hadcrut4nh/from:1958/normalise

      This correlation is surprisingly too easily dismissed by many. It is (off the top of my grad) the best correlation of all climate change science.

      • off the top of my HEAD

      • It is not gone. The y-axis shows it directly

        Of course. It goes from being represented by the steepness of the curve to being represented by the Y-value. Which means that comparing the steepness of temperature change to the steepness of the derivative of CO2 change is an apples to oranges comparison, as I said.

        The correlation is not dismissed. It was explained by Bacastow in 1976. That’s all.

  13. Steven Mosher

    worst cases are more likely to be driven by unknowns like changes in sinks…and new sources…think methane releases..

  14. Geoff Sherrington

    Steven,
    “driven by”?
    Is that a past tense of ‘drive by’?
    Cheers Geoff

  15. I would be cautious in supposing ignorance of physical laws justifies statistical inference. The zeroth law of nature is that such laws do exist, laws which require less than 10^24 parameters to guarantee reproducibility. Those laws with which we are most familiar may be expressed as variational extrema typically for time and energy. Nature appears predisposed to doing things the quickest way possible with the least amount of work (dissipation). The crux of climate science today is focused on getting energy away from the earth’s surface through the atmosphere. To minimize the work required, Nature will presumably seek solutions maximizing flux and minimizing thermal gradients – keep the surface as cool as constraints allow. All other factors being equal, the ‘best’ solution becomes the ‘coolest’ calculation, not the median.

  17. I think RCP8.5 is implausible because, if we accept that 2xCO2 causes a 3.71 W m-2 energy imbalance, then 5.35 x ln(1370/280) is what would cause an 8.5 W m-2 imbalance. That is 960 ppm above today’s level. The CO2 level would have to increase at an average rate of 960/81 = 12 ppm per year to reach that level. 12 ppm per year is 4 times higher than the highest observed growth, and it would have to average that for all 81 years. I question whether we can find, extract and burn the amount of hydrocarbons necessary to come anywhere close to that range.
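    The arithmetic above is easy to check. A minimal sketch, assuming the simplified forcing fit ΔF = 5.35 ln(C/C0) (Myhre et al. 1998) and the comment’s round numbers (280 ppm preindustrial, ~410 ppm today, 81 years to 2100):

```python
import math

# Simplified CO2 forcing fit (Myhre et al. 1998): dF = 5.35 * ln(C / C0).
ALPHA = 5.35   # W m-2 per natural log of the CO2 ratio
C0 = 280.0     # preindustrial CO2, ppm (assumed round number)

def forcing(c_ppm: float) -> float:
    """Radiative forcing (W m-2) relative to preindustrial."""
    return ALPHA * math.log(c_ppm / C0)

def concentration_for(df: float) -> float:
    """CO2 concentration (ppm) that yields a target forcing."""
    return C0 * math.exp(df / ALPHA)

# Sanity check: doubling CO2 gives ~3.71 W m-2.
print(round(forcing(2 * C0), 2))           # 3.71

c_85 = concentration_for(8.5)              # concentration for 8.5 W m-2
rise = c_85 - 410.0                        # ppm above today's ~410 ppm (assumed)
print(round(c_85), round(rise / 81, 1))    # ~1371 ppm, ~11.9 ppm/yr to 2100
```

    This reproduces the comment’s numbers: roughly 1370 ppm by 2100, i.e. about 12 ppm of growth per year sustained for eight decades.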

  18. In 1950, total global motor vehicle production stood at 10.6 million. Today it is 97.3 million. There are few assets more underutilized than the personal automobile; in some cases autos are idle over 95% of the time.
    The market has responded with innovation and the movement toward automated ride sharing. Who knows what effect these and other market-driven solutions will have on carbon emissions over the next 100 years. There will be ideas that could not have been imagined decades ago.
    Over the last 400 years there have been ~450 major inventions and innovations. Inventors from France have accounted for 20 of that number, Germany for 32, and England for 50. Not surprisingly, the American number is 250. Those numbers will continue to grow, but most likely with greater global participation.

    In 1920, agricultural workers made up 25% of the US workforce. Now it is ~2%. In 1944, manufacturing employees were 40% of the American workforce. Now it is less than 10%. Who could have seen those changes 100 years ago?

    Or who would have thought about lasers, nanobiotechnology, autonomous hunter killer drones, unmanned aerial and maritime systems or limb regeneration?

    After Real Growth in US Gross Income averaged +3% from 1945 to 2000, who would have predicted only 1% Real Growth for the next 16 years?

    The list of major unanticipated trends and events is endless. The only certainty is that we will continue to be surprised by the unimaginable. Any emissions scenarios have to be cognizant of what we have been able to do in the past and the exponential growth in ideas we are likely to see.

    • Ceresco Kid asks: “After Real Growth in US Gross Income averaging +3% from 1945 to 2000 who would have predicted only 1% Real Growth for the next 16 years?”

      Anyone with an analytical mind shouldn’t be satisfied with this simple-minded analysis. Firstly, GDP is produced by workers, so this issue makes more sense expressed in terms of the growth of real GDP/worker (or even GDP/hour worked). So, you need to correct for population growth (which has slowed dramatically in recent decades) and for changes in the labor force participation rate. The LFPR increased dramatically as millions of women joined the work force in the 1970’s and 1980’s. Since 1970, the best decade for growth in real GDP/worker was 2%/yr and the 1970’s and 1980’s were not far behind. Then you might also want to take into account the increasing education of our workforce, which makes it more productive. Some economists think education has added about 0.4%/yr to real GDP/worker. The rest of productivity growth might be attributed to business capital investment (relatively constant as a percentage of GDP since the 1970’s, but apparently lower than the preceding century). Then we can ask your controversial questions: Have the technological developments you describe made such investments more effective during some periods in the past than others? (Answer: Certainly not 3-fold more effective.) What will happen in the future? Is the development of the Internet still making business investment more productive?

      Since 2000, we have seen a significant decline in the LFPR, which didn’t bottom out until 2015 and has recovered only marginally since. Only part of this is due to our aging population; the prime age (25-54) LFPR is about halfway back to its peak in 2000. The decline in the LFPR and slowing population growth explains a decent chunk, but certainly not all, of the underperformance after 2000.

      Economists who study the relationship between GDP and productivity (“Multi-factor Productivity Growth”) aren’t nearly as surprised as you are. Unfortunately, today there is a general disdain for experts and a switch to supporting politically-motivated extreme positions: “secular stagnation” (a concept first invented during the Great Depression and resurrected during every major downturn) vs. tax cuts promoting a return to 3% or 4% real GDP growth per year. With immigration continuing at today’s rate, our work force will grow only about 0.5% in the next decade. Improvements in education may have plateaued. If you hear the Fed or another responsible organization mention a “natural rate of real GDP growth” (currently about 2%, but controversial), their estimates are probably based on models relying on these factors.

      For a readable introduction, see http://www.bancaditalia.it/pubblicazioni/qef/2014-0231/QEF-231.pdf

      • Frank

        I understand all that. In hindsight it is all obvious. The linked document was written in 2014. I had noticed changes in growth in 2008, and I looked at how demographic and other factors had changed up to then from earlier decades. I’m not sure there was anything new in the report. You might want to read a little harder to grasp what I had written.

        To clarify my point, the question was: in the 1980s and 1990s, would they have predicted 16-17 years of lower growth after 2000? Apparently not, since we have a complete governmental apparatus and entitlement mentality that assumed continued growth of Real Income at +3%. I have no idea what the next 30-50 years will be like as to growth in income. I know that FDR’s economic advisers were in a funk during the 1937 recession, wondering if they were in a new normal of bad economic news. I have seen Larry Summers many times and know his views about future growth. I don’t know if he is correct. Apparently Obama bought into Summers’ thesis, since he went paws up and suggested we would never see over 2% growth again.

        LFPR began to plateau in the mid 1990s. Men’s had been declining since the 1940s. The overall rise was carried by women after 1965 until that stopped growing in the mid 1990s. Also, the shift from manufacturing to services jobs, begun in the 1940s, meant less productivity since the amount of capital associated with the latter is less than that associated with the former.

        My overall point was who knows what the future is going to be relative to emissions. We shouldn’t be surprised at anything.

      • cerescokid wrote: My overall point was who knows what the future is going to be relative to emissions. We shouldn’t be surprised at anything.

        Almost anything is possible, but many of the factors that determine the likely future are at least partially understood. When multiple independent factors are involved, the probability that they are all wrong in one direction is low. Does anyone expect that the number of workers in the US will consistently grow at more than 10% per decade as it did in the past? No. I’d personally be surprised if the education level of US workers continued to increase, but improved or more targeted education might occur if workers continue to be in short supply. I personally would be surprised to see the average US large business investing more revenue in future growth, whatever US corporate tax policy we have. Large companies are largely owned by “share renters”: mutual funds worried about this quarter and this year, with turnover rates of 50% per year.

        I presume you’ve heard of the Kaya Identity:

        CO2 = people * ($GDP/capita) * (energy/$GDP) * (CO2/energy)

        We have some idea of what factors have driven and are likely to drive these parameters, at least for the next few decades.
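        The Kaya identity lends itself to a toy calculation. The sketch below uses made-up round numbers (assumptions for illustration, not figures from any scenario database) chosen only to land near recent global emission totals:

```python
# Kaya identity: CO2 = population * (GDP/capita) * (energy/GDP) * (CO2/energy).
# All four inputs are illustrative round numbers, not scenario data.
population = 7.7e9          # people
gdp_per_capita = 17_000.0   # $ per person per year
energy_per_gdp = 5.0e6      # J of primary energy per $ of GDP
co2_per_energy = 6.0e-11    # tonnes of CO2 per J of energy

co2_t_per_yr = population * gdp_per_capita * energy_per_gdp * co2_per_energy
print(f"{co2_t_per_yr / 1e9:.1f} Gt CO2/yr")   # prints 39.3 Gt CO2/yr
```

        The point of the decomposition is that each factor can be projected (or argued about) separately: slowing population growth and falling energy intensity pull emissions down, while rising GDP per capita pulls them up.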

      • One of the things I should have pointed out was the myths about top marginal tax rates. It’s commonly believed that the cause of greater income equality in the 1950s and 1960s was the top marginal rate of 91%, because of massive income redistribution. Wrong. There was no massive income redistribution. Firstly, those earning over $1 million paid less than 0.3% of total taxes, and those paying the 91% rate paid less than 0.5% of total tax revenue. The other assumption is that the Effective Tax Rate with the top marginal rate at 91% was much higher than today’s. No, wrong again. In 1948 and 1949 the Effective Tax Rate was only 9%. Adding in the SS taxes gives an Effective Tax Rate on individuals of 10%. Compare that to today’s combined rate of 22%. So individuals are paying more than twice the percentage of their income in taxes today, which has been identified by some as a cause of slowed economic growth from 2000 to 2016.
        The other myth is that the Clinton tax increase balanced the budget. No, it was a very small part of the increase in tax revenue of $925 billion. The change in marginal rates contributed only $100 billion. The internet bubble and Real Growth generated the overwhelming majority of the tax revenue increase. Bush 1 increased taxes in 1991, Clinton in 1993. The combined increase of both hikes in marginal rates by 1993 was only $8 billion, which was only 0.8% of total tax revenue. Therein lies the continued myth that hiking top marginal rates solves budget deficits. It doesn’t. Real Growth in economic activity and Gross Income creates the vast majority of increases in tax revenue.
        The increase in spending on social programs of $2 trillion per year since 2000 is the real driver of deficits and explosive debt growth, especially during the Obama years, when the Debt Held by the Public went up by $8 trillion. Contrast that with increased defense spending of only $400 billion.
        The other myth about taxes is that the Bush 2 tax cuts caused the deficits during 2001 to 2008. No, dramatically slowing growth in Gross Income caused them. Between 1948 and 2000, there was not a single year where Adjusted Gross Income was below that of a previous year. There were 4 such years between 2001 and 2008. If the nominal growth of AGI for Bush II and Clinton were switched, then Clinton would not have balanced the budget and Bush II would have.
        Since there is a correlation between economic growth and per capita emissions, assuming anything about total emissions in 2100 depends on that economic growth. Continued leftist, anti-growth policies call into question the wisdom of assuming mid-20th-century growth in Adjusted Gross Income until 2100.

    • Ceresco, keep in mind, too, that the federal reserve has been particularly tight fisted since the year 2000. Greenspan was really hitting his stride after the dot.com crash, not allowing much at all in the way of fast growth. And then after him, well, they don’t call Bernanke “cranky” for nothing. Then came Obama as potus, who never did instill much in the way of consumer confidence. So there’s a lot of “intelligent design” (or lack thereof) regarding the slow growth of the last two decades…

      • Yes, it doesn’t do much for the economic vitality of a country for the Big Cheese to come out of the box the first month and bad mouth corporations. Whether they like it or not we still live in a country that is what it is because of Capitalism, warts and all.

        But we shouldn’t despair. Now we have Little Cheese, aka, AOC, who is coming to the rescue.

  19. Gerald Ratzer

    John – in support of your position, please see my earlier post above, which shows a better approximation to the radiative forcing of CO2. This new approximation shows an asymptote at 655 ppm, so values of CO2 above 700 ppm would cause no extra heating. Link to the previous post:
    https://www.dropbox.com/s/isl14631g20apkq/CO2%20radiative%20forcing%20approx.pdf?dl=0

  20. Perhaps a little o/t: an interview with Bjorn Stevens is upcoming (in German; I used Google Translate, which does a great job IMO) in the printed “Spiegel”. Very interesting what he thinks about models and their reliability. See http://www.dh7fb.de/Bjorn/bjorn.pdf .

    • thanks much for this, v. good article

    • “Simulating natural processes in the computer is always particularly sensitive when small causes produce great effects. For no other factor in the climatic events, this is as true as for the clouds. All the clouds in the sky taken together, condensed into water, would cover the earth with a film just 0.1 millimeters thick. This tiny amount of water is enough to affect the climate massively.”

      Amazing, especially since the thickness of a US dime is 1.35 mm. Still trying to get my head around that.

      I wish them luck on the models. But it seems an infinite amount of computing power still won’t do the trick if they are on the wrong track.

      Good link.
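      The quoted 0.1 mm figure is easy to verify as a back-of-envelope sketch, assuming a global-mean condensed cloud water path of about 0.1 kg per square metre (a round number broadly consistent with satellite climatologies, not a figure from the interview):

```python
# Back-of-envelope check of the 0.1 mm cloud-water film in the quote.
# The water-path value is an assumed round number, not from the article.
water_path = 0.1        # kg of condensed cloud water per m^2 (assumed)
rho_water = 1000.0      # density of liquid water, kg/m^3

thickness_mm = water_path / rho_water * 1000.0  # metres converted to mm
print(round(thickness_mm, 3))                   # prints 0.1
```

      Dividing mass per area by density gives depth, so 0.1 kg/m² of liquid water is exactly a 0.1 mm layer; the comparison with a 1.35 mm dime follows directly.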

  21. Judy, I notice that the time horizon in scenarios is being moved from the year 2100 to as far as 2300 in some studies; of course you can make the worst-case scenario even worse this way. Do you consider these moving goalposts in your evaluation?

    • Nope, I am sticking to 2100, the longest time horizon of relevance for risk management and adaptation. Year 2300 scare scenarios are intended to motivate mitigation (since the benefits of mitigation won’t be felt much in the 21st century).

  22. David Wojick

    As a semantic point, given that CO2 emissions are a rough measure of economic activity, high emissions are the best case, not the worst case. Calling them worst case implicitly endorses the notion of human CO2 caused, and seriously damaging, climate change.

    Concepts embody claims, including false claims.

  23. Pingback: What’s the worst case? Emissions/concentration scenarios | Watts Up With That?

  24. Pingback: What’s the worst case? Emissions/concentration scenarios – Enjeux énergies et environnement

  25. Pingback: What’s the worst case? Emissions/focus situations – Daily News

  26. Cynthia Maher

    Scott Adams seems persuaded by the idea that the Russian INM-CM5 model is the only one that has correctly predicted the past.
    Therefore, he believes it is better for predicting the future than the numerous models that failed.
    This is a fairly simple concept, easily grasped by non-scientists.
    It would be useful to share John Christy’s figure (the 4th chart in https://cei.org/blog/national-climate-assessment-still-needs-reset).
    That figure is for INM-CM4, though; if someone can find a similar figure for INM-CM5, that would be better.

  27. Pingback: What’s the worst case? Emissions/concentration scenarios – Brojo

  28. Pingback: Weekly Climate and Energy News Roundup #354 – Enjeux énergies et environnement

  29. Why focus on extreme worst-case scenarios for climate change but not for other causes of catastrophe? I suggest the reason is that it is being driven by Green ideology. It’s not rational. It’s the same irrational nonsense that has blocked the development and deployment of nuclear power for the past 50 years.

    Most of the effort should go into estimating the consequences (impacts) and probabilities of the most likely scenarios. That’s not being done.

    If we are going to spend an enormous amount of effort looking at worst case scenarios then we need to apportion the effort appropriately between all the possible causes of catastrophe, not spend inordinate amounts of effort on climate change.