by Alberto Zaragoza Comendador
The IPCC’s First Assessment Report (FAR) made forecasts or projections of future concentrations of carbon dioxide that turned out to be too high.
From 1990 to 2018, the increase in atmospheric CO2 concentrations was about 25% higher in FAR’s Business-as-usual forecast than in reality. More generally, FAR’s Business-as-usual scenario expected much more forcing from greenhouse gases than has actually occurred, because its forecast for the concentration of said gases was too high; this was a problem not only for CO2, but also for methane and for gases regulated by the Montreal Protocol. This was a key reason FAR’s projections of atmospheric warming and sea level rise likewise have been above observations.
Some researchers and commentators have argued that this means FAR’s mistaken projections of atmospheric warming and sea level rise do not stem from errors in physical science and climate modelling. After all, emissions are for climate models an input, not an output. Emissions depend largely on economic growth, and can also be affected by population growth, intentional emission reductions (such as those implemented by the aforementioned Montreal Protocol), and other factors that lie outside the field of physical science. Under this line of reasoning, it makes no sense to blame the IPCC for failing to predict the right amount of atmospheric warming and sea level rise, because that would be the same as blaming it for failing to predict emissions.
This is a good argument regarding Montreal Protocol gases, as emissions of these were much lower than the IPCC forecasted. However, it’s not true for CO2: the over-forecast in concentrations happened because in FAR’s Business-as-usual scenario over 60% of CO2 emissions remain in the atmosphere, a much higher share than has been observed in the real world. In fact, real-world CO2 emissions were probably higher than forecasted by FAR’s Business-as-usual scenario. The only reason one cannot be sure of this is the great uncertainty around emissions of CO2 from changes in land use. For the rest of CO2 emissions, which chiefly come from fossil fuel consumption and are known with much greater accuracy, there is no question they were higher in reality than as projected by the IPCC.
In the article I also show that the error in FAR’s methane forecast is so large that it can only be blamed on physical science – any influence from changes in human behaviour or economic activity is dwarfed by the uncertainties around the methane cycle. Thus, errors or deficiencies in physical science are to blame for the over-estimation in CO2 and methane concentration forecasts, along with the corresponding over-estimation in forecasts of greenhouse gas forcing, atmospheric warming, and sea level rise. Human emissions of greenhouse gases may indeed be unpredictable, but this unpredictability is not the reason the IPCC’s projections were wrong.
Calculations regarding the IPCC’s First Assessment Report
FAR, released in 1990, made projections according to a series of four scenarios. One of them, Scenario A, was also called Business-as-usual and represented just what the name implies: a world that didn’t try to mitigate emissions of greenhouse gases. In FAR’s Summary for Policymakers, Figure 5 offered projections of greenhouse-gas concentrations out to the year 2100, according to each of the scenarios. Here’s the panel showing CO2:
I’ve digitized the data; the concentration in the chart rises from 354.8 ppm in 1990 to 422.75 ppm by 2018, a rise of 67.86 ppm. Please note that slight inaccuracies are inevitable when digitizing, especially with a document like FAR that was first printed, then scanned and turned into a PDF.
For emissions, the Annex to the Summary for Policymakers offers a not-very-good-looking chart; a better version is Figure A.2(a), on page 331 of the Annex to the full report:
Some arithmetic is needed here. The concentrations chart is in parts per million (ppm), whereas the emissions chart is in gigatons of carbon (GtC); one gigaton equals a billion metric tons. The molecular mass of CO2 (44) is 3.67 times that of carbon (12). Using C or CO2 as the unit is merely a matter of preference – both measures represent the same thing; the only difference is that figures expressed as C are 3.67 times smaller than the same figures expressed as CO2. One ppm of atmospheric CO2 weighs approximately 7.81 gigatons of CO2; if we express that as GtC rather than GtCO2, the equivalent figure is 7.81 / 3.67 = 2.13.
Under FAR’s Business-as-usual scenario, cumulative CO2 emissions between 1991 and 2018 were 237.61GtC, which is equivalent to 111.55ppm. Since concentrations increased by 67.86ppm, that means 60.8% of CO2 emissions remained in the atmosphere.
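As a sanity check, the conversions and the implied airborne fraction can be reproduced in a few lines of Python. All inputs are the digitized figures above, so the outputs are necessarily approximate:

```python
# Unit conversions for atmospheric CO2 (figures digitized from FAR, so approximate).
GT_CO2_PER_PPM = 7.81                 # approximate weight of 1 ppm of atmospheric CO2, in GtCO2
C_TO_CO2_MASS_RATIO = 44 / 12         # molecular mass of CO2 vs carbon, ~3.67
GTC_PER_PPM = GT_CO2_PER_PPM / C_TO_CO2_MASS_RATIO   # ~2.13 GtC per ppm

far_emissions_gtc = 237.61    # FAR Business-as-usual, cumulative 1991-2018
far_conc_rise_ppm = 67.86     # digitized: 354.8 ppm (1990) -> 422.75 ppm (2018)

emissions_ppm = far_emissions_gtc / GTC_PER_PPM       # ~111.5 ppm
airborne_fraction = far_conc_rise_ppm / emissions_ppm
print(round(airborne_fraction * 100, 1))              # ~60.8 (percent)
```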
Now, saying that a given percentage of emissions “remained in the atmosphere” is just a way to express what happens in as few words as possible; it’s not a literally correct statement. Rather, all CO2 molecules (whether released by humankind or not) are always being moved around in a very complex cycle: some CO2 molecules are taken up by vegetation, others are released by the ocean into the atmosphere, and so on. There is also some interaction with other gases; for example, methane has an atmospheric lifespan of only a decade or so because it decays into CO2. What matters is that, without man-made emissions, CO2 concentrations would not increase. Whether the CO2 molecules currently in the air are “our” molecules, the same ones that came out of burning fossil fuels, is irrelevant.
And that’s where the concept of airborne fraction comes in. The increase in concentrations of CO2 has always been less than man-made emissions, so it could be said that only a fraction of our emissions remains in the atmosphere. Saying that “the airborne fraction of CO2 is 60%” may be technically incorrect, but it rolls off the keyboard more easily than “the increase in CO2 concentrations is equivalent to 60% of emissions”. And indeed the term is commonly used in the scientific literature.
Anyway, we’ve seen what FAR had to say about CO2 emissions and concentrations. Now let’s see what nature said.
Calculations regarding the real world
Here I use two sources on emissions:
- BP’s Energy Review 2019, which has data up to 2018.
- Emission estimates from the Lawrence Berkeley National Laboratory. These are only available until 2014.
BP counts only emissions from fossil fuel combustion: the burning of petroleum, natural gas, other hydrocarbons, and coal. The two sources are in very close agreement as far as emissions from fossil fuel combustion are concerned: for the 1991-2014 period, LBNL’s figures are 1% higher than BP’s. The LBNL numbers also include cement manufacturing, because the chemical reaction necessary for producing cement releases CO2; I couldn’t find a similarly authoritative source with more recent data for cement.
There is also the issue of flaring, or burning of natural gas by the oil-and-gas industry itself; these emissions are included in LBNL’s total. BP’s report does not feature the word “flaring”, and it seems unlikely they would be included, because BP’s method for arriving at global estimates of emissions is by aggregating national-level data on fossil fuel consumption. Now, I’ll admit I haven’t emailed every country’s energy statistics agency to be sure of the issue, but flared gas is by definition gas that did not reach energy markets; it’s hard to see why national agencies would include this in their “consumption” numbers, and many countries would have trouble even knowing how much gas is being flared. For what it’s worth, according to LBNL’s estimate flaring makes up less than 1% of global CO2 emissions.
For concentrations, I use data from the Mauna Loa Observatory. CO2 concentration in 1990 was 354.39ppm, and by 2014 this had grown to 398.65 (an increase of 44.26ppm). By 2018, concentrations had reached a level of 408.52 ppm, which meant an increase of 54.13 ppm since 1990.
It follows that the airborne fraction according to these estimates was:
- In 1991-2014, emissions per LBNL were 182.9GtC, which is equivalent to 85.88 ppm. Thus, the estimated airborne fraction was 44.26 / 85.88 = 51.5%
- In 1991-2018, emissions according to BP were 764GtCO2, equivalent to 97.82ppm. We get an airborne fraction of 54.13 / 97.82 = 55.3%
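Assuming the 2.13 GtC-per-ppm and 7.81 GtCO2-per-ppm conversions from earlier, these two airborne-fraction estimates work out as follows:

```python
# Real-world airborne fraction from two emission datasets (conversions as above).
mauna_loa_rise_1991_2014 = 44.26   # ppm, 1990 -> 2014
mauna_loa_rise_1991_2018 = 54.13   # ppm, 1990 -> 2018

lbnl_emissions_ppm = 182.9 / 2.13  # GtC -> ppm, ~85.9
bp_emissions_ppm = 764 / 7.81      # GtCO2 -> ppm, ~97.8

print(round(mauna_loa_rise_1991_2014 / lbnl_emissions_ppm * 100, 1))  # ~51.5
print(round(mauna_loa_rise_1991_2018 / bp_emissions_ppm * 100, 1))    # ~55.3
```

Both figures sit well below the 60.8% implied by FAR’s Business-as-usual scenario.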
Unfortunately, there is a kind of emission that is counted by neither LBNL nor BP. So total emissions have necessarily been higher than estimated above, and the real airborne fraction has been lower – which is what the next section is about.
Comparison of FAR with observations
This comparison has to start with two words: land use.
Remember what we said about the airborne fraction of CO2: it’s simply the increase in concentrations over a given period, divided by the emissions that took place over that period. If you emit 10 ppm and concentrations increase by 6ppm, then the airborne fraction is 60%. But if you made a mistake in estimating emissions and those had been 12ppm, then the airborne fraction in reality would be 50%.
This is an issue because, while we know concentrations with extreme accuracy, we don’t know emissions nearly that well. In particular, there is great uncertainty around emissions from land use: carbon released and stored due to tree-cutting, agriculture, etc. The IPCC itself acknowledged in FAR that estimates of these emissions were hazy; on page 13 it provided the following emission estimates for the 1980-89 period, expressed in GtC per year:
- Emissions from fossil fuels: 5.4 ± 0.5
- Emissions from deforestation and land use: 1.6 ± 1.0
So, even though emissions from fossil fuels were believed to be three-and-a-half times higher than those from land use, in absolute terms the uncertainty around land use emissions was double that around fossil fuels.
(FAR didn’t break down emissions from cement; these were a smaller share of total emissions in 1990 than today, and presumably were lumped in with fossil fuels. By the way, I believe the confidence intervals reflect a 95% probability, but haven’t found any text in the report actually spelling that out).
Perhaps there was great uncertainty around land-use emissions back in 1990, but surely it has since been reduced? Well, the IPCC’s Assessment Report 5 (AR5) is a bit old now (it was published in 2013), but by then uncertainty didn’t look to have been reduced much. More specifically, Table 6.1 of the report gives a 90% confidence interval for CO2 emissions from land use between 1980 and 2011. And the confidence interval is the same in every period: ± 0.8GtC/year.
Still, it’s possible to make some comparisons. Let’s go first with LBNL: for 1991-2014, emissions according to FAR’s Business-as-usual scenario would be 196.91GtC, which is 14.17GtC more than LBNL’s numbers show. In other words: if real-world land use emissions over the period had been 14.17GtC, then emissions according to FAR would have been the same as according to LBNL. That’s only 0.6GtC/year, which is well below AR5’s best estimate of land use emissions (1.5GtC/year in the 1990s, and about 1GtC/year in the 2000s).
For BP, emissions of 764.8GtCO2 convert to 208.58GtC. To this figure, at a minimum, we’d have to add cement emissions: for 1991-2014 these were 7.46GtC. By 2014 annual emissions from cement were well above 0.5GtC, so even a conservative estimate would put the additional emissions through 2018 at 2GtC, or 9.46GtC in total. This would mean BP’s figures, when adding cement production, give a total of 218.04GtC. I don’t consider flaring here, but according to LBNL those emissions were only about 1GtC.
Therefore BP’s fossil-fuel-plus-cement emissions would be 19.57GtC lower than the figure for FAR’s Business-as-usual scenario (237.61GtC). For BP’s emissions to have matched FAR’s, real-world land-use emissions would have needed to average 0.7 GtC/year. Again, it seems real-world land-use emissions exceeded this rate; indeed the figures from AR5’s Figure 6.1 suggest land-use emissions for 1991-2011 alone were around 25GtC. But just to be clear: it is only likely that real-world emissions exceeded FAR’s Business-as-usual scenario. The uncertainty in land-use emissions means one can’t be sure of that.
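The implied land-use rates can be checked directly. The cumulative gaps below are the ones derived above, and the year counts (24 for 1991-2014, 28 for 1991-2018) are simply the comparison windows:

```python
# Land-use emissions needed for observed totals to match FAR's scenario (GtC).
bp_fossil_gtc = 208.58          # 764.8 GtCO2 converted to GtC
cement_to_2014_gtc = 7.46
cement_2015_2018_gtc = 2.0      # conservative guess, as in the text
bp_total_gtc = bp_fossil_gtc + cement_to_2014_gtc + cement_2015_2018_gtc  # 218.04

far_bau_gtc = 237.61            # FAR Business-as-usual, 1991-2018
gap_vs_bp = far_bau_gtc - bp_total_gtc   # ~19.57 GtC over 28 years
gap_vs_lbnl = 14.17                      # GtC, 1991-2014, from the text

print(round(gap_vs_lbnl / 24, 1))   # ~0.6 GtC/year
print(round(gap_vs_bp / 28, 1))     # ~0.7 GtC/year
```

Both implied rates are below AR5’s best estimates of real-world land-use emissions, which is why FAR’s total emissions were probably lower than reality’s.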
I’ll conclude this section by pointing out that FAR didn’t break down how many tons of CO2 would come from changes in land use as opposed to fossil fuel consumption, but its description of the Business-as-usual scenario says “deforestation continues until the tropical forests are depleted”. While this statement isn’t quantitative, it seems FAR did not expect the apparent decline in deforestation rates seen since the 1990s. If emissions from land use were lower than expected by FAR’s authors, yet total emissions appear to have been higher, the only possible conclusion is that emissions from fossil fuels and cement were greater than FAR expected.
The First Assessment Report greatly overestimated the airborne fraction of CO2
The report mentions the airborne fraction only a couple of times:
- For the period from 1850 to 1986, airborne fraction was estimated at 41 ± 6%
- For 1980-89, its estimate is 48 ± 8%
So according to the IPCC itself, the airborne fraction of CO2 in observations at the time of the report’s publication was 48%, with a confidence interval going no higher than 56%. But the forecast for the decades immediately following the report implied a fraction of 60 or 61%. There is no explanation or even mention of this discrepancy in the report; the closest the IPCC came is this line:
“In model simulations of the past CO2 increase using estimated emissions from fossil fuels and deforestation it has generally been found that the simulated increase is larger than that actually observed”
Further evidence of FAR’s over-estimate of the airborne fraction comes from looking at Scenario B. Under this projection, CO2 emissions would slightly decline from 1990 on, and then make a likewise slight recovery; in all, annual emissions over 1991-2018 would be on average lower than in 1990. But even under this scenario CO2 concentrations would reach 401 ppm by 2018, compared with 408.5ppm in reality and 422ppm in the Business-as-usual scenario.
So real-world CO2 emissions were probably higher than under the IPCC’s highest-emissions scenario, yet concentrations ended up closer to a different scenario in which emissions declined from their 1990 level.
The error in the IPCC’s forecast of methane concentrations was enormous
In this case the calculations I’ve done are rougher than for CO2, but you’ll see it doesn’t really matter. This chart is from FAR’s Summary for Policymakers, Figure 5:
From a 1990 level just above 1700 parts per billion (ppb), concentrations reach about 2500 ppb by 2018. Even in Scenario B methane reaches 2050 ppb by that year. In the real world concentrations were only 1850 ppb. In other words:
- The increase in concentrations in Scenario B was about two-and-a-half times larger than in reality
- For Scenario A, the concentration increase was five or six times bigger than in the real world
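With the approximate levels above (I use a round 1,700 ppb for 1990, so the ratios are necessarily rough), the comparison is:

```python
# Methane concentration rises, 1990-2018, in ppb (all digitized/approximate).
level_1990 = 1700
observed_2018 = 1850
scenario_a_2018 = 2500
scenario_b_2018 = 2050

observed_rise = observed_2018 - level_1990        # ~150 ppb
ratio_a = (scenario_a_2018 - level_1990) / observed_rise
ratio_b = (scenario_b_2018 - level_1990) / observed_rise
print(round(ratio_a, 1))  # ~5.3: Scenario A's rise, five-to-six times reality
print(round(ratio_b, 1))  # ~2.3: Scenario B's rise, roughly two-and-a-half times
```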
The mismatch arose because methane concentrations were growing very quickly in the 1980s, though a slowdown was already apparent; this growth slowed further in the 1990s, and essentially stopped in the early 2000s. Since 2006 or so methane concentrations have been growing again, but at nowhere near the rates forecasted by the IPCC.
Readers may be wondering if perhaps FAR’s projections of methane emissions were very extravagant. Not so: the expected growth in yearly emissions between 1990 and 2018 was about 30%, far less than for CO2. See Figure A.2(b), from FAR’s Annex, page 331:
There’s an obvious reason the methane miss is even more of a head-scratcher. One of the main sources of methane is the fossil fuel industry: methane leaks out of coal mines, gas fields, etc. But fossil fuel consumption grew very quickly during the forecast period – indeed faster than the IPCC expected, as we saw.
It’s also interesting that the differences between emission scenarios were smaller for methane than for CO2. This may reflect a view on the part of the IPCC (which I consider reasonable) that methane emissions are less actionable than those of CO2. If you want to cut CO2 emissions, you burn less fossil fuel: difficult, yet simple. If by contrast you want to reduce methane emissions, it probably helps to reduce fossil fuel consumption, but there are also significant methane emissions from cattle, landfills, rice agriculture, and other sources; even with all the uncertainty around total methane emissions, more or less everybody agrees that non-fossil-fuel emissions are a more important source for methane than for CO2. And it’s not clear how to measure non-fossil-fuel emissions, so it’s far more difficult to act on them.
CO2 and methane appear to account for most of FAR’s over-estimate of forcings
Disclosure: this is the most speculative section of the article. But as with land-use emissions before, it’s a case in which one can make some inferences even with incomplete data.
Let’s start with a paper by Zeke Hausfather and three co-authors; I hope the co-authors don’t feel slighted – I will refer simply to “Hausfather” for short.
Hausfather sets out to answer a question: how well have projections from old climate models done, when accounting for the differences between real-world forcings and projected forcings? This is indeed a very good question: perhaps the IPCC back in 1990 projected more atmospheric warming than has actually happened only because its forecast of forcing was too aggressive. Perhaps the IPCC’s estimates of climate sensitivity, which is to say how much air temperature increases in response to a given level of radiative forcing, were spot on.
(Although Hausfather’s paper focuses on atmospheric temperature increase, the over-projection in sea level rise has been perhaps worse. FAR’s Business-as-usual scenario expected 20 cm of sea level rise between 1990 and 2030, and the result in the real world is looking like it will be about 13 cm).
Looking at the paper’s Figure 2, there are three cases in which climate models made too-warm projections, yet after accounting for differences in realized-versus-expected forcing this effect disappears; the climate models appear to have erred on the warm side because they assumed excessively high forcing. Of the three cases, the IPCC’s 1990 report has arguably had the biggest impact on policy and scientific discussions. And for FAR, the authors estimate (Figure 1) that forecasted forcing was 55% greater than realized: the trend is 0.61 watts per square meter per decade, versus 0.39 in reality. Over the 1990-2017 period, the difference in trends adds up to 0.59 watts per square meter.
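The paper’s trend figures can be turned into that cumulative difference with a one-liner (27 years = 2.7 decades):

```python
# Forcing trends from Hausfather et al.'s Figure 1, in W/m2 per decade.
far_trend, observed_trend = 0.61, 0.39
decades_1990_2017 = 2.7

overshoot_pct = (far_trend / observed_trend - 1) * 100   # ~56%, quoted as 55%
cumulative_gap = (far_trend - observed_trend) * decades_1990_2017
print(round(cumulative_gap, 2))   # ~0.59 W/m2
```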
Now, there is a lot to digest in the paper, and I hope other researchers dig through the numbers as carefully as possible. I’m just going to assume the authors’ calculations of forcing and temperature increase are correct, but I want to mention why a calculation like this (comparing real-world forcings with the forcings expected by a 1990 document) is a minefield. Even if we restrict ourselves to greenhouse gases, ignoring harder-to-quantify forcing agents such as aerosols, there are at least three issues that make an apples-to-apples comparison difficult. (Hausfather’s Supplementary Information seems to indicate they didn’t account for any of this; they simply took the raw forcing values from FAR.)
First, some greenhouse gases simply weren’t considered in old projections of climate change. The most notable case in FAR may be tropospheric ozone. According to the estimate of Lewis & Curry (2018), forcing from this gas increased by 0.067w/m2 between 1990 and 2016, the last year for which they offer estimates (over the last decade of data forcing was still rising by about 0.002w/m2/year). Just to be sure, you can check Figure 2.4 in FAR (page 56), as well as Table 2.7 (page 57). These numbers do not include tropospheric ozone, but you’ll see the sum of the different greenhouse gases featured equals the total greenhouse forcing expected in the different scenarios. The IPCC did not account for tropospheric ozone at all.
Second, the classification of forcings is somewhat subjective and changes over time. For example, the depletion of stratospheric ozone, colloquially known as the ‘ozone hole’, has a cooling effect (a negative forcing). So, when you see an estimate of the forcing of CFCs and similar gases, you have to ask: is it a gross figure, looking at CFCs only as greenhouse gases? Or is it a net figure, accounting for both their greenhouse effect and their impact on the ozone layer? In modern studies stratospheric ozone has normally been accounted for as a separate forcing, but I’m not sure how FAR did it (no, I haven’t read the whole report).
Finally, even when greenhouse gases were considered and their effects had a more-or-less-agreed classification, our estimates of their effect on the Earth’s radiative budget change over time. For the best-understood forcing agent, CO2, FAR estimated a forcing of 4 watts/m2 if atmospheric concentrations doubled (the forcing from CO2 is approximately the same each time concentration doubles). In 2013, the IPCC’s Assessment Report 5 estimated 3.7w/m2, and now some studies say it’s actually 3.8w/m2. These differences may seem minor, but they’re yet another way the calculation can go wrong. And for smaller forcing agents the situation is worse. Methane forcing, for example, suffered a major revision just three years ago.
Is there a way around the watts-per-square-meter madness? Yes. While I previously described climate sensitivity as the response of atmospheric temperatures to an increase in forcing, in practice climate models estimate it as the response to an increase in CO2 concentrations, and this is also the way sensitivity is usually expressed in studies estimating its value in the real world. Imagine the forcing from a doubling of atmospheric CO2 is 3.8w/m2 in the real world, but some climate model, for whatever reason, produces a value of 3w/m2. Obviously, then, what we’re interested in is not how much warming we’ll get per w/m2, but how much warming we’ll get from a doubling of CO2.
Thus, for example, the IPCC’s Business-as-usual forecast of 9.90 w/m2 in greenhouse forcing by 2100 (from a 1990 baseline) could instead be expressed as equivalent to 2.475 doublings of CO2 (the result of dividing 9.90 by 4). Hausfather’s paper, or a follow-up, could then apply this to all models. Just using some made-up numbers as an illustration, it may be that FAR’s Business-as-usual forecast expected forcing between 1990 and 2017 equivalent to 0.4 doublings of CO2, while in reality the forcing was equivalent to 0.26 doublings. This would still mean FAR overshot real forcings by roughly 55%; however, expressed this way the comparison is easier to interpret than a raw w/m2 figure.
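As a sketch of this suggested unit change (FAR’s F_2x of 4 w/m2 is the only non-made-up input here; the 0.4 and 0.26 doublings are the illustrative figures from the text):

```python
# Converting forcing into "equivalent CO2 doublings" using FAR's own F_2x.
F2X_FAR = 4.0               # FAR: W/m2 per doubling of CO2
bau_forcing_2100 = 9.90     # FAR Business-as-usual greenhouse forcing, 1990 baseline
print(bau_forcing_2100 / F2X_FAR)   # 2.475 doublings

# Illustrative (made-up) 1990-2017 figures from the text:
far_doublings, real_doublings = 0.4, 0.26
print(round((far_doublings / real_doublings - 1) * 100))  # ~54% overshoot
```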
Now, even with all these caveats, one can make some statements. First, there are seven greenhouse gases counted by FAR in its scenarios, but one of them (stratospheric water vapor) is created through the decay of another (methane). I haven’t checked whether water vapor forcing according to FAR was greater than in the real world, but if that happened the blame lies with FAR’s inaccurate methane forecast; in any case stratospheric H2O is a small forcing agent and did not play a major role in FAR’s forecasts.
Then there are three gases regulated by the Montreal Protocol, which I will consider together: CFC-11, CFC-12, and HCFC-22. That leaves us with four sources to be considered: CO2, methane, N2O, and Montreal Protocol gases. In previous sections of the article we already covered CO2 and methane, so let’s turn to the two remaining sources of greenhouse forcing. I use 2017 as the finishing year, for comparison with Hausfather’s paper. The figures for real-world concentrations and forcings come from NOAA’s Annual Greenhouse Gas Index (AGGI).
For N2O, Figure A.3 in FAR’s page 333 shows concentrations rising from about 307ppb in 1990 to 334 ppb by 2017. This is close to the level that was observed (2018 concentrations averaged about 332 ppb). And even a big deviation in the forecast of N2O concentration wouldn’t have a major effect on forcing; FAR’s Business-as-usual scenario expected forcing of only about 0.036w/m2 per decade, which would mean roughly 0.1w/m2 for the whole 1990-2017 period. Deviations in the N2O forecast may have accounted for about 0.01w/m2 of the error in FAR’s forcing projection – surely there’s no need to keep going on about this gas.
Finally, we have Montreal Protocol gases and their replacements: CFCs, HCFCs, and in recent years HFCs. To get a sense of their forcing effect in the real world, I check NOAA’s AGGI and sum the columns for CFC-11, CFC-12, and the 15 minor greenhouse gases (almost all of that is HCFCs and HFCs). The forcing thus aggregated rises from 0.284w/m2 in 1990 to 0.344w/m2 in 2017; in other words, the forcing increase from these gases between those years was 0.06 w/m2.
Here’s where Hausfather and co-authors have a point: the world really did emit far smaller quantities of CFCs and HCFCs than FAR’s Business-as-usual projection assumed. In FAR’s Table 2.7 (page 57), the aggregated forcing of CFC-11, CFC-12 and HCFC-22 rises by 0.24w/m2 between 2000 and 2025. And the IPCC expected accelerating growth: the sum of the forcings from these three gases would then increase by 0.28w/m2 between 2025 and 2050.
A rough calculation of what this implies for forcing between 1990 and 2017 now follows. In 2000-2025 FAR expected Montreal Protocol gases to add 0.0096 w/m2/year of forcing; multiplied by the 27 years we’re analysing, that would mean 0.259w/m2. However, growth in forcing was supposed to be slower over the first period than later, as we’ve seen; Table 2.6 in FAR’s page 54 also implies smaller growth in 1990-2000 than after 2000. So I round the previously-calculated figure down to 0.25w/m2; this is probably higher than the increase FAR was actually forecasting, but I cannot realistically make an estimate down to the last hundredth of a watt, so it will have to do.
If FAR expected 1990-2017 forcing from Montreal Protocol gases of 0.25w/m2, that would mean the difference between the real world and FAR’s Scenario A was 0.25 – 0.06 = 0.19w/m2. I haven’t accounted here for these gases’ effect on stratospheric ozone, as it wasn’t clear whether that effect was already included in FAR’s numbers. If stratospheric ozone depletion hadn’t been accounted for, then the deviation between FAR’s numbers and reality would be smaller.
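The rough Montreal-gas arithmetic from the last two paragraphs, gathered in one place:

```python
# FAR's implied 1990-2017 forcing from Montreal Protocol gases vs NOAA's AGGI.
rise_2000_2025 = 0.24                        # W/m2 over 25 years, FAR Table 2.7
naive_1990_2017 = rise_2000_2025 / 25 * 27   # ~0.259 W/m2 at the 2000-2025 rate
far_estimate = 0.25                          # rounded down: growth was slower pre-2000

observed = 0.344 - 0.284                     # NOAA AGGI, 1990 -> 2017, ~0.06 W/m2
print(round(naive_1990_2017, 3))             # ~0.259
print(round(far_estimate - observed, 2))     # ~0.19 W/m2 of over-forecast
```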
Readers who have made it to this part of the article probably want a summary, so here it goes:
- Hausfather estimates that FAR’s Business-as-usual scenario over-projected forcings for the 1990-2017 period by 55%. This would mean a difference of 0.59 w/m2 between FAR and reality.
- Lower-than-expected concentrations of Montreal Protocol gases explain about 0.19 w/m2 of the difference. With the big caveat that Montreal Protocol accounting is a mess of CFCs, HCFCs, HFCs, stratospheric ozone, and perhaps other things I’m not even aware of.
- FAR didn’t account for tropospheric ozone, and this ‘unexplains’ about 0.07 w/m2. So there’s still 0.45-0.5 w/m2 of forcing overshoot coming from something else, if Hausfather’s numbers are correct.
- N2O is irrelevant in these numbers
- CO2 concentration was significantly over-forecasted by the IPCC, and that of methane grossly so. It’s safe to assume that methane and CO2 account for most or all of the remaining difference between FAR’s projections and reality.
Again, this is a rough calculation. As mentioned before, an exact calculation would have to take into account many issues I didn’t consider here. I really hope Hausfather’s paper is the beginning of a trend in properly evaluating climate models of the past, and that means properly accounting for (and documenting) how expected forcings and actual forcings differed.
By the way: this doesn’t mean climate action failed
There is a tendency to say that, since emissions of CO2 and other greenhouse gases are increasing, policies intended to reduce or mitigate emissions have been a failure. The problem with such an inference is obvious: we don’t know whether emissions would have been even higher in the absence of emissions reductions policies. Emissions may grow very quickly in an economic boom, even if emission-mitigation policies are effective; on the other hand, even with no policies at all, emissions obviously decline in economic downturns. Looking at the metric tons of greenhouse gases emitted is not enough.
Dealing specifically with the IPCC’s First Assessment Report, its emission scenarios used a common assumption about future economic and population growth; however, the description is so brief and vague as to be useless.
“Population was assumed to approach 10.5 billion in the second half of the next century. Economic growth was assumed to be 2-3% annually in the coming decade in the OECD countries and 3-5 % in the Eastern European and developing countries. The economic growth levels were assumed to decrease thereafter.”
So it’s impossible to say how much in emissions FAR expected per unit of economic or population growth. The question ‘are climate policies effective?’ can’t be answered by FAR.
The IPCC’s First Assessment Report greatly overestimated future rates of atmospheric warming and sea level rise in its Business-as-usual scenario. This projection also overestimated rates of radiative forcing from greenhouse gases. A major part of the mis-estimation of greenhouse forcing happened because the world clamped down on CFCs and HCFCs much more quickly than its projections assumed. This was not a mistake of climate science, but simply a failure to foresee changes in human behaviour.
However, the IPCC also made other errors or omissions, which went the other way: they tended to reduce forecasted forcing and warming. Its Business-as-usual scenario featured CO2 emissions probably lower than those that have actually taken place, and its forcing estimates didn’t include tropospheric ozone.
This means that the bulk of the error in FAR’s forecast stems from two sources:
- The fraction of CO2 emissions that remained in the atmosphere was much higher in FAR’s scenario than has been observed, either at the time of the report’s publication or since then. There are uncertainties around the real-world airborne fraction, but the IPCC’s figure of 61% is about one-third higher than emission estimates suggest. As a result, CO2 concentrations grew 25% more in FAR’s Business-as-usual projection than in the real world.
- The methane forecast was hopeless: methane concentrations in FAR’s Business-as-usual scenario grew five or six times more than has been observed. It’s still not clear where exactly the science went wrong, but a deviation of this size cannot be blamed on some massive-yet-imperceptible change in human behaviour.
These are purely problems of inadequate scientific knowledge, or a failure to apply scientific knowledge in climate projections. Perhaps by learning about the mistakes of the past we can create a better future.
This Google Drive folder contains three files:
- BP’s Energy Review 2019 spreadsheet (original document and general website)
- NOAA’s data on CO2 concentrations from the Mauna Loa observatory (original document)
- My own Excel file with all the calculations. This includes the raw digitized figures on CO2 emissions and concentrations from the IPCC’S First Assessment Report.
The emission numbers from LBNL are available here. I couldn’t figure out how to download a file with the data, so these figures are included in my spreadsheet.
NOAA’s annual greenhouse gas index (AGGI) is here. For comparisons of methane and N2O concentrations in the real world with the IPCC’s forecasts, I used Figure 2.
The IPCC’s First Assessment Report, or specifically the part of the report by Working Group 1 (which dealt with the physical science of climate change), is here. The corresponding section of Assessment Report 5 is here.
Moderation note: As with all guest posts, please keep your comments civil and relevant.
“The methane forecast was hopeless: methane concentrations in FAR’s Business-as-usual scenario grew five or six times more than has been observed. It’s still not clear where exactly the science went wrong.”
Not a science. A pure speculation.
First of all I want to thank Judith Curry for publishing this piece. Now, an article is never really ‘complete’ – there’s always something you realize after publication. And indeed shortly after writing the article it dawned on me that Hausfather’s paper makes a pretty big mistake, at least regarding the First Assessment Report.
Hausfather compares the w/m2 of forcing expected by old climate models with modern estimates of forcing. The key line from his paper is:
“We express implied TCR with units of temperature using a fixed value of F_2x = 3.7 W/m2”
F_2x is the forcing that arises from a doubling of atmospheric CO2. So Hausfather calculates the implied TCR of different models using a value of 3.7 w/m2 for a doubling of CO2… but many of these models used different values!
Think about a world in which the only forcing agent is CO2, and a climate model assumes the exact same increase in concentrations as happens in the real world. If the climate model has a forcing value for a doubling of CO2 of, let’s say, 5w/m2, then Hausfather’s method will interpret this as “the climate model used forcings about one third too high”, because he’s simply dividing 5 by 3.7. Of course this forcing “overestimation” is strictly true if you’re talking about w/m2, but then it makes no sense to calculate an “implied TCR” that assumes F_2x = 3.7w/m2.
This matters because FAR expected a doubling of CO2 to result in 4w/m2 of forcing, not 3.7w/m2. So Hausfather’s method would end up saying that FAR “over-estimated” forcing by 8.1% even if FAR’s concentrations of CO2 absolutely matched reality (because 4 / 3.7 = 1.081). FAR’s “overestimation” of forcings is to some extent an artifact. I’m not totally sure of the right way to calculate this, but my feeling is that the real over-estimate of forcings in FAR is not 55%, but 1.55 / 1.081 = 1.43, i.e. about 43%. Or close to that value. (It may be mentioned that 40-45% is also the rate by which FAR over-estimated temperature change.)
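The F_2x adjustment described here is simple arithmetic, sketched below with the numbers quoted in this comment:

```python
# Effect of evaluating FAR's forcing against a different F_2x
# (numbers as quoted in the comment above).
F2X_FAR = 4.0         # w/m2 per CO2 doubling assumed in FAR
F2X_HAUSFATHER = 3.7  # w/m2 used by Hausfather to back out implied TCR

artifact = F2X_FAR / F2X_HAUSFATHER      # ~1.081: an 8.1% apparent over-forcing
raw_overestimate = 1.55                  # FAR forcing / modern estimate, as reported
adjusted = raw_overestimate / artifact   # ~1.43: the ~43% "real" over-estimate

print(round(artifact, 3), round(adjusted, 2))
```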
A more intuitive way to look at this, in my view, is to express forcing change as a percentage of each model’s F_2x. I’ve done the math with the forcings from Lewis&Curry 2018, as I’m more familiar with those than with Hausfather’s numbers; Lewis&Curry use 3.8w/m2 as the value of F_2x. I get similar results to the simple adjustment I did above; from 1990 to 2016, FAR appears to have over-projected forcing by 40 or 45%.
Just to be clear, the sum of methane, CO2 and Montreal Protocol over-estimates adds up to *more* than 40 or 45% of real-world forcing. The explanation is in the article: FAR over-estimated the forcing from these three agents, but it also excluded tropospheric ozone. And it ignored aerosols; according to Lewis&Curry’s 2018 figures, from 1990 to 2016 the combined aerosol+black carbon forcing increased by about 0.1w/m2 (the aerosol forcing is of course negative over the historical period, but it has become smaller since 1990, which means that since then it has acted as a positive forcing).
To be sure, figuring out the composition of FAR’s over-estimate is a challenge, due to changes in classification and grouping of forcings over time. Just to use a made-up example, the numbers could perhaps be: methane +25%, CO2 +15%, Montreal Protocol gases +15%, tropospheric ozone -5%, aerosols -10%. Total or net: +40%
Again, the numbers in the previous paragraph are *made up* for the purpose of illustration. My point is that, even though FAR over-estimated forcing from Montreal Protocol gases, it also under-estimated the positive forcing from tropospheric ozone and aerosols. So the *net* forcing over-estimate of 40-45% is perhaps entirely explained by its over-projection of CO2 and methane concentrations.
Thanks Alberto, an interesting article, though I admit I do not have the time or concentration span to deal with more than half that amount of writing.
What is the basis of that claim? Ozone depletion cools the lower stratosphere since it interacts with incoming solar. That implies that it has the opposite effect on the lower climate.
See fig 11 in my article here, with refs.
Also please note that the unit of power watt ( lower case w ) is abbreviated with a capital W. Despite citing sources where it is correct, you consistently get it wrong. It does not help your credibility if you don’t even know how to write the units.
About ozone of both kinds, see figure 8.7 of AR5 and the section around it.
Posted both here and at WUWT.
I’ll qualify the above statement. In FAR, Hausfather’s method is wrong because it takes forcing as stated by the report’s Figure 6; this is what the Supplementary Information says, and indeed I digitized Figure 6 and the raw forcing values correspond to what Hausfather says: just over 0.6W/m2/decade, or 1.64W/m2 from 1990 to 2017. This raw forcing arises from the model’s F_2x of 4W/m2, but Hausfather then calculates an implied TCR on the basis of F_2x = 3.7W/m2. So the model’s real TCR (at least for the years involved) will be 8.1% higher than what Hausfather reports.
In general, old climate models used a value of 4W/m2 for F_2x. However, this does not mean that the implied TCRs calculated for all these models are wrong. I’m going out now so I don’t have time to check, I’ll try to see this in more detail later, but off the top of my head some old models did NOT provide a chart of forcings, like FAR’s Figure 6. Rather, Hausfather calculated the forcings on the basis of CO2 concentrations projected by these models. If that’s the case, then the implied TCR calculated by Hausfather will be correct, because he’s being consistent in using the same value of F_2x both to calculate forcings and to estimate TCR.
We on this globe are doing something not done prior to the turn of this century. We are emitting gases from producing wind and power systems. How much is being introduced from this never-before-done activity? Are we chasing our tails?
The confidence limits for some of these estimates were +/- 40%. And this is dwarfed by model uncertainty.
“Generic behaviors for chaotic dynamical systems with dependent variables ξ(t) and η(t). (Left) Sensitive dependence. Small changes in initial or boundary conditions imply limited predictability with (Lyapunov) exponential growth in phase differences. (Right) Structural instability. Small changes in model formulation alter the long-time probability distribution function (PDF) (i.e., the attractor).” https://www.pnas.org/content/104/21/8709
There are however a whole new set of emission projections using much more sophisticated methods – the basis for AR6.
And a new class of Earth system models likely to feature prominently in AR7.
Haven’t checked my image links – again.
Reblogged this on Utopia, you are standing in it!.
I would love to see a similar article on the more recent IPCC reports.
Very nice article indeed and, as I may say, totally useless. Zeke Hausfather is complicating things unnecessarily and may have an agenda differing from morals and science. Basically the matter is simple: carbon dioxide rose from 3 particles to 4 particles per 10,000 in our atmosphere during the last 100 years. Carbon dioxide will only absorb energy from very limited parts of the frequency spectrum, which in a spectrometer show up as a couple of very small lines, visible to any amateur who is interested in this matter. So the absorption of energy by carbon dioxide is very limited and diminishes exponentially at higher concentrations, meaning more or less that at rising concentrations it will become negligible, and at 6 parts per 10,000 next to zero. As I have told many many times before, the title of climate scientist should only be given to persons who have a degree in thermodynamics, and the rest of the bunch should rid the world of all the garbage that was produced in the last 100 years.
“The HITRAN compilation, and its analogous database HITEMP (high-temperature spectroscopic absorption parameters), are developed at the Atomic and Molecular Physics Division, Harvard-Smithsonian Center for Astrophysics under the direction of Dr. Iouli E. Gordon.”
The real scientific question is why this forcing has produced no atmospheric warming.
This thermodynamics alone cannot answer. The climate system is simply not that simple. Unfortunately.
I have been interested in cloud feedback to sea surface temperature for a dozen years now. What is known from the 1950’s on without a doubt is that geophysical time series exhibit Hurst-Kolmogorov dynamics.
“By ‘Noah Effect’ we designate the observation that extreme precipitation can be very extreme indeed, and by ‘Joseph Effect’ the finding that a long period of unusual (high or low) precipitation can be extremely long. Current models of statistical hydrology cannot account for either effect and must be superseded.” https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/WR004i005p00909
But this cannot mean that anthropogenic greenhouse gas emissions do not change the planetary energy dynamic. Or that in our nonlinear world – with Hurst-Kolmogorov dynamics – that a planetary emergency is not just a threshold away.
This was written:
“and other factors that lie outside the field of physical science.”
All their stated factors lie outside the field of actual science.
Actual science is required to be skeptical, always question, they have none of that.
At the bottom of it all is the assumption that changes in atmospheric CO2 concentration are explained by fossil fuel emissions. This assumption underlies all of climate science. Yet, no evidence has been provided to support this assumption except for the statistical falsehood that the causation is supported by the fact that atmospheric CO2 has been going up during a time of fossil fuel emissions. Two specific issues with this assumption are as follows:
(1) when carbon cycle accounting is carried out with declared uncertainties in these flows taken into account, the carbon cycle balances with and without fossil fuel emissions, indicating that the system is unable to detect the presence of the relatively smaller flows of fossil fuel emissions
(2) If atmospheric CO2 concentration were responsive to fossil fuel emissions we would expect to find a detrended correlation between emissions and atmospheric composition. No such correlation is found in the data.
Therefore, the relationship between emissions and atmospheric composition and therefore the relationship between emissions and AGW climate change is an assumption of convenience with no supporting evidence.
There is of course the TCRE, transient climate response to cumulative emissions that shows a near perfect correlation between temperature and cumulative emissions but that correlation is spurious and the TCRE coefficient is illusory because the time series of the cumulative values of another time series has neither time scale nor degrees of freedom, and therefore contains no useful information.
And yet, the climate action plans of climate science are made with these illusory parameters in the form of carbon budgets.
It appears that climate science sits on a foundation of bad statistics constructed by an army of climate scientists armed with an inadequate education in statistics.
There is about 4 GtC/yr accumulating in the atmosphere. We are emitting some 9 GtC/yr. It means that oceans and terrestrial systems are net sinks. Over geological time magma has contributed CO2 to the atmosphere – estimated currently at some 600 MtC/yr, and estimates, if not emissions, are rising – to offset biological and slow chemical sequestration. Keeping ocean and atmospheric temperatures higher than would otherwise be the case.
Elementary math and no need for much in the way of advanced biogeochemical cycling expertise.
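The “elementary math” here is a one-line mass balance; sketched in code for concreteness (the 4 and 9 GtC/yr figures are the round numbers quoted in the comment):

```python
# Simple atmospheric carbon mass balance (round numbers from the comment above).
emissions_gtc_per_yr = 9.0     # human emissions
accumulation_gtc_per_yr = 4.0  # observed atmospheric increase

# Whatever is emitted but not accumulating must be absorbed somewhere:
net_sink = emissions_gtc_per_yr - accumulation_gtc_per_yr  # oceans + land, combined
airborne_fraction = accumulation_gtc_per_yr / emissions_gtc_per_yr

print(net_sink, round(airborne_fraction, 2))
```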
“There is about 4 GtC/yr accumulating in the atmosphere. We are emitting some 9 GtC/yr. It means that oceans and terrestrial systems are net sinks.”
This statement assumes constant natural emissions but a thorough analysis of the evolution of CO2 in the atmosphere (Salby, Berry, Harde) show that most of the rise in concentration is natural. The only evidence for nearly constant natural emissions are ice cores and there is ample evidence that those data are questionable for quantitative analysis.
DMA, even if we assume ice cores are correct (fwiw, they do cross corroborate each other), we can’t rule out the other human activity — deforestation. From 1750 to 1900 CO2 rose 20 ppm according to ice cores whilst cumulative human emissions were only 5 ppm over the same period. Whatever we were doing over that period of time, you can probably at least double or triple it over the years since 1900. Land use could easily account for half the rise (even more if there’s a temperature component)…
“oceans and terrestrial systems are net sinks.”
Concerning the suggested implication for increasing carbon dioxide,
Professor Salby destroyed this myth in his lecture in Hamburg.
It is no myth that oceans and land are net sinks. And that’s not even what Salby says.
@chaamjamal You’re right, the other guy isn’t.
There is no causal relationship between MME and CO2.
Global CO2 Emissions from Fossil Fuels, Cement Production, and Gas Flaring, 1751-2014, Boden/Marland
This answers the research question of whether MME>Natural. The answer is Natural >>>>MME.
Back to the drawing board for Alberto & Co.
Very interesting analysis, sir.
I will visit your site and take a closer look and maybe, with your permission, use a link to connect our similar analyses.
Thank you chaamjamal; I’m on the job now of getting my site going, so I’ll let you know when it’s ready.
@chaamjamal, this comment wouldn’t post on your site. My log-in name was in red on yours, here it’s black…
Marine heatwaves, corals, CO2, what a coincidence!
Presented last week at the 2020 Sun-Climate Symposium.
The elephant in the room is that oceans and terrestrial systems are net sinks of CO2.
The elephant you speak of is the ignorance of how Henry’s Law works in the real world, and of the effect of the sun’s annual insolation cycle that drives tropical SST and ML CO2 like clockwork:
Note the very high pCO2 positive flux within the 26C isotherm (and higher), and the zero flux region just outside of it, and the negative fluxes everywhere else. The highest May ML CO2 from OCO-2:
The ocean is a net source of CO2. The climate changes naturally as always, in very predictable ways.
All the scheming and computing of MME CO2 influence has led to widespread ignorance of the natural world.
We have both changing CO2 partial pressures – increasing CO2 in solution -and very small changes in ocean heat content – given the large heat capacity of oceans – decreasing CO2 in solution. The annual CO2 cycle quite obviously emerges from deciduous NH vegetation.
He neglects as well the CO2 gravity and biological pumps that have always meant that oceans are a net sink.
But it’s really a lot simpler. Less than half of human emissions is accumulating in the atmosphere. The rest has to go somewhere. And does it really matter where the molecule comes from when pooled in the atmosphere?
They are also huge sources. An order of magnitude larger than human sources. Nature cannot tell the difference between natural or fossil fuel derived CO2 so, according to Berry, the ratio in the atmosphere is the same as the ratio in the emissions. This results in about 18 PPM from humans and 392 from nature.
Oh, Robert, say it isn’t so(!) This is the second time that i’ve had to violate my self imposed ban on weekend blogging, both on account of you. (how dare you)…
All that someone who’s claiming a natural rise is saying is that ACO2 is nearly sinking out completely and the rise is caused by natural sources. If that were the case, then nature would be currently adding a net 2ppmv to the atmosphere and 4ppmv of ACO2 would be sinking into nature. Therefore, nature would be a net sink of CO2 at -2ppmv even though the rise is natural. i don’t know how you fell for that one. (yer waaay to good for that) There’s some dingbat named dikran marsupial pushing this very same junk ’round these climate blogs. The only way he’s able to maintain his argument is through insolence. (his selfie even looks like a long haired troll)
i haven’t read your reply to my yesterday evening’s comment. i might stick around this thread because it’s new and i like the subject. i’ll still read your reply come monday though. (i guess this weekend i’ll have a partial ban) Looking forward to reading it.
The CO2 increase in the atmosphere is equal to sources minus sinks. Nothing special – just emissions and CO2 concentration measurement.
“But it’s really a lot simpler. Less than half of human emissions is accumulating in the atmosphere. The rest has to go somewhere. And does it really matter where the molecule comes from when pooled in the atmosphere?”
Robert, i think it does matter. If the problem is deforestation and not emissions, then the solution is to plant trees (and not cut emissions). Here’s my guess as to what’s going on:
Assuming soil and moisture are good, a large tree taken down is quickly replaced with weeds, grasses, small shrubs and trees, etc. Much of my forest is old growth, and there’s very little growing on the forest floor.
Carbon dioxide emissions from fossil fuels and cement production – from 1750 to 2011 – was about 365 billion metric tonnes as carbon (GtC), with another 180 GtC from deforestation and agriculture. Of this 545 GtC, about 240 GtC (44%) had accumulated in the atmosphere, 155 GtC (28%) had been taken up in the oceans and 150 GtC (28%) had accumulated in terrestrial ecosystems.
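Those 1750–2011 budget figures can be checked for internal consistency; a quick sketch using only the numbers quoted above:

```python
# Consistency check of the 1750-2011 carbon budget figures quoted above (GtC).
fossil_and_cement = 365.0
land_use = 180.0
total = fossil_and_cement + land_use  # should be 545 GtC

atmosphere, oceans, land_sink = 240.0, 155.0, 150.0
assert atmosphere + oceans + land_sink == total  # the partition must sum to the total

# Shares of the cumulative emissions, as whole percentages:
shares = {k: round(v / total * 100) for k, v in
          {"atmosphere": atmosphere, "oceans": oceans, "land": land_sink}.items()}
print(shares)
```

The shares reproduce the quoted 44% / 28% / 28% split, so the quoted partition is at least arithmetically consistent.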
I’m assuming you can hold more than one idea in your head at the same time Fonz. Not so sure about JCH.
We have rainforest, tall forest and open woodland. But an opening in a forest will transition through stages until the canopy closes again. It’s called ecological succession. Needs some perspective. Did you have any point?
Robert, nice little discussion about inflation over there. Don’t want to drag this thread too far OT*, so i’ll just say something brief as i won’t be going back to that post.
i’m less worried about bubbles for which central banks serve both as bubble & pin than about the bubble from excessive government spending. (the poverty created by the fed gives rise to both massive necessary entitlement spending & a wealth class that isn’t wealthy enough to pay for it)…
*thanx, dr. c.; robert if you want to reply do so over there — i’ll read but not reply
Robert, the first part of that video is very instructive. The low CO2 action nearer 90S vs the increasing action 30S and northward shows the southern ocean is too cold for outgassing most of the time, whereas the northern ocean is more responsive to tropical heat flux, partly because of land arrangements and basin size, and from atmospheric convection off the tropics flowing northward, where CO2 accumulates seasonally.
The annual CO2 cycle quite obviously emerges from deciduous NH vegetation.
That CO2 cycles from all natural sources should make it quite obvious natural CO2>>>MME, therefore the ML CO2 trend is obviously natural.
The highest pCO2 is over the tropical ocean, nearest the Americas, peaking in May. If the atmosphere is collecting and transporting the CO2 to the maximum ocean-air CO2 flux as you suggest instead of the opposite as I said, how does that happen? Please explain how the atmosphere concentrates all this NH CO2 there every year, against the gradient, peaking in May every year.
It’s all a bit convoluted and motivated. CO2 concentrations in the atmosphere peak in May and bottom out in October due to the NH growing season.
Robert I can understand why you make that argument. If plants start growing in the NH spring in April/May, the respiration through the year wouldn’t reach full potential until all the leaves are fully out on trees and crops and flowers and grasses, which happens close to May.
Since the plants continually take in CO2 all summer, it looks like the plants take out CO2 until the leaves and plants die and dry, sequestering the CO2 until October. The question then remains how does CO2 start rebounding just then and from what source(s), what replaces and adds to the total again until May if that’s the whole story?
How can MME fill that gap so precisely every year? I imagine your answer would be increased fossil fuel usage through the winter. If that’s your answer then it poses a few problems.
Implications of MME>>>Natural CO2
1) It would mean most of the biosphere younger than 60-70 years is composed of mostly carbon from MME. People, trees, animals, everything we eat now, is mostly made from MME, according to GHG theory.
2) It implies so much extra MME is still available for warming the atmosphere and ocean. I don’t know about you, but once I figured out the lower troposphere follows the ocean (UAH LT vs HadSST3) by 2 months with an offset of about -0.22C, I realized the ocean is warming the air, not the other way like GHG theory says.
So this leaves us with a ridiculously unsupportable capability for CO2, that it can be everywhere and do all this as a tiny constituent, like magic.
Do you really believe that all the trees and people younger than 60-70 years old and the food we and the animals eat are made of mostly carbon from MME, and all the agricultural product, and there’s still enough left over to accumulate in the atmosphere while warming the ocean first?
Get back to me once you have that sorted out. You don’t like it that I’m ‘motivated’ – too bad – you CO2 guys are far too unnaturally ‘motivated’ for me, so I have to be more motivated, because of Brandolini’s law –
From Wikipedia: the BS asymmetry principle (also known as Brandolini’s law), formulated by Alberto Brandolini, an Italian programmer, states that:
The amount of energy needed to refute BS is an order of magnitude bigger than to produce it.
No – the plants grow all spring and summer. Now that’s a surprise. And keep growing until it starts getting cold again. Another surprise it appears. But of course you still can’t get the simple idea that there is a pool of atmospheric CO2 – that there are nearly equal terrestrial and oceanic sinks and sources – slightly more sinks than sources -and that it doesn’t particularly matter what the source is once it is in the atmosphere. It’s just one big pool. Only that the net numbers must add up. Simple math – if much less is accumulating in the atmosphere than we emit – the rest must be going somewhere. We presume with the gravity and biological pumps – and greening – that both oceans and land are net sinks. And you can stick your Brandolini there as well.
I believe you didn’t read that very closely. I said ‘the leaves’ reach their maximum (trees esp) early, but yes the plant still grows, taking CO2 in (down) all summer – my point (‘growing’ – your point), so where’s the difference?
I explained your thinking whether you realize it or not, but you didn’t explain wherefrom the biosphere in October-May re-supplies the CO2 pool, which is what I asked you. I hope you’re not going to claim the dead leaves and grasses getting buried under NH snow at the end of the growing season can all decay and sequester their CO2 into the air such that they add to the total perfectly in time for the next May ML peak.
Experience shows the dead leaves from last year’s NH growth haven’t decomposed yet, since once the snow melts and it dries out, those oak and maple leaves in my yard won’t have disintegrated giving up their CO2. I’ll still get a backache from the fact that the mass of the leaves hasn’t reduced down to CO2, meaning any theory of seasonal mass CO2 replenishment from last year’s dead flora is dead Jim.
If you think I’m wrong I’d love to know where.
Anecdote is where I leave you.
“Plant vitality may be compromised if demand for non-structural carbohydrates (NSCs) exceeds supply over prolonged periods of time. Such an NSC imbalance may result in the reduction of growth, loss of reproductive capacity, or delayed recovery from stress. Under extreme circumstances, excessive NSC depletion may kill the plants [1–3]. During periods of limited photosynthesis, like winter dormancy or drought stress, trees depend solely on stored NSCs to maintain basic metabolic functions, produce defensive compounds, and retain cell turgor [4,5]. With amassing threats to plants due to climate change, knowing how environmental factors affect NSC demand is critical for agricultural and forest management, modeling climate impacts, and better understanding plant dormancy biology.”
Pingback: Analysis of a carbon forecast gone wrong: the case of the IPCC FAR | Climate Evidence
OT: Any comments on Ed Hawkins new temperature chart showing no (global) Medieval Warm Period? https://twitter.com/ed_hawkins/status/1222899505089040385
Without such interglacial warm and cold periods, I guess solar/cloud cycle models (of modern warming) are difficult to uphold?
Thanks! Hawkins’s chart uses the PAGES2K reconstruction, but I cannot really say anything beyond that – I don’t know enough about paleo.
Can’t erase something that is not established in the literature.
The Medieval Warm Period occurred. The Vikings settled in Greenland because the history and data records are true.
The southern hemisphere does not correlate with the northern hemisphere in lockstep. The Medieval warm period was not Global. Look at Greenland and Antarctic ice core data, the most recent ten thousand years indicate independent temperatures, regulated independently, but in the same bounds.
greenland is not the globe
Ice causes ice ages, ice extent increase correlates with colder, ice extent retreat correlates with warmer. THAT IS CAUSE AND NOT RESULT! If the continent is covered down to mid latitudes with ice, the ice causes colder. If the ice is removed from the mid latitudes that causes warmer. The ice extent is not result, it is cause, this is common sense.
Ice core data shows ice accumulation is much more in warmer times and much less in colder times. That explains how ice advances after warm times with more snowfall and that explains how ice retreats after cold periods.
Can you name the scientist(s) who did the studies that established the MWP was a global event?
That is nothing compared to the nonsense they modelled into their RCPs. First of all, and that is true “Kindergarten”, they derived CO2 sinks directly from emissions rather than atmospheric CO2 concentrations. And since emissions in this model take a turn every decade, the CO2 sink curve looks extremely cornered, jumping up and down. That is opposed to the very smooth CO2 concentration curve, of which it should actually be a function.
Then they insist CO2 sinks would sooner or later disappear. The idea is, the ocean would serve as a second reservoir for CO2 with about 2.5 times the size of the atmosphere. CO2 sinks would only work as long as the ocean is lagging behind the atmospheric CO2 concentrations (yes, then it’s concentration, not emissions again). And as soon as this 2nd reservoir is just as full as the first one (the atmosphere), you have no more CO2 sinks. Thereby they totally deny the increased uptake of CO2 by the biosphere, both on land and in water, due to the abundance of CO2.
For instance in the RCP3 by 2100 CO2 concentrations would stand at a largely stagnant 420ppm with CO2 sinks only absorbing 2.4Gt of CO2 p.a. This rate would eventually turn negative by 2134, when atmospheric CO2 has dropped to 405ppm. At this point, according to their model, CO2 would start moving back from the ocean into the atmosphere. Additional uptake by the biosphere? None!
Similar problem in RCP4.5. In 2100 the still-increasing CO2 concentration would stand at 538ppm, but CO2 sinks would have shrunk to only 11.8Gt p.a., far less than today.
“These are purely problems of inadequate scientific knowledge, or a failure to apply scientific knowledge in climate projections. Perhaps by learning about the mistakes of the past we can create a better future”
Rather it is total ignorance of basic logical consistency combined with the will to model like a primary school child. The only restriction they seem to adhere to is that the result must be alarmist.
Btw, does no one ever look at the data and analyze it? You can quite easily “back-engineer” the model assumptions just by looking at the data.
A few years ago QB Lu of Waterloo University wrote a paper on CFCs and the like and their apparent effect on global warming. He also goes on to show how the Montreal Protocol and its work to remove and eliminate these chemicals has benefited the climate, as their existence harmed it.
As is known, the absorption rate of CFCs and HFCs is 20,000 plus times that of CO2. Control and removal must have some significant effect as Lu proposes.
Published 20 Jan 2020, Polvani et al study of CFCs and warming gives credibility to the theory proposed by Q-B Lu. Abstract: https://www.nature.com/articles/s41558-019-0677-4
Here is the abstract of Dr Lu’s work. https://arxiv.org/abs/1210.6844
His work showed that up to 97% of the 20th century warming can be accounted for, not by CO2, but by variations in solar flux and by CFCs (chlorofluorocarbons).
Lu’s calculations explained many peculiarities about the climate record, including the apparent temperature increase after WWII and the much slower increase, known as the pause, that began a few years after the 1987 Montreal Protocol banning CFCs went into effect. Climate studies researchers ignored Lu’s work (and still do: he is not cited in the Polvani et al. paper).
Polvani et al. used a computer model to ask how much warming in the Arctic was due to CFCs, which they refer to as ozone-depleting substances or ODS. They conclude that half of the warming and sea ice loss between 1955 and 2005 was due to ODS. They also conclude that ODS were responsible for 0.27° of the 0.59° of global warming, or one-third of the total, over the same period.
These studies should lead to more interest in the actual effects of the Montreal Protocol. CO2 has a warming effect but nothing like chlorofluorocarbons.
That is all theory, none supported by actual data of any temperatures exceeding the bounds of the most recent ten thousand years. Greenland ice core data shows that most of the recent ten thousand years was warmer than now.
The Montreal Protocol has nothing in it that is supported by actual data.
The Ozone hole opened and closed forever. They promoted the alarmism to get the R12 off the market because they needed it banned because the patent had run out and they wanted a different product that they had the patent for and could therefore charge order of magnitude more with no one else being able to sell it. The actual Ozone hole closed some before R12 was off the market. That scam was not as bad as this CO2 scam, but it will register second.
This is mostly a very good paper. However, its statement of the FAR fossil fuel emissions is given as
5.4 +/- 5 GtC
Both common sense and context say this should be
5.4 +/- 0.5 GtC
A minor point – but when criticising others it is well to be accurate oneself.
Thanks for spotting that. Not sure what happened, in the Medium version it said 0.5.
A picture is worth a thousand words.
Reblogged this on Climate Collections.
CO2 atmospheric numbers projected greater than observed: a retention or production problem? If considered a retention issue, increasing the absorption by the oceans would be an obvious “fix” which, if incorrect, would mean ocean “acidification” has been exaggerated.
The global production of CO2 is far from easy to calculate. There are multiple sources which, if each is exaggerated by even a couple of percent, will have a big impact – in an “atmosphere” of fear, the probability of net neutrality in errors is very low (personal experience in risk/reward analyses in big companies).
O.K., folks, here goes nuthin’… i hope i can remember what all these links are:
First up we have the derivative plot of CO2 versus southern ocean SSTs. (i think most of you have seen this before) This matches the carbon dioxide growth rate with a temperature above an equilibrium state.
Next, we’ll take the integrals of both. As you can see, the integral of temperature matches the Keeling curve.
Then, we’ll go back to the original derivative plot, but this time we’ll extend the temperature data back to 1850.
And then, we’ll take the integral of the temperature data alone. (looks familiar, huh?)
Atmospheric concentrations were at 287ppm back in 1850, ice cores tell us. Add to that the 125+ ppm that you see in the (above) graph and we get about 412 ppm. Let’s see how that compares with reality.
Not bad for a back of the envelope calculation from a high school dropout in a leather jacket(!) Lastly we’ll take a look at how carbon dioxide growth compares with temperature going back 500 years. (Moberg reconstruction and Law Dome)
Look closely at this graph. For the last five hundred years, the pace of carbon dioxide growth matches the temperature (above an equilibrium state). At 500 years past this relationship abruptly stops.
And there you have it…
Definitive evidence that the carbon dioxide growth rate follows temperature for the last five hundred years. (Lord, i hope these links work… 😖)
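The derivative/integral relationship described in the comment above can be sketched in a few lines of Python. This is only a schematic of the argument (all numbers below are synthetic placeholders, not the actual SST, Moberg, or Law Dome data): if the CO2 growth rate is proportional to a temperature anomaly above an equilibrium state, then integrating that anomaly reproduces the concentration curve.

```python
import numpy as np

# Toy illustration (synthetic data, not real observations): assume the CO2
# growth rate is proportional to the temperature anomaly above an assumed
# equilibrium value T0.
years = np.arange(1850, 2021)
temp = 0.008 * (years - 1850) + 0.1 * np.sin(0.3 * years)  # made-up anomaly, degC

k, T0 = 2.0, -0.2          # ppm/yr per degC, equilibrium anomaly (both assumed)
growth = k * (temp - T0)   # annual CO2 growth rate, ppm/yr

# Integrating the growth rate recovers the concentration curve
co2 = 287.0 + np.cumsum(growth)   # 287 ppm starting level, as in the comment

# Sanity check: differencing the integral gives back the growth rate
print(np.allclose(np.diff(co2), growth[1:]))  # True
```

Whether the real growth rate actually behaves this way is of course the point under dispute; the sketch only shows why a derivative match implies an integral match.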
Comment stuck in moderation. i plan on writing a brief follow up commentary when e’er it shows up (probably in the morning or afternoon)…
O.K., links work… So we’ve got this temperature relationship that goes on for 500 years. So, what might be going on here? Assuming that ice cores give us a good snapshot of CO2 levels, what could be causing this relationship but only for the last 500 years?

The relationship of temperature to CO2 in deep ice cores shows that a rise in temps of 1°C gives us just a 16 ppm increase in atmospheric concentrations. But, this is predicated on the growth of sequestering trees. A glacial has very few trees, an interglacial has many. Without trees we get an entirely different relationship of CO2 to temperature. And over the course of the last half millennium, we have cut down a lot of trees. Trees no longer properly perform their sequestering function. Thus we get anomalously high levels of CO2 and that, then, causes warming of the sea surface. Which, in turn, causes anomalous outgassing of the oceans, further warming & further outgassing (etc., a positive feedback loop).

Once this feedback loop gets established, the compromised biosphere can no longer keep up and gets left behind in the carbon cycle dust. So, in essence, the oceans are freely emptying their CO2 into the atmosphere, unimpeded now by the formerly sequestering (but, now decoupled) biosphere. When all is said and done, we can expect carbon dioxide levels to be in the thousands of parts per million. Unless, of course, we plant trees. Lots and lots (and lots) of trees…
post script~ for anyone quibbling about C13 ratios, a warming world naturally produces lower ratios. (deforestation, of course, would produce even lower ones) i’ve always heard that plankton die-offs are the source of those lower ratios. whatever the natural sources are, they are no longer being sequestered properly by trees.
I appreciate your efforts here, but I think it is a mistake to “quibble” about the 13C/12C ratios. The fact that the incremental CO2 has a constant ratio (beyond ENSO variations) since 1750 is surely an important observation, for which I have yet to see any attempt at an explanation.
The fact that the incremental CO2 has a constant ratio…
i’m sorry, Jim, i’m afraid i don’t understand you on this point. i’d be interested to know what you’re saying here (especially so if it’s relevant to my argument).
Sorry, my mistakes (typo and lack of clarity). I meant “not to quibble” and I will respond with more detail later today.
My apologies for not being clear. Any discussion about the growth in atmospheric CO2 should, in my view, also consider the 13C/12C ratio of the additional CO2. We know that the net effect is a reduction in the atmospheric ratio (in δ13C terms from around -6.4 per mil in 1750 to -8.5 per mil currently), but no-one seems very keen to address the evidence that this reflects a consistent average 13C/12C of the incremental CO2 of around -13 per mil, both over the period of direct observations and over the period covered by the Law Dome ice core data. The latter case is shown in Figure 1 of Köhler et al (2006): “On the application and interpretation of Keeling plots in paleo climate research – deciphering δ13C of atmospheric CO2 measured in ice cores”, available at: http://www.biogeosciences.net/3/539/2006/bg-3-539-2006.pdf. The paper also provides a good summary of the mathematical basis for the Keeling plot and Figure 1 shows an intercept for the Law Dome data at -13.1 per mil (i.e. the average δ13C of the incremental CO2).
The following two Keeling plots are based on monthly data available as part of the Scripps CO2 Program for the South Pole (δ13C data since 1977) and Mauna Loa (δ13C data since 1980). Values are after removal (by Scripps) of the annual seasonal cycle, since we are looking for the longer term trend, and can be downloaded from: https://scrippsco2.ucsd.edu/data/atmospheric_co2/sampling_stations.html
As shown on the plots (hopefully this works as I have not tried this here before), the intercepts are -13.0 per mil and -13.4 per mil respectively.
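For readers unfamiliar with the Keeling-plot construction, the mixing algebra can be demonstrated in a few lines. This is a sketch with made-up numbers (the -13 per mil source value and -6.4 per mil background are taken from the discussion above; the sample spacing is invented): mixing a background reservoir with CO2 from a single source gives a straight line in δ13C versus 1/CO2, whose intercept is the δ13C of the source.

```python
import numpy as np

# Two-component mixing: background atmosphere plus CO2 from a single source.
delta_bg, c_bg = -6.4, 280.0      # background delta13C (per mil) and CO2 (ppm)
delta_src = -13.0                 # assumed delta13C of the incremental CO2

c_add = np.linspace(5.0, 130.0, 30)           # added CO2, ppm (illustrative)
c_tot = c_bg + c_add
delta = (delta_bg * c_bg + delta_src * c_add) / c_tot  # mass-balance mixing

# Keeling plot: regress delta13C against 1/CO2; the intercept recovers the
# delta13C of the added CO2.
slope, intercept = np.polyfit(1.0 / c_tot, delta, 1)
print(round(intercept, 1))  # -13.0
```

With real flask data the points scatter around the line, but the intercept plays the same role as in this idealised case.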
Why does this matter? Two reasons immediately come to mind. First, given the availability of 13C/12C data, it would seem to be inappropriate to speculate on the source(s) of the additional atmospheric CO2 without any consideration of its δ13C content, since the value and, in particular, continuity of the δ13C content of the incremental CO2 must impose significant restrictions on viable interpretations. It is certainly not the case that the downward trend in atmospheric δ13C is solely due to fossil fuels, given that this would lead to a much faster decline rate (since the additional CO2 would have a δ13C content of around -28 per mil). The general explanation is that the annual exchange of CO2 between atmosphere and ocean involves differential fractionation which “adds back” significant 13C. Second, even with such an adjustment and despite the apparent relatively simple behaviour of the incremental atmospheric δ13C content, it is noteworthy that the experts are still (as of 2017) having problems establishing a model that fully matches the observations. See: Keeling et al (2017) https://www.pnas.org/content/114/39/10361
Quote from the paper:
“Here, we update the longest direct time series for δ13C of CO2, starting in 1978, from the flask program at the Scripps Institution of Oceanography. Rather than resolving carbon sinks, we use data/model comparison to show that there must exist an additional process, previously neglected, that reduces the atmospheric 13C-Suess effect.”
I am not suggesting that this is (or is not) a valid conclusion, merely that it highlights the fact that the science is still far from being settled.
Jim, thank you… It’s a mouthful (you’ve given me a lot to chew on)
Thanks very much for the response. I suspect that many people here have a problem (as I did initially) getting to grips with a ratio that can be negative (not saying that you do, of course). Anyway, any questions, please ask. Just to be clear on my own position, I am focussed on data (facts) rather than hypotheses. I am completely on board with the strong correlation between ENSO (temperatures) and CO2 growth rate. I also find it a bit unhelpful to focus on how much of the incremental atmospheric CO2 is, or may be, “natural”. The important point is not so much where it is coming from (since anthropogenic and natural emissions are merged as soon as they are released), but why it is increasing. That is what we need to understand. I will show a Keeling plot tomorrow that includes the annual cycle, which is quite relevant to those here who are arguing about the cause of even the annual cycle. One thing on which any expert opinion would be appreciated: NOAA states that CO2 releases from the ocean will have a δ13C flux of -9.5 per mil and I cannot find any documented support for that value (or even for the idea that it can reasonably be approximated, ENSO influences aside, as a single number). I am suspicious that it is based on a circular argument.
Jim, sorry for my delay. Me being a humble layman, it can be daunting to read relatively technical jargon. So, it takes me a while to get warmed to the task. And with every reading of your comments, i understand a little bit better.
My argument is a simple one. The MWP had slightly lower C13 ratios than the LIA. (the above graph comes from Engelbeen’s 2010 piece on C13 ratios) Assuming that the MWP & Modern Warming are similar in intensity, then we can say there are two anthropogenic differences as far as C13 ratios go. One is deforestation, which will cause lower C13 ratios. The other is the mere fact that we are burning fossil fuels. Just simply burning fossil fuels, without even adding to the mass of CO2 in the atmosphere, will create lower C13 ratios. Add to that the possibility that modern warming exceeds that of the MWP and we could be seeing lower C13 ratios still. At this point i invoke Dr Curry’s uncertainty monster. Until we know exactly how much warmer temps are, just how much simple dilution from ff there is, and how much effect deforestation has, we can’t rule out the possibility of a rise caused by natural sources, but triggered by deforestation. If someone can untangle that ball of yarn and come to a different conclusion than uncertainty, then more power to ’em. (and if not, the very real possibility of my humble thesis remains)…
Your hypothesis regarding deforestation is interesting. My focus, as another humble layman, has been to try to clarify what ‘facts’ we can extract from the data with minimal assumptions, and what constraints that information might impose on any proposed hypothesis. It was your comment regarding 13C/12C ratios that prompted me to share some of my own analyses, especially as there are others (not you) who tell us that because the 13C/12C ratio is reducing that fact alone somehow proves that the incremental CO2 is entirely from fossil fuel burning. My conclusion is that all of the extra CO2 in the atmosphere since 1750 has had essentially the same 13C/12C ratio, which strongly suggests either a single common source (which cannot be solely due to fossil fuels because of the rate of decline) or a remarkably consistent net response from multiple sources over an extended period of time.
The plot you show is actually support for my position and I have discussed this in the past with Ferdinand Engelbeen. Check out the scales on the plot. Obtaining an alignment between the δ13C measurements (outer left scale) and those of the ice core/atmospheric CO2 measurements (right scale) requires that the CO2 values are expressed as reciprocals. This is simply acknowledging the linear relationship I showed above with the Keeling plots.
My focus, as another humble layman, has been to try to clarify what ‘facts’ we can extract from the data with minimal assumptions, and what constraints that information might impose on any proposed hypothesis.
As Dr Curry would say, Jim, Bazinga!
i work with the basic assumption that the rise must somehow be anthropogenic. In essence, it’s their assumption that carbon dioxide concentrations were low until modern times. So, i go ahead & agree with that assumption and, from there begin to look at the data in detail. If someone can come along, do the same, and come to the conclusion that the rise is caused by human emissions, then i’d like to see it. For starters, they’re going to actually have to look at the data. (which is something that i’ve never been impressed that ferdinand does very well) i just think that anyone is going to have a real hard time making it actually work with emissions. And offhand, i can’t think of a single argument for a rise due to emissions to date that can withstand scrutiny.
But, we don’t even need theory to see that the carbon dioxide growth rate is set, for whatever the reason, by temperature. It’s there in the data for 500 (count ’em) years. And here we all are, at this late date, still talking about RCPs. i essentially throw in my ideas after the fact (that temps drive carbon growth) to stretch people’s thinking as to what may or may not be happening here. If someone comes along with a better idea than mine, then so be it. (but, that someone has got to honor the data)…
My conclusion is that all of the extra CO2 in the atmosphere since 1750 has had essentially the same 13C/12C ratio, which strongly suggests either a single common source (which cannot be solely due to fossil fuels because of the rate of decline) or a remarkably consistent net response from multiple sources over an extended period of time.
(here’s a plot a la middleton of atmospheric concentrations vs cumulative emissions since 1750)
Fonzi, isn’t the important point that there is no evidence of CO2 change preceding a reversal of temperature, in the recent or the archaic record, demonstrating correlation without causation?
CO2 levels have been as low as 180 ppm and as high as 8,000 ppm without instigating a temperature reversal.
The current preoccupation with CO2 seems to be due to the recognition that there is no other forcing that we can affect, and the delusion that a change in our output at this time, at these levels, will have a measurable effect.
isn’t the important point that…
jimmww, i think the most important point is that the carbon dioxide growth rate is set by temperature and this has been so for the last 500 years. That means that cutting emissions has zero impact on carbon growth, therefore zero impact on temperature. You may be right that CO2 doesn’t affect temperature, but human emissions apparently don’t even affect carbon dioxide growth(!) i just think your argument is a tougher sell. Even among skeptics anything over 1000ppm seems to be, rightly or wrongly, a cause for concern. (i’m afraid it’s all about optics)…
Yes, Fonzi, optics and sales pitches.
So the natural experiment was performed in 1929-1931, when human CO2 production declined by 30%, atmospheric CO2 continued its languid rise, and global temperature continued to rise until 1941. Then of course in WWII and postwar reconstruction with massive human output, CO2 did not change its slope, and the world did not warm. Indeed, temperature fell slightly but enough to generate alarms about THE COMING ICE AGE – see Time and Newsweek and ScienceNews in the early ’70s.
It seems that no one is paying attention. We don’t get no respect.
So what is the optimum level of CO2 in the atmosphere? Under 250 ppm is too low and over 10,000 ppm is probably too high, but what is the Goldilocks concentration for life on Earth? Has anyone ever seriously studied that subject? Shouldn’t we have a good understanding to know how to regard changes in atmospheric CO2 concentration?
Before this question can be answered, shouldn’t you first define your vision for what would be the optimum distribution of all life forms on earth — the plants, the animals, the insects, the bacteria, etc., etc.?
For example, how many humans should there be on earth versus lions, tigers, bears, mosquitoes, wasps, honey bees, pine forests, plankton, great white sharks, corn fields, cheat grass, milkweed, and so on?
USN submarines don’t take countermeasures until CO2 level reaches 8,000 ppm. Our addition to the annual CO2 emission is less than 5%.
30% of the increase in agricultural output since 1950 has been attributed to the CO2 increase, which has a linear effect on plant growth while its effect on temperature declines logarithmically.
Here is a comparison between FAR and actual emissions, and two SRES scenarios.
Alberto Zaragoza Comendador, thank you for this essay.
Comendador, I appreciate your writing style in putting together in your post some interesting information about past attempts at scenarios. I suspect that a number of the readers of your post are wondering why you and McKitrick and originally Hausfather are dwelling on the state of the art of decades ago when supposedly that art has improved. That would be my first impression. On further thought, however, perhaps it is good to reflect upon the great difficulty even gatherings of experts have in making these predictions, or whatever the process is called. Modeling the scenarios (and not the climate) involves predictions about future human actions and innovations, and depends probably much less on political actions, where talk is cheap, actions dear, and approaches not at all dynamic.
I relate these exercises to that of the econometric attempts to model something as complicated as human behavior with simplifying assumptions and a less than dynamic real world approach. It is better when those using these processes admit to the shortcomings of their processes but the uncertainties, nonetheless, remain.
The science of climate does not ultimately have the limitations of an econometric approach, but yet here we are with a wide range of outputs from the acknowledged best modeling efforts. We talk about and use the simplifying concept of an ensemble mean of climate model outputs without being able to conceptualize what the distribution of model outputs represents from a statistical point of view. Using the capability of models to capture the observed climate might be a better approach if we could be assured that the modelers were not tuning or perhaps better said as selecting parameters to best emulate the observed climate.
The recent news on and spate of articles attempting to explain the CMIP6 models having a number of models with much higher climate sensitivities is certainly not something to expect from a settled science.
When I was involved in economic modelling of policy options, the output was regarded as indicating the relative changes with alternative policies ten years ahead, not as a forecast: i.e., the outcome with Policy A is likely to be X and that from Policy B Y – an indication of which was likely to provide a better (in GDP terms) result. We would never look more than ten years ahead, the uncertainties are too great; and with reasonable discount rates, those outyears would have little impact on the comparison. But “climate scientists” confidently project 100 years hence. Mmmm …
The Hundred Year Great Leap Forward. Oh Mao !
Like Philip Tetlock discovered, we’re not good at predicting.
Re complex interacting systems, yr whether ‘and yr human confirmation biass… Well !
Pingback: Analysis of a carbon forecast gone wrong: the case of the IPCC FAR – Weather Brat Weather around the world plus
a paper by Zeke Hausfather and three co-authors; I hope the co-authors don’t feel slighted – I will refer simply to “Hausfather” for short.
Co-author Gavin Schmidt might.
Pingback: Weekly Climate and Energy News Roundup #397 -
On further thought, the issue with the fixed F_2x values in Hausfather’s paper isn’t so serious. The only projections for which this is definitely a problem are the IPCC’s FAR and SAR; in both cases Hausfather digitizes a chart to get the IPCC’s forcing values, and then calculates the implied TCR of the IPCC’s projections. But the IPCC’s charts assume F_2x = 4W/m2, so it makes no sense to calculate TCR on the basis of F_2x = 3.7W/m2. The IPCC’s implied TCR will be 8.1% higher than what Hausfather reports.
For TAR, Hausfather also takes the forcings as stated by the IPCC, but this doesn’t introduce a bias because TAR used the same F_2x as Hausfather.
From my reading of the paper’s Supplementary Information, I *believe* the rest of the models don’t offer a chart or table with forcing values, so Hausfather calculated them; the TCR calculation thus won’t be biased because he’s using the same F_2x to calculate both forcings and TCR. However, I’m not certain this is the case with Hansen’s projections.
On further further thought, the issue definitely affects FAR and SAR, and I’m not sure if it affects Hansen’s models. But the miscalculation is more serious than I initially believed.
FAR and SAR state that the assumed forcing for a doubling of CO2 is 4W/m2, but that is an approximation. The actual value of FAR is stated on Table 2.2, page 52, as: 6.3 * ln (CO2 concentration end / CO2 concentration beginning)
For SAR, the equation is stated differently but the value is the same (page 320). This can be confirmed in TAR, which states on page 356:
“IPCC (1990) and the SAR used a radiative forcing of 4.37 W/m2 for a doubling of CO2 calculated with a simplified expression.”
Hausfather, by contrast, in his Equation 7 assumes a multiplier of 5.35 rather than 6.3. So F_2x is 18% higher in FAR and SAR than assumed by Hausfather, and the correct “implied TCR” value likewise should be 18% higher. In the case of FAR, the over-estimate of forcings is not the 55% stated by Hausfather, but 1.55 / 1.18 ≈ 1.31, i.e. about 31%.
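The arithmetic in the comments above can be checked directly. The 6.3 and 5.35 coefficients are as quoted from FAR Table 2.2 and Hausfather's Equation 7; everything else follows from them (this is a sketch of the commenter's calculation, not code from the paper):

```python
import math

# FAR/SAR CO2 forcing: dF = 6.3 * ln(C/C0), so F_2x = 6.3 * ln(2).
# Hausfather's Eq. 7 uses the later coefficient: F_2x = 5.35 * ln(2).
f2x_far = 6.3 * math.log(2)          # ~4.37 W/m2, as quoted in TAR p. 356
f2x_hausfather = 5.35 * math.log(2)  # ~3.71 W/m2

print(round(f2x_far, 2))                   # 4.37
print(round(f2x_far / f2x_hausfather, 3))  # 1.178, i.e. ~18% higher

# The rough 4 vs 3.7 W/m2 comparison mentioned earlier in the thread:
print(round(4.0 / 3.7, 3))                 # 1.081, i.e. ~8.1% higher

# Corrected over-forecast of forcings: the stated 55% becomes ~31-32%.
print(round(1.55 / 1.178, 2))              # 1.32
```

The exact corrected figure depends on whether one uses the rounded 1.18 or the unrounded 6.3/5.35 ratio, which is why it lands at "31 or 32%".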
Pingback: IPCC’s CO2 projections ‘turned out to be too high’ — – Climate- Science.press
The biggest problem is that publications in climate science are so inconsistent and can differ by two orders of magnitude. No one can take them seriously as a result.
There is a very interesting new paper about the human portion on the total CO2 emissions since 1750.
The paper states in its conclusion that the human contribution to the atmospheric increase since 1750 of 113 ppm (from 280 to 393 ppm in 2016) is not more than 17 ppm, or only 15%. I think this could have great consequences for predicting future temperatures on earth.
Will humanity ever reach 2xCO2?
Phil, that assumes that human emissions have anything to do with the rise…
(ironically, spencer used to be at the forefront of challenging the notion that the rise is due to emissions)
Excellent point, Fonzi!
Mind you, there’s also no reason to assume that we have nothing to do with the rise.
If humans are not responsible for the CO2 rise, then Gaia was wasting her time evolving a hairless hominid with a grotesquely bulbous head able to burn fossil fuels, thinking that they would save her world from snowball earth glaciation and CO2 starvation-extinction. In that case, nothing will.
(humans are still responsible if we assume the validity of ice cores; it’s that we cut down trees, not burn ff, that does it)
From 1750 to 1900 CO2 rose 20 ppm while cumulative emissions were just 5 ppm over that time period. Whatever we were doing land-usage-wise over that period, you could probably at least double or triple that over the next 120 years (’til present). Land usage could easily account for half the rise of CO2, even more if there’s a temperature component involved (there is). The notion that the rise is due to human emissions is built on an edifice of flimsy argumentation.
Phil, I suggest modifying that to “If humans are not responsible for the CO2 rise,” then something else is. For example, the rise in CO2 after 1835, well before human production took off in 1880.
George Carlin of course thought that Gaia created us because she was unable to make plastics.
Ah yes, plastics. That may well have been a slip, eh?
Waves in the stratosphere play a major role in winter weather, and the temperature in the stratosphere depends only on high-energy radiation.
A massive stratospheric hit in the west of the US.
During the solar minimum, circulation over the eastern Pacific is like that during La Niña, and over the western Pacific like that during El Niño.
The beginning of the SSW in the central stratosphere over the North Pole.
Very similar to the Hansen 1988 predictions, which exceeded the actual temperature trend by ~400%. In both cases, the major confusion seems to have been over carbon sinks.
Carbon sink models have major implications for the long term. If the IPCC’s long-tail theory of CO2 emissions is wrong (and there are a lot of problems with it, as Javier has pointed out) then by 2200 temps may have already begun their march downward to the next glaciation, eventually rendering huge swathes of currently heavily populated areas uninhabitable.
Hansen et al… Yes indeed, TD. the takeaway is that we aren’t very good at predicting, and shouldn’t risk resources that should be devoted to better ends. Plastics in the ocean, anyone? Nobody’s been actually right so far, but if they increase the error bars, they can’t be actually wrong.
Saying that “the airborne fraction of CO2 is 60%” may be technically incorrect, but it rolls off the keyboard more easily than “the increase in CO2 concentrations is equivalent to 60% of emissions”.
The difference is important. Consider a standard commercial swimming pool, which like the Earth’s atmosphere has inflows (jets) and outflows (drains): you dump 1 gal of blue dye in the pool, it swirls around, and as the flow of drains/jets gradually removes/replaces it, the concentration drops exponentially and it stays a little blue for a very long time. But no matter how much dye you dump in for how long, the water level never rises, because the additions are automatically balanced by more draining, particularly at the top where outflows will quickly absorb any water-level rise. The overall level of the pool is much more a function of the (much larger) balanced flows in the jets and drains than of any additions.
Now while the Earth’s atmosphere obviously doesn’t put a pool-like hard ceiling on CO2, the relationship between the level of CO2 and emissions may be much less straightforward than the IPCC assumes: we know the Earth’s CO2 “pool” is both a little higher in level and a little more blue than in 1940, but not to what extent one is a function of the other. Factors like CO2 greening may act as a “top drain,” i.e. higher CO2 levels may lead to higher outflows, suggesting levels could peak and fall off long before the IPCC anticipates.
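The swimming-pool analogy can be written down as a toy one-box model. The volume and flow rate below are made-up numbers, and this is an illustration of the dilution argument only, not a carbon-cycle model: large balanced in/out flows turn the pool over, a one-off slug of dye decays exponentially, and the water level (total volume) never changes.

```python
import numpy as np

V = 500_000.0   # pool volume, litres (made-up)
q = 10_000.0    # balanced inflow = outflow, litres/day (made-up)
dye = 1.0       # slug of dye added at t=0, arbitrary mass units

days = np.arange(0, 301)
remaining = np.empty(len(days))
m = dye
for i in range(len(days)):
    remaining[i] = m
    m -= (q / V) * m      # well-mixed outflow removes dye in proportion

# The e-folding time is V/q = 50 days; after one e-fold roughly 36-37%
# of the dye remains, while the pool's volume has never changed.
print(round(remaining[50] / dye, 2))  # 0.36
```

The point of the analogy is that the dye (an addition) and the water level (the stock set by the big balanced flows) obey different dynamics; how far that carries over to atmospheric CO2 is the commenter's speculation, not established physics.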
Carbon dioxide is a gas. It is less than 1% of the atmosphere. It has no more effect on radiant heat entering and leaving the confines of the earth’s environment than the other gasses.
The surface area of the surface of the earth covered by water relative to that covered by land controls both the amount of green foliage covering the surface of the earth and the surface area of the earth capable of reflecting the radiant heat. As the water rises the area covered by green foliage goes down, thus the CO2 level goes up. As the water rises the amount of radiant heat reflected to the black sky goes up, thus the amount retained by the earth goes down.
Humans can make all the CO2 they want. They can pollute and exterminate their race. The new Ice Age will continue on.
The distribution of ozone over the polar circle in winter during periods of low magnetic activity of the Sun is highly asymmetrical and leads to SSW.
Asymmetry of ozone distribution is visible throughout the stratosphere.
Ireneusz, these three graphics are very important. I am sure that meticulously studying and comparing the various graphics will produce very interesting results.
I took notice of something in the 05-hPa Zonal Mean Temperature for 2019 & 2020, 90N to 65N.
I think of the summer-winter amplitude this graph shows. In my opinion the Zonal Mean Temperature amplitude can be used as an index of the Earth’s surface energy accumulating ability: when the amplitude is bigger there is less Earth’s surface solar radiation energy accumulating ability, and when the amplitude is smaller there is more.
In 12,000 years from now, when the Earth’s axis will be tilted towards the star Vega, Earth’s Perihelion will occur at the time of the Northern Hemisphere summer, and Aphelion will occur in the Northern Hemisphere winter.
Consequently the summers in the Northern Hemisphere will be warmer and the winters will be colder.
And we know from the Reversed Milankovitch Cycle that 12,000 years from now it will be the middle of the next Glacial Period.
Glacial Periods are periods with a much lower Earth’s surface solar radiation accumulating ability.
Consequently the differences in the above-mentioned amplitude can be used as an index of Earth’s surface solar energy accumulation ability.
This Earth’s surface solar energy accumulation ability is ruled by orbital forcing.
When at Perihelion the Earth’s axis is tilted towards the star Vega and summer occurs in the Northern Hemisphere, more radiation falls on continental land masses, much more of it is instantly emitted as IR radiation back to space, and as a result, on the yearly average, less energy is left to be absorbed by the planet.
So, when we observe any fluctuations in the above amplitude’s magnitude, we may draw conclusions about Earth’s year-average surface solar radiation accumulating ability.
Thank you for your patience.
I am saying that the radiant heat lost to the black sky is relatively constant because Mother Nature keeps the average surface temperature relatively constant. The radiant heat striking the earth is not. The sun is an active star, so the average temperature of its surface gradually goes up and down by hundreds of degrees over the millennia. Presently less heat from the sun is striking the earth than the earth is radiating to the black sky. She is taking heat from the oceans to keep the temperature constant.
As the oceans recede, which they will when the ice shelf stops breaking off, and the sun’s surface cooling is enough, the Global Ice making will begin.
Global Ice MELTING will begin.
A weather battle zone will continue to take place as winter fights back in the northeastern United States with areas of flooding rain, dangerous ice and a blanket of heavy snow through Friday.
The multifaceted and two-part storm will continue to affect the region just days after springlike warmth surged in. It is the same storm system responsible for heavy snow over the southern Plains on Wednesday and ongoing heavy rain and severe weather in the South during Thursday.
Falling temperatures in the Midwest.
In a few days, the cold air will reach southern California.
Has there been a climate etc tipping point? Where daft commentary reiterated endlessly reduces the site to babble. Where not even the most basic ideas of a carbon cycle survive without distortion. Let alone any more sophisticated analysis. It is truly astonishing what skeptics imagine science overlooks.
“This diagram of the fast carbon cycle shows the movement of carbon between land, atmosphere, and oceans. Yellow numbers are natural fluxes, and red are human contributions in gigatons of carbon per year. White numbers indicate stored carbon. (Diagram adapted from U.S. DOE, Biological and Environmental Research Information System.)”
See the distribution of CO2 on the surface depending on the growing season. Visible forest fires in equatorial Africa.
It is worth reminding ourselves where the carbon resides and in what quantities it is represented in its various pools.
Consequently I thought this was interesting, as it gives the amount of CO2 sequestered in soil, and the huge amount more the soil could absorb, in easy-to-read text rather than graphical form.
“Soils constitute the third largest C pool (2,300 Gt or billion tons), after oceanic (38,000 Gt) and geologic (5,000 Gt) pools. The soil C pool is directly linked with the biotic (600 Gt) and atmospheric (770 Gt) pools. Change in soil C pool by 1 Gt is equivalent to change in atmospheric concentration of CO2 by 0.47 ppm. Therefore, increase in soil C pool by 1 Gt will reduce the rate of atmospheric enrichment of CO2 by 0.47 ppm.
c. 476 Gt of carbon has been emitted from farmland soils due to inappropriate farming and grazing practices, compared with 270 Gt emitted from over 150 years of burning of fossil fuels. 780 Gt is emitted and sequestered each year by the planet’s ecosystem; that’s 780 Gt in/out in a continuous carbon cycle. Over the past 150 years man-made CO2 is 270 Gt while soil is 476 Gt – which is greater? ”
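The 0.47 ppm-per-Gt figure in the quote can be sanity-checked from first principles. A minimal sketch, assuming a total atmospheric mass of about 5.15×10^18 kg and a mean molar mass of dry air of 28.97 g/mol (both standard round values, not taken from the quote):

```python
# Sanity check: how many ppm of atmospheric CO2 corresponds to 1 Gt of carbon?
M_AIR = 28.97e-3    # kg/mol, mean molar mass of dry air
M_C = 12.01e-3      # kg/mol, molar mass of carbon
ATM_MASS = 5.15e18  # kg, approximate total mass of the atmosphere

mol_air = ATM_MASS / M_AIR   # total moles of air in the atmosphere
mol_c_per_gt = 1e12 / M_C    # moles of carbon (hence of CO2) in 1 Gt of carbon

ppm_per_gt_c = mol_c_per_gt / mol_air * 1e6
print(f"1 Gt C ~ {ppm_per_gt_c:.2f} ppm CO2")  # ~0.47 ppm, matching the quote
```

The result, about 0.47 ppm per Gt C, agrees with the figure quoted above (equivalently, about 2.13 Gt C per ppm).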
Soil has a lot of the answers if the concern is to sequester carbon. Robert has written before on the advantages of doing more with our soil.
My farm has always been around two-thirds forested and pasture acreage, but the arable soils, the cornfields, are river bottom and exceedingly rich. They were planted with thousands of trees around 2000, as was much of the pasture acreage.
The downside: no food has been grown there since around 2000.
It is truly astonishing what skeptics imagine science overlooks.
You mean like that imaginary elephant in the room? (astonishing, truly)…
A generic comment, at the risk of stating the obvious. Our earth is not the one of 1500. We have 15 times the population. Millions of square miles of forest lands are gone. Perhaps 80% of wetlands have been destroyed. The character of watersheds across the globe has been altered. Land uses have changed. Coastlines have been hardened. The bathymetry of the oceans, and possibly oceanic circulation, are different as a result of man-made sedimentation and siltation.
What those ecosystems once did and how they functioned are not the same, irrespective of the magnitude of the effect on climate. It’s a different earth from the one the explorers of 1500 faced.
Yes indeed; perhaps there is more than one elephant. One of those elephants is the fact that Keeling et al (2017) STILL cannot match the atmospheric 13C/12C observations with their complex model.
My above comment was in response to afonzarelli, not cerescokid.
1500? I don’t understand where this has come from.
Having said that, as it happens I have read several books recently about the 14th and 15th centuries. Basically the land was very poorly used, with new pieces of land constantly being ploughed, forests chopped down, and marshes drained; then, as the soil fertility became depleted, they would move on and do it all again. Yields were very poor, with one grain sown yielding three in good times and much less in poor times.
The population grew exponentially during the warm medieval period but came to a grinding halt, then reversed, as the vagaries of the 1300s set in, and plunged during the great droughts and great floods of the time.
So, whilst not disagreeing with anything you write, we can certainly do better with soil fertility and carbon usage than we once did.
You mean like the 2008 housing bubble that wasn’t a worry? Or that respiration and photosynthesis are almost a zero sum game in oceans and on land? Apart from deep ocean biological and gravity pumps or the much trumpeted greening? Amazin’ – truly.
I wasn’t taking exception to anything you or Robert said and agree with both of you. In fact, in a general sense, I was reinforcing your comments. My point was only that lots of things have changed from long ago as a result of humans that could be affecting climate in addition to the CO2 issue. I just picked out 1500 since it was a nice round number and I noticed that the population was about 500 million in 1500.
Sorry for being unclear in my comment.
Reblogged this on Climate- Science.press.
I’m an engineer, and as nice as your analysis is, I think you are missing some significant points.
1. The cause of the airborne fraction is significant. The way I read your article, you are assuming the airborne fraction is permanently added to the atmosphere. If the reduction in the airborne fraction is due to biosphere expansion, as we expect, then the added CO2 continues to decline year on year as the increased CO2 stimulates vegetation, even if CO2 inputs were held constant. It is better to treat the airborne fraction as a half-life.
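The distinction the commenter draws, a fixed airborne fraction versus a sink that scales with the CO2 excess, can be illustrated with a toy calculation. A minimal sketch under constant emissions; the emission rate, sink timescale, and starting concentration here are illustrative assumptions, not values from FAR or from the comment:

```python
# Two toy models of atmospheric CO2 under constant emissions (illustrative numbers only).
EMISSIONS_PPM = 0.5  # assumed constant emissions, ppm-equivalent per year
BASELINE = 280.0     # assumed pre-industrial concentration, ppm
AF = 0.6             # fixed airborne fraction (a FAR-style assumption)
TAU = 50.0           # assumed e-folding time of the sink, years

conc_fixed = BASELINE  # model (a): a constant share of emissions stays airborne forever
conc_sink = BASELINE   # model (b): sink removes CO2 in proportion to the excess over baseline
for year in range(100):
    conc_fixed += AF * EMISSIONS_PPM
    conc_sink += EMISSIONS_PPM - (conc_sink - BASELINE) / TAU

print(round(conc_fixed, 1))  # grows linearly without limit: 280 + 0.6*0.5*100 = 310.0
print(round(conc_sink, 1))   # levels off below the 280 + 0.5*50 = 305 ppm equilibrium
```

In model (b) the implied airborne fraction falls over time, and concentrations stop rising once the sink balances the (constant) emissions, which is the behaviour the comment is gesturing at.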
2. The CO2 in stimulating the biosphere has an energy term – photosynthesis is given as
6CO2 + 6H2O + sunlight → C6H12O6 + 6O2
Increased vegetation also causes increased transpiration, removing heat energy which must also be accounted for.
The sunlight and transpiration energy terms have to be quantified to determine the correct energy balance. Input energy to the climate is also reduced, and energy removal enhanced, as the CO2 inputs are reduced by 50%. This is a very significant fraction.
Finally, in a lot of your work you make the ubiquitous climate-science mistake of assuming the logarithmic relationship between CO2 and forcing is correct. It is IFF (if and only if) CO2 back radiation can continue to rise in the same way as it does now, i.e. back radiation is only dependent on CO2 concentration, as in the lab experiments where suitable IR energy is unlimited. The problem here is that CO2 doesn’t create energy; it can only absorb and re-emit outgoing IR energy at certain wavelengths that are already 85% absorbed, and there is only so much of that energy being emitted by the earth. You must therefore take into account the limits on emitted energy, otherwise a violation of the law of conservation of energy occurs as the energy reflected by CO2 exceeds the input energy provided by the earth in the absorption band of CO2. It’s not possible to double forcing because there is insufficient energy input to fuel it. I don’t want you to become a denier of energy conservation.
(post was written by Alberto Zaragoza Comendador)…
I’ve published an article on the F_2x issue:
-The 55% ‘overestimate’ in forcings that Hausfather finds for the First Assessment Report is only true if you measure forcings in raw W/m2. When you measure forcings in terms of the forcing level equivalent to a doubling of atmospheric CO2 (that is to say F_2x), the over-estimate is 31%.
-When I compare FAR’s forcings with those of Lewis & Curry over 1990-2016, the result is very similar: FAR over-estimated by around 30%, not by 50-60%.
-Out of the 30% or so over-estimate, more than half is due to FAR’s excessive CO2 concentrations. All of this mistake (and probably more) is because of FAR’s overly high airborne fraction. This is purely a scientific error, as mentioned in this article, but now its impact is quantified better.
I haven’t been able to calculate the over-estimate due to excessive methane concentrations in FAR, but there’s no question that the combined methane + CO2 over-estimate makes up the bulk of the total forcing over-estimate in FAR (perhaps all of it).
The combined over-estimate due to methane, CO2 and Montreal Protocol gases is greater than 100%, because FAR also under-estimated (in fact omitted) the positive forcing from increasing tropospheric ozone and declining aerosols.
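The unit-of-measure point in the first bullet can be roughly reproduced with round numbers. FAR’s forcing formula (6.3·ln(C/C0)) implies F_2x ≈ 4.37 W/m², versus ≈3.71 W/m² for the modern 5.35·ln(C/C0) expression; dividing each forcing series by its own F_2x shrinks the raw 55% over-estimate. A sketch, where the 55% figure is taken from the comment above and the two F_2x values are my assumption about what is being compared:

```python
import math

F2X_FAR = 6.3 * math.log(2)      # FAR's implied forcing per CO2 doubling, ~4.37 W/m^2
F2X_MODERN = 5.35 * math.log(2)  # modern simplified expression, ~3.71 W/m^2

raw_ratio = 1.55  # FAR forcing / observed forcing in raw W/m^2 (the 55% over-estimate)

# Measured in units of F_2x, each forcing series is divided by its own doubling value:
f2x_ratio = raw_ratio * F2X_MODERN / F2X_FAR
print(round((f2x_ratio - 1) * 100, 1))  # ~31.6%, close to the ~31% quoted above
```

The residual over-estimate (~31%) is then the part attributable to concentrations rather than to the choice of forcing formula, which is the comment’s point.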
I read your article, but I don’t think the issue comes down to the logarithm of the concentration. As I understand it, the effect of CO2 is logarithmic in the presence of unlimited energy, but real-world energy is not unlimited. If we consider that doubling CO2 halves the transmissivity of the medium, then the next three doublings take opacity from 85% now to 92.5%, 96.25%, and 98.2%. That is, the energy source is saturating. It is impossible for me to believe that the same warming occurs from a 7.5% increase in energy absorbed (opacity increase of 7.5%) as from a 1.8% increase in absorbed energy (opacity increase of 1.8%) in doubling 3.
At some point between 0.04% and 1 atm pCO2 you are going to reach a point where F_CO2 exceeds the earth’s emission using this model. For example, at 3.8 W/m² per doubling, 13 doublings give 49.4 W/m², far more energy than the small fraction emitted in the absorption band of CO2. This is literally impossible.
Also, as I point out, the effect on shortwave absorbed energy of a 15% expansion of photosynthesis across teratonnes of plant biomass is not inconsiderable. I estimate total plant biomass energy consumption at around 6 W per square metre, meaning the 15% expansion represents a warming of -0.9 W/m² (a cooling of 0.9 W/m²).
Finally, one cannot continue to ascribe effects to global warming (SLR, increased rainfall cycling, melting ice, bigger storms) without deducting that energy from the atmospheric budget. I don’t see where that is done. 1.44×10^33 kg x 9.8 x 0.0015 = 2.1E23 joules, extracted from climate change each year by SLR.
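The saturation arithmetic in the comment above can be checked directly. The 85% starting opacity, the halving-of-transmissivity-per-doubling rule, the 3.8 W/m² per doubling, and the 6 W/m² photosynthesis flux are all the commenter’s assumptions, not established values; this sketch only verifies the arithmetic that follows from them:

```python
# Opacity sequence if each CO2 doubling halves transmissivity, starting from 85% opaque:
transmissivity = 0.15
opacities = []
for _ in range(3):
    transmissivity /= 2
    opacities.append(round((1 - transmissivity) * 100, 3))
print(opacities)  # [92.5, 96.25, 98.125], matching the comment's 92.5 / 96.25 / ~98.2%

# 13 doublings at an assumed 3.8 W/m^2 each:
print(round(13 * 3.8, 1))  # 49.4 W/m^2, as stated

# 15% expansion of an assumed 6 W/m^2 photosynthesis/transpiration energy flux:
print(round(6 * 0.15, 2))  # 0.9 W/m^2 of extra energy removal
```

So the figures quoted are internally consistent with the commenter’s stated assumptions, whatever one thinks of the assumptions themselves.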
Again, a large increase in galactic radiation; solar activity is at a minimum.
Firstly, I just discovered this page today. I am excited to read your content.
Secondly, I am grateful to you for having done this. More people need to hear the science against the consensus. I personally had to do a lot of digging online through a search engine (out of google’s claws) in order to find the latest full consensus report. If this were readily available to the public, and they read it themselves, this social climate wouldn’t exist right now.
Thank you for refusing to silence science.
In case anyone stumbles across this comment section now, I’ve published a complete accounting of the differences in forcing between FAR’s Business-as-usual scenario and Lewis & Curry 2018.