How much warming can we expect in the 21st century?

by Hakon Karlsen

A comprehensive explainer of climate sensitivity to CO2

Short summary

According to the Intergovernmental Panel on Climate Change (IPCC), the atmosphere’s climate sensitivity to CO2 is likely between 2.5 and 4.0°C. Simply put, this means that (in the very long term) Earth’s temperature will rise between 2.5 and 4.0°C when the amount of CO2 in the atmosphere doubles.

A 2020 study (Sherwood20) greatly influenced how the IPCC calculated the climate sensitivity. Sherwood20 has been “extremely influential, including in informing the assessment of equilibrium climate sensitivity (ECS) in the 2021 IPCC Sixth Assessment Scientific Report (AR6); it was cited over twenty times in the relevant AR6 chapter”, according to Nic Lewis. A Comment in Nature confirmed this view.1)

Nic Lewis took a closer look at this study, and in September 2022, he published his own study (Lewis22) that criticizes Sherwood20. By correcting errors and using more recent data, including from AR6, Lewis22 found that the climate sensitivity may be about 30% lower than what Sherwood20 had found.

If we know what the climate sensitivity is, and if we also know approximately the amount of greenhouse gases that will be emitted going forward, then the amount of future warming that’s caused by greenhouse gases can also be estimated.

In terms of future emissions, a 2022 study (Pielke22) found that something called RCP3.4 is the most plausible emissions scenario. Traditionally, another scenario (RCP8.5) has been used as a business-as-usual scenario, but this is now widely regarded as an extremely unlikely scenario, with unrealistically high emissions.

Assuming that the climate sensitivity from Lewis22 is correct and that RCP3.4 is the most appropriate emissions scenario, then we find that global temperatures will rise by less than 1°C from 2023 to 2100 (not accounting for natural variability).

How much the Earth’s surface air temperature will rise this century depends, among other things, on how sensitive the atmosphere is to greenhouse gases such as CO2, the amount of greenhouse gases that are emitted, and natural variations. It’s hard to predict natural variations, so the focus here will be on climate sensitivity and greenhouse gas emissions (in particular CO2).

Climate sensitivity

Climate sensitivity is the amount of warming that can be expected in the Earth’s surface air temperature if the amount of CO2 in the atmosphere doubles. So if the climate sensitivity is 3°C, and the amount of CO2 in the atmosphere quickly doubles and stays at that level, then the Earth’s surface air temperature will – in the long term – rise by 3°C.2) In the long term, in this case, is more than 1000 years, but most of the temperature increase happens relatively fast.
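The definition above can be turned into a small calculation. This is a sketch, assuming the standard logarithmic relationship between CO2 concentration and warming (discussed later in the article) and an illustrative sensitivity of 3°C:

```python
import math

def warming_from_co2(c_new_ppm, c_old_ppm, sensitivity_per_doubling=3.0):
    """Long-term warming implied by a change in CO2 concentration,
    assuming warming grows with the logarithm of concentration."""
    doublings = math.log2(c_new_ppm / c_old_ppm)
    return sensitivity_per_doubling * doublings

# One full doubling gives exactly the climate sensitivity:
print(warming_from_co2(568, 284))  # -> 3.0
```

Because the relationship is logarithmic, each successive doubling (284 → 568 → 1136 ppm) adds the same amount of long-term warming.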

The exact value for the climate sensitivity isn’t known, and the uncertainty range has traditionally been very large. In 1979, the so-called Charney report found the climate sensitivity to be between 1.5 and 4.5°C. 34 years later, in 2013, the IPCC reached the exact same conclusion – that it’s likely (66% probability) that the climate sensitivity is between 1.5 and 4.5°C. However, the uncertainty in the Charney report may have been underestimated. So even though the official climate sensitivity estimate didn’t change, it wouldn’t be correct to say that no progress was made during those 34 years.

In climate science, there are several different types of climate sensitivity. I won’t go into detail about the various types just yet, but I’ll have something to say about some of them later in the article – when it becomes relevant. The type of climate sensitivity referred to above – in the Charney report and by the IPCC – is called equilibrium climate sensitivity (ECS).

Why so much uncertainty? (Feedback effects)

There’s broad agreement that without so-called feedback effects, the equilibrium climate sensitivity (ECS) would be close to 1.2°C 3), which is quite low and not particularly dangerous. The reason for the great uncertainty comes from how feedback effects affect the temperature.
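As a rough check, the no-feedback figure of about 1.2°C follows from dividing the forcing of a CO2 doubling by the strength of the Planck response; both input values here are approximate and are discussed later in the article:

```python
# Approximate inputs (both discussed later in the article):
f_2xco2 = 3.9   # W/m2, forcing from a doubling of CO2 (~3.7-4.0)
planck = -3.25  # W/m2 per degC, Planck response (~-3.2 to -3.3)

# With no other feedbacks, warming settles where the Planck response
# cancels the forcing: S_no_feedback = -F_2xCO2 / lambda_Planck
s_no_feedback = -f_2xco2 / planck
print(round(s_no_feedback, 2))  # -> 1.2
```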

A feedback effect can be either positive or negative. A positive feedback effect amplifies warming, contributing to a higher climate sensitivity. A negative feedback dampens warming and contributes to a lower climate sensitivity.

The strengths of feedback effects can vary based on the atmosphere’s temperature and composition, and how much of the Earth is covered by ice and vegetation, among other things. Earth’s climate sensitivity is thus not a constant. And for this reason, the equilibrium climate sensitivity, ECS, has been defined as the long-term increase in temperature as a result of a doubling of CO2 from pre-industrial levels (which was about 284 parts per million (ppm)).

Atmospheric CO2 concentration currently stands at approximately 420 ppm, which means there’s been a near 50% increase since the second half of the 19th century.4) Since the concentration of CO2 hasn’t yet doubled (and also since the long term is a long way away), the temperature has risen less than the magnitude of the equilibrium climate sensitivity. To be more precise, the temperature increase has been approximately 1.2°C over the past 150 years.
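The “near 50% increase” corresponds to a bit more than half a doubling, which a couple of lines of arithmetic confirm:

```python
import math

co2_preindustrial = 284  # ppm
co2_now = 420            # ppm

increase = co2_now / co2_preindustrial - 1
doublings = math.log2(co2_now / co2_preindustrial)
print(f"{increase:.0%} increase = {doublings:.2f} of a doubling")
# -> 48% increase = 0.56 of a doubling
```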

Types of feedback mechanisms

There are several different feedback mechanisms. Here are some of the most important ones:

  • Water vapor. Increased amounts of greenhouse gases in the atmosphere cause higher temperatures. A higher temperature then allows the atmosphere to hold more water vapor, and since water vapor is a strong greenhouse gas, the increased amount of water vapor in the atmosphere causes the temperature to rise even more.5) The feedback effect from water vapor is therefore said to be positive.
  • Lapse rate is how the temperature changes with altitude. The higher up you go in the lower atmosphere (troposphere), the colder it gets – on average about 6.5°C colder per kilometer up. So the lapse rate is said to be 6.5°C per kilometer in the lower atmosphere.
    The feedback from lapse rate is related to the feedback from water vapor, and the two are often considered together. More water vapor causes the temperature to rise more at higher altitudes than near the Earth’s surface. This is because the air is generally drier higher up, so at those altitudes the increased amount of water vapor has a larger effect on the temperature. The increased temperature at higher altitudes then contributes to more radiation to space, which causes the Earth to cool more. This means that the feedback effect from lapse rate is negative. However, the combined effect of water vapor and lapse rate is positive.


How temperature changes with altitude in the lower atmosphere. Image found on ScienceDirect (from the book Environmental Management).

  • Clouds. Without clouds, the temperature on Earth would be significantly higher than today, but not all clouds have a net cooling effect. Different types of clouds have a different effect on the temperature. On average, high clouds have a warming effect, while low clouds tend to have a cooling effect. When assessing whether total cloud feedback is positive or negative, one must determine whether clouds in a warmer atmosphere on average will have a greater or lesser warming effect than they do now. There is some uncertainty about this, but according to the IPCC, it’s very likely (over 90%) that the feedback effect from clouds is positive, and that therefore changes in cloud cover as a result of increased temperature will amplify the temperature increase.
  • Surface Albedo Changes. Earth’s surface albedo says how much solar radiation the Earth reflects directly back to space. Currently, it’s around 0.3, which means that the Earth reflects 30% of the incoming solar radiation. The part of the solar radiation that’s reflected does not contribute to warming.
    The Earth’s albedo can change, for example, when a larger or smaller part of the surface is covered by ice and snow. A higher temperature generally leads to less ice cover, which in turn leads to higher temperatures still, since less radiation is reflected (the albedo decreases). The albedo change resulting from changes in ice cover is a positive feedback effect.
    (Changes in albedo due to changes in cloud cover are included in the cloud feedback.)
  • Planck Feedback. A warm object radiates more than a cold object. Or in the case of the Earth: A warm planet radiates more to space than a cold planet. As the Earth warms, it radiates more energy to space, which cools the planet and reduces the rate of warming. The Planck feedback is a strongly negative feedback.6)
    Actually, the Planck feedback is already included in the calculation of how much the temperature would rise in the absence of (other) feedback effects. In this sense, the Planck feedback is different than the other feedbacks, and it may be best not to think of it as an actual feedback effect, but rather as a fundamental property of physical objects. The Planck feedback is sometimes referred to as Planck response or no-feedback response.

Different ways to calculate climate sensitivity

There are several ways to calculate climate sensitivity. We can base it on the historical record of the past 150 years, where we know approximately how temperature, greenhouse gases, aerosols, etc. have changed (historical evidence). Or we can estimate the strengths of the various known feedback mechanisms and sum them (process evidence). Or it can be calculated based on how much the average temperature has changed since the last ice age glaciation or other warm or cold periods in the Earth’s history (paleo evidence). A fourth possibility is to use climate models – large computer programs that attempt to simulate Earth’s past and future climate under different assumptions.


In 2020, a large study by 25 authors was published, and it combined the first three of the above-mentioned methods. So they did not calculate climate sensitivity from climate models directly, although the study oftentimes relies on climate models to substantiate some of their values and assumptions.

The study’s title is An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence. Steven Sherwood is lead author, and so the study is often referred to as Sherwood et al 2020. To simplify further, I’ll just call it Sherwood20.

Sherwood20 concluded that the climate sensitivity is likely (66% probability) between 2.6 and 3.9°C, with 3.1 degrees as the median value. (It’s equally likely that climate sensitivity is higher (50%) or lower (also 50%) than the median.)

The latest IPCC scientific report (AR6) put great emphasis on Sherwood20, and the IPCC, in turn, concluded that climate sensitivity is likely between 2.5 and 4.0°C, a significant narrowing of their previous uncertainty range.
(Note, however, that Sherwood20 and the IPCC focused on different types of climate sensitivities, so their respective values aren’t directly comparable.7))

Sherwood20 thoroughly examines all factors that they believe affect climate sensitivity and discusses sources of uncertainty.

Process evidence: Climate sensitivity calculated by adding up feedback effects

The feedback effects that Sherwood20 focused on were primarily the five that I listed earlier. Other feedbacks were estimated as having no net effect. To calculate the climate sensitivity based on feedback effects, the first step is to add up the strengths of each individual feedback effect, and then there’s a simple formula to convert from total feedback strength to climate sensitivity.
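The “simple formula” mentioned above divides the forcing from a CO2 doubling by the (negative of the) total feedback strength. A minimal sketch, with illustrative values that happen to be close to Sherwood20’s process-evidence medians:

```python
# S = -F_2xCO2 / lambda: forcing from a CO2 doubling divided by the
# (negative) sum of all feedback strengths.
f_2xco2 = 4.0  # W/m2, forcing from a doubling of CO2
lam = -1.3     # W/m2 per degC, sum of feedbacks (negative: stable climate)

s = -f_2xco2 / lam
print(round(s, 2))  # -> 3.08
```

Note how sensitive S is to λ: a small shift of λ toward zero (a more positive total feedback) raises S considerably, which is why the cloud-feedback uncertainty matters so much.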

The cloud feedback has the largest uncertainty of the various feedback effects. This is true even though the uncertainty has been reduced in recent years.8)

Historical evidence: Climate sensitivity calculated from temperature and other data over the past 150 years

Within some margin of error, we know how the Earth’s surface air temperature has varied over the past 150 years. We also know roughly how the amount of greenhouse gases in the atmosphere has increased – at least since 1958, when the Mauna Loa observatory started measuring atmospheric CO2. But in order to calculate the climate sensitivity to CO2, we also need to know the effect that other drivers of climate change, including aerosols, have had on the temperature and ideally also how the temperature would have changed without human influence. In addition, there’s something called the pattern effect, which, along with aerosols, is what contributes most to the uncertainty in the climate sensitivity when it’s calculated from historical evidence.

  • Aerosols: Translated from Norwegian Wikipedia, aerosols are “small particles of liquid or solid in the atmosphere, but not clouds or raindrops. These can have a natural origin or be human-made. Aerosols can affect the climate in various complex ways by affecting Earth’s radiation balance and cloud formation. Studies suggest that these have been released since the start of the Industrial Revolution and have had a cooling effect.”
    The uncertainty in how aerosols affect the temperature is surprisingly large, but they likely have a net cooling effect. The main reason for the large uncertainty is a lack of knowledge about how aerosols interact with clouds.9) Along with greenhouse gases, certain aerosols are released during the combustion of fossil fuels, but with newer technologies, the release of aerosols from combustion is being reduced.
    If aerosols have a strong cooling effect, it means they’ve counteracted a significant part of the warming from greenhouse gases. If so, the climate sensitivity to CO2 must be relatively high. If the cooling effect from aerosols is smaller, it implies a lower climate sensitivity.
  • The pattern effect: Different geographical regions have experienced different amounts of warming since the 1800s.10) Following some previous work, Sherwood20 assumes that areas that have experienced little warming will eventually “catch up” with areas that have experienced more warming, and that this will lead to cloud feedback becoming more positive. However, this may not necessarily happen this century.11) There are few climate sensitivity studies prior to Sherwood20 that take the pattern effect into account, and there’s considerable uncertainty about its magnitude. As a result, the uncertainty in the climate sensitivity as calculated from historical evidence is significantly larger in Sherwood20 than in the earlier studies.
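The basic energy-budget logic behind historical estimates can be sketched as follows, using round, illustrative numbers (the studies discussed below refine every term, and adjustments such as the pattern effect are ignored here):

```python
# Textbook energy-budget estimate from historical data.
f_2xco2 = 4.0  # W/m2, forcing from a doubling of CO2
dT = 1.0       # degC, warming since the late 1800s
dF = 1.8       # W/m2, change in total forcing over the same period
dN = 0.6       # W/m2, current top-of-atmosphere radiative imbalance

# Of the forcing dF, only (dF - dN) has been "realized" as warming so
# far, so the implied sensitivity per doubling is:
s_hist = f_2xco2 * dT / (dF - dN)
print(round(s_hist, 1))  # -> 3.3
```

The role of aerosols is visible here: a strongly negative aerosol forcing shrinks dF, which shrinks the denominator and pushes the estimated sensitivity up.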

Paleo evidence: Climate sensitivity estimated from previous warm and cold periods

Sherwood20 used one past cold period and one warm period to calculate the climate sensitivity based on paleo evidence (past climates). They also looked at one additional warm period (PETM – Paleocene-Eocene Thermal Maximum, 56 million years ago), but didn’t use the results from that period when calculating their main results.


Temperature trends for the past 65 million years. Figure from Burke et al 2018. The original image also contained different future projections, but I’ve removed that part of the image. Note that there are 5 different scales on the time axis.

The cold period that Sherwood20 looked at was the coldest period in the last ice age glaciation (Last Glacial Maximum, LGM), about 20,000 years ago (20 “kyr Before Present” in the graph), when, according to the study, Earth’s temperature was 5±1°C below pre-industrial temperature (so 6±1°C colder than today).

The warm period they looked at was the mid-Pliocene Warm Period (mPWP), about 3 million years ago (3 “Myr Before Present” in the graph), when the temperature was 3±1°C higher than pre-industrial (2±1°C warmer than today).

It may not be obvious that it’s possible to calculate the atmosphere’s climate sensitivity to CO2 based on what the temperature was in previous warm or cold periods. The reason it is possible is that we can also talk about the atmosphere’s climate sensitivity in a more general sense, without specifically taking CO2 into consideration.12) I’ll try to explain.

If the Earth receives more energy than it radiates back to space, the Earth’s temperature will rise. If climate sensitivity is high, the temperature will rise by a relatively large amount. If climate sensitivity is low, the temperature will rise less.

Regardless of what non-temperature factor causes a change in the balance between incoming and outgoing energy – whether it’s due to more greenhouse gases or a stronger sun, or to ice sheets reflecting more sunlight – the result is (approximately) the same. What matters (most) is the size of the change, not what causes it.

So if we know how much warmer or colder Earth was in an earlier time period, and if we also know how much more or less energy the Earth received at that time compared to now, then it should be possible to calculate how sensitive the atmosphere is to a change in incoming energy.

When we know this general climate sensitivity, and when we also know what CO2 does to the atmosphere’s energy balance, then it’s possible to calculate the atmosphere’s climate sensitivity to CO2.
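The reasoning in the last few paragraphs amounts to a one-line calculation. These are round, illustrative numbers, not the studies’ actual values:

```python
# If a past period was dT colder and the Earth then retained dF less
# energy, the general sensitivity is dT/dF (degC per W/m2); multiplying
# by the forcing of a CO2 doubling gives sensitivity per doubling.
dT = -5.0       # degC (e.g. an ice-age climate vs. pre-industrial)
dF = -8.0       # W/m2 (less energy retained than pre-industrial)
f_2xco2 = 4.0   # W/m2 per doubling of CO2

s = f_2xco2 * dT / dF
print(s)  # -> 2.5
```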

When it comes to what CO2 does to the atmosphere’s energy balance, it’s been found that a doubling of CO2 reduces radiation to space by about 4 watts per square meter (W/m2) over the entire atmosphere.13) Less radiation to space means that more energy stays in the atmosphere, raising temperatures until outgoing radiation again balances incoming radiation.
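A commonly used approximation for the forcing from a change in CO2 is the Myhre et al. (1998) formula, which gives roughly 3.7 W/m2 per doubling; Sherwood20 works with a slightly higher effective value of about 4 W/m2:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Myhre et al. (1998) approximation: dF = 5.35 * ln(C/C0) W/m2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2:
print(round(co2_forcing(568, 284), 2))  # -> 3.71
```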

All of this means that, in theory, even if the amount of CO2 in the atmosphere had been the same at some earlier time as it is today, while temperatures were quite different (because some other factor caused the temperature difference), it would still be possible to calculate the atmosphere’s climate sensitivity to CO2 – because we know approximately what a doubling of CO2 does to the atmosphere’s energy balance.

When scientists estimate the climate sensitivity from past warm or cold periods, they’re looking at very long time spans. This means that all the slow feedbacks have had time to take effect, and we can then find the “real” long-term climate sensitivity. Based on what I’ve written earlier, you would probably think this is the equilibrium climate sensitivity, ECS. However, in the definition of ECS, the Earth’s ice cover is kept constant, so ECS is in a way a theoretical – not a real – climate sensitivity. The real long-term climate sensitivity is called Earth system sensitivity, or ESS for short.

The climate sensitivity that Sherwood20 calculated is called effective climate sensitivity (S) and is an approximation of ECS. ECS is likely higher than S, and ESS is likely significantly higher than ECS (so S < ECS < ESS).

Even though ESS is the true very long-term climate sensitivity, S is actually the most relevant climate sensitivity for us, since we’re most interested in what will happen in the relatively near term – the next century or two. Sherwood20 writes:

Crucially, effective sensitivity (or other measures based on behavior within a century or two of applying the forcing) is more relevant to the time scales of greatest interest (i.e., the next century) than is equilibrium sensitivity[.]

As we’ve seen, Sherwood20 combined climate sensitivities from three different lines of evidence (meaning that they combined climate sensitivities that had been calculated in three different/independent ways). For historical and process evidence, Sherwood20 calculated effective climate sensitivity (S). But the type of climate sensitivity that is most easily calculated from paleo evidence is Earth system sensitivity (ESS). So to be able to directly compare, and then combine, the climate sensitivities from all three lines of evidence, they needed to convert from ESS to S.

According to Sherwood20, ESS was around 50% higher than ECS during the mPWP warm period. During the much warmer PETM, Sherwood20 assumed that ESS and ECS were approximately the same since there weren’t any large permanent ice-sheets during that warm period – and hence no significant changes in ice-cover.

For the more recent LGM, however, it was actually possible to calculate ECS directly (instead of ESS), by treating slow feedbacks as forcings rather than feedbacks.14)

Naturally, there’s significant uncertainty involved when calculating climate sensitivity based on previous warm and cold periods (paleo evidence). We don’t know what the Earth’s exact average temperature was, and we also don’t know exactly how much more or less energy the Earth received at that time compared to today (or surrounding time periods). Still, according to Sherwood20, the uncertainty in the climate sensitivity as calculated from paleo evidence isn’t necessarily greater than for the other lines of evidence.

Sherwood20’s conclusion

According to Sherwood20, “there is substantial overlap between the lines of evidence” used to calculate climate sensitivity, and the “maximum likelihood values are all fairly close”, as can be seen in graph (b) below. (However, the median value for historical evidence is surprisingly high, at 5.82°C.)

This is Figure 20 from Sherwood20 and shows their main results. The figure shows how likely different climate sensitivities (S) are for each of their three lines of evidence – in addition to the combined likelihood (black curve). The higher the curve goes, the greater the likelihood. We see that the most likely value is just under 3°C, but the median value is 3.1°C.


Gavin Schmidt, one of the co-authors of Sherwood20, has also written a summary of the study on RealClimate.

Critique of Sherwood20

Nic Lewis is a British mathematician and physicist who entered the field of climate science after being inspired by Stephen McIntyre. McIntyre had criticized the perhaps most important study behind the hockey stick graph used in IPCC’s third assessment report from 2001. (See this earlier post I wrote, which talks about the hockey stick controversy, among other things.)

In general, Lewis’ research points to a lower climate sensitivity than IPCC’s estimates.

Here’s a 2019 talk by Nic Lewis on the topic of climate sensitivity. I highly recommend it: [link]

Lewis has published a total of 10 studies related to climate sensitivity, and Sherwood20 referenced studies where Lewis was the main (or only) author 16 times. In September 2022, Lewis published a study, Objectively Combining Climate Sensitivity Evidence, where he discusses and corrects Sherwood20. I will refer to this new study as Lewis22.

In an article that summarizes Lewis22, Lewis argues that Sherwood20’s methodology of combining different lines of evidence to calculate the climate sensitivity is sound:

This is a strong scientific approach, in that it utilizes a broad base of evidence and avoids direct dependence on [Global Climate Model] climate sensitivities. Such an approach should be able to provide more precise and reliable estimation of climate sensitivity than that in previous IPCC assessment reports.

Lewis writes in the article that since 2015, he has published several studies that describe how to combine “multiple lines of evidence regarding climate sensitivity using an Objective Bayesian statistical approach”. Although Sherwood20 was well aware of Nic Lewis’ studies, Sherwood20 had chosen a (simpler) subjective method instead. According to Lewis, the subjective method “may produce uncertainty ranges that poorly match confidence intervals”. Lewis therefore decided to replicate Sherwood20 using the objective method. He also wanted to check Sherwood20’s estimates and data.

The authors of Sherwood20 had, however, made a deliberate choice to use the subjective method. In Schmidt’s article on RealClimate, we can see that Sherwood20 thought the subjective method was more appropriate:

Attempts to avoid subjectivity (so-called ‘objective’ Bayesian approaches) end up with unjustifiable priors (things that no-one would have suggested before the calculation) whose mathematical properties are more important than their realism.

By using the objective method instead of the subjective one, and also an appropriate likelihood estimation method,15) Lewis actually obtained a slightly higher climate sensitivity: the median increased from 3.10 to 3.23°C. Lewis comments:

As it happens, neither the use of a Subjective Bayesian method nor the flawed likelihood estimation led to significant bias in Sherwood20’s estimate of [the climate sensitivity] S when all lines of evidence were combined. Nevertheless, for there to be confidence in the results obtained, sound statistical methods that can be expected to produce reliable parameter estimation must be used.

However, after correcting some other errors and using newer data, including from IPCC’s latest scientific report from 2021 (AR6), the most likely value for the effective climate sensitivity fell to 2.25°C. By also using data that Lewis considered as better justified (not newer), the climate sensitivity was revised down by another 0.09°C, to 2.16°C.

The data changes made by Lewis22 are explained partly in the study and partly in an appendix to the study (Supporting Information, S5). In the appendix, in addition to discussing data values that he changed, Lewis also discusses some data values that he conservatively chose not to change – even though he thought Sherwood20’s values weren’t optimal. So a case could actually be made for an even lower effective climate sensitivity than the 2.16°C that Lewis found in his study.

The figure below is taken from Lewis’ summary of Lewis22 and shows Lewis’ results compared to Sherwood20’s:


In (a), (b), and (d), dashed lines represent results from Sherwood20, while solid lines are from Lewis22. In (b), we see that the three lines of evidence for calculating the climate sensitivity coincide nicely for Lewis22, while the variation is slightly larger in Sherwood20. Additionally, the uncertainty is lower (the curves are narrower) in Lewis22, especially for historical evidence (data from the past 150 years). PETM is the warm period that Sherwood20 didn’t include in the calculation of the combined climate sensitivity (PETM = Paleocene-Eocene Thermal Maximum, about 56 million years ago, when temperatures were about 12°C higher than now).

The details: Why Lewis22 found a lower climate sensitivity than Sherwood20

In this section, I’ll try to explain in more detail why Lewis22 found a lower climate sensitivity than Sherwood20. This is the most technical part of this article, and if you’re not interested in the details, you may want to skip ahead to the section on future emissions.

Values with blue text are the same as in IPCC’s latest assessment report (AR6). Values with yellow background in Lewis22 are conservative choices.16) Less conservative choices would have resulted in a lower climate sensitivity. The data changes in Lewis22 are discussed under Review and revision of S20 data-variable assumptions in Lewis22 and in section S5 of Supporting Information.

Historical evidence (data from the past 150 years)


| | Sherwood20 | Lewis22 |
|---|---|---|
| ΔF CO2 | 1.731 | 1.724 |
| ΔF Other well-mixed greenhouse gases | 0.969 | 1.015 |
| ΔF Ozone | 0.298 | 0.400 |
| ΔF Land use | -0.106 | -0.150 |
| ΔF Stratospheric water vapor | 0.064 | 0.041 |
| ΔF Black carbon on snow and ice | 0.020 | 0.109 |
| ΔF Contrails and induced cirrus | 0.048 | As Sherwood20 |
| ΔF Solar | 0.017 | 0.019 |
| ΔF Volcanic | -0.113 | 0.044 |
| ΔF Aerosols | -1.104 | -0.860 |
| ΔF (sum, difference in forcing, W/m2) | 1.824 | 2.390 |
| ΔN (W/m2) | 0.600 ± 0.183 | As Sherwood20 |
| ΔT (or ΔTGMAT, °C) | 1.03 ± 0.085 | 0.94 ± 0.095 |
| λhist (W/m2/°C) | -1.188 | -1.915 |
| 𝛾 (scaling factor) | Omitted (1.00) | 0.86 ± 0.09 |
| ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30 |
| Δλ (pattern effect, W/m2/°C) | 0.500 ± 0.305 | 0.350 ± 0.305 |
| Climate sensitivity, S (°C) | 5.82 | 2.16 |

ΔF, ΔN, and ΔTGMAT refer to differences between 1861-1880 and 2006-2018. ΔF is the difference in climate forcing (climate forcing (or radiative forcing) is something that forces the Earth’s energy balance to change, e.g. a stronger/weaker sun or more/less greenhouse gases in the atmosphere). ΔN is the change in radiative imbalance at the top of the atmosphere, measured in W/m2. A positive ΔN means that the radiative imbalance is greater now than at the end of the 19th century, and that the Earth is receiving more net energy now than then.

The exact ΔF values that Lewis22 uses can’t be found in IPCC AR6. The reason for this is that Sherwood20 and Lewis22 look at the period from 1861-1880 to 2006-2018, while the IPCC has been more interested in the period 1750 to 2019. Fortunately, though, IPCC has also included forcing values for 1850 and also for several years after 2000, so Lewis has been able to calculate ΔF values with good accuracy (derived from official IPCC values, see table AIII.3 here).

GMAT (Global Mean near-surface Air Temperature) is average air temperature above ground. GMST (Global Mean Surface Temperature) is the same but uses sea surface temperature instead of air temperature over the ocean. Sherwood20 converted ΔTGMST (0.94°C) to ΔTGMAT (1.03°C) based on results from climate models, which suggest that GMAT is higher than GMST. Lewis, however, points out that a higher GMAT than GMST hasn’t been observed in the real world, and that, according to the IPCC AR6, the best estimate median difference between GMST and GMAT is 0. Lewis22 therefore uses a value for ΔTGMAT that’s equal to ΔTGMST. (See Supporting Information, 5.2.1.)

When estimating effective climate sensitivity (S) from climate feedback (λ), a scaling factor 𝛾 (gamma) is needed (for historical and process evidence). This is because Sherwood20 used linear regression to estimate S based on the ratio of ΔN to ΔT, a relationship that isn’t strictly linear. The reason it’s not linear is that, according to most climate models, climate feedback (λ) weakens during the first decade following a sudden increase in CO2. (When λ weakens, it moves closer to 0 (becomes less negative), which means that the climate sensitivity, S, increases.)
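Putting the rows of the historical-evidence table together: λhist is derived from ΔF, ΔN, and ΔT, and S then follows from 𝛾, ΔF2xCO2, and the pattern-effect adjustment Δλ. This is my reconstruction of how the quantities combine, not necessarily the studies’ exact computation; it reproduces both medians to within rounding of the tabulated inputs:

```python
def s_historical(dF, dN, dT, f_2xco2, d_lambda, gamma=1.0):
    """Historical-evidence sensitivity from the table's quantities."""
    lam_hist = -(dF - dN) / dT             # climate feedback, W/m2/degC
    return -gamma * f_2xco2 / (lam_hist + d_lambda)

# Sherwood20 medians -> about 5.8 degC (table: 5.82; input rounding):
print(round(s_historical(1.824, 0.600, 1.03, 4.00, 0.500), 2))
# Lewis22 medians, with gamma = 0.86 -> about 2.2 degC (table: 2.16):
print(round(s_historical(2.390, 0.600, 0.94, 3.93, 0.350, gamma=0.86), 2))
```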

[Sherwood20] recognize this issue, conceding a similar overestimation of S, but neglect it, asserting incorrectly that it only affects feedback estimates from [Global Climate Models]. This misconception results in [Sherwood20]’s estimates of S from Process and Historical evidence being biased high.

Lewis22 used numbers from the two most recent generations of climate models (CMIP5 and CMIP6) to determine that 𝛾 = 0.86.

More technically, 𝛾 is the ratio of  to ΔF2xCO2. You can read more about this in Lewis22 under Climate Sensitivity Measures, F2xCO2 and its scaling when using Eq. (4) and Supporting Information (S1).

The reason for the relatively large change in aerosol forcing (ΔFAerosols) is quite elaborate and advanced, so for that I’ll have to refer you to the Supporting Information (5.2.3, starting from the third paragraph).

The change Lewis22 made for the pattern effect (Δλ) is in large part done because most datasets for sea surface temperature point to the so-called unforced component of the pattern effect (having to do with natural variation, see footnote 8) being very small. See Supporting Information, 5.2.4.

Process evidence (Adding up feedback effects):

| | Sherwood20 | Lewis22 |
|---|---|---|
| λ Water vapor + lapse rate | 1.15 ± 0.15 | As Sherwood20 |
| λ Cloud | 0.45 ± 0.33 | 0.27 ± 0.33 |
| λ Albedo | 0.30 ± 0.15 | As Sherwood20 |
| λ Planck | -3.20 ± 0.10 | -3.25 ± 0.10 |
| λ Other | 0.00 ± 0.18 | As Sherwood20 |
| λ (Sum, feedback effects, W/m2/°C) | -1.30 ± 0.44 | -1.53 ± 0.44 |
| 𝛾 (Scaling factor) | Omitted (1.00) | 0.86 ± 0.09 |
| ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30 |
| Climate sensitivity, S (°C) | 3.08 | 2.21 |

Lewis adjusted the cloud feedback (λCloud) down based on data from Myers et al 2021 (a more recent study than Sherwood20), which found a lower value for low-cloud feedback over the ocean (0-60° from the equator). According to Myers et al 2021, the low-cloud feedback strength is 0.19 W/m2/°C, while Sherwood20 had used 0.37. The difference of 0.18 is how much the total cloud feedback strength was adjusted down in Lewis22. See Supporting Information (5.1.3) for more details.

According to physical expectation (calculated from a formula) and also the latest climate models (CMIP6), the Planck feedback (λPlanck) is -3.3 W/m2/°C. Sherwood20 acknowledged that the physical expectation for the Planck feedback is -3.3, but they put more weight on the previous generation of climate models (CMIP5) and used -3.2 as the value for the Planck feedback. Lewis22 adjusted the Planck feedback halfway from Sherwood20’s estimate towards the value from physical expectation and CMIP6. See Supporting Information (5.1.2).

As a curiosity, the strength of the albedo feedback here has the same numerical value as the Earth’s albedo, namely 0.30. That’s merely a coincidence.

Paleo evidence (past cold and warm periods)

  1. The coldest period during the last ice age glaciation (Last Glacial Maximum, LGM)
| | Sherwood20 | Lewis22 |
|---|---|---|
| ζ (how much higher ECS is than S) | 0.06 ± 0.20 | 0.135 ± 0.10 |
| ΔFCO2 | -0.57 x ΔF2xCO2 = -2.28 | -0.57 x ΔF2xCO2 = -2.24 |
| ΔFCH4 | -0.57 | As Sherwood20 |
| ΔFN2O | -0.28 | As Sherwood20 |
| ΔFLand ice and sea level | -3.20 | -3.72 |
| ΔFVegetation | -1.10 | As Sherwood20 |
| ΔFDust (aerosol) | -1.00 | As Sherwood20 |
| ΔF (difference in forcing, W/m2) | -8.43 ± 2.00 | -8.91 ± 2.00 |
| ΔT (difference in temperature, °C) | -5.0 ± 1.00 | -4.5 ± 1.00 |
| α (state dependence) | 0.10 ± 0.10 | As Sherwood20 |
| λ (W/m2/°C) | -1.522 | -1.992 |
| ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30 |
| Climate sensitivity, S (°C) | 2.63 | 1.97 |
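The forcing rows can be checked by simple addition, and the S values recovered from the tabulated λ (the λ values themselves come from the papers' full calculation, which also accounts for ζ and α):

```python
# Sum the LGM forcing changes (W/m2) listed in the table above:
# CO2, CH4, N2O, land ice/sea level, vegetation, dust.
sherwood = [-2.28, -0.57, -0.28, -3.20, -1.10, -1.00]
lewis    = [-2.24, -0.57, -0.28, -3.72, -1.10, -1.00]

print(round(sum(sherwood), 2))  # → -8.43
print(round(sum(lewis), 2))     # → -8.91

# S then follows from the tabulated total feedback: S = -dF2x / lam.
print(round(-4.00 / -1.522, 2))  # → 2.63
print(round(-3.93 / -1.992, 2))  # → 1.97
```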

Sherwood20 calculated ζ (zeta; how much higher equilibrium climate sensitivity, ECS, is than the effective climate sensitivity, S) by looking at abrupt 4xCO2 simulations – computer simulations where the atmosphere’s CO2 level is instantaneously quadrupled. Sherwood20 then divided the resulting climate forcing (ΔF4xCO2) by 2 to find the climate forcing for a doubling of CO2 (ΔF2xCO2). Lewis22 notes that the scaling factor of 2 “while popular, is difficult to justify when the actual [scaling factor] has been estimated with reasonable precision to be 2.10”. However, Lewis did not use this method to calculate ζ – instead, he extracted the ζ value (0.135) directly from the results of climate models (or, to be more precise, from long-term simulations by climate models of warming after CO2 concentration was doubled or quadrupled, finding the same value in both cases). More details can be found under Climate Sensitivity Measures in Lewis22.

ΔF is the difference in climate forcing between the coldest period of the last ice age glaciation and pre-industrial times. ΔT is the temperature difference between these periods.

Sherwood20’s ΔT estimate was 5.0°C. However, the mean ΔT value for the studies that Sherwood20 based their estimate on was only 4.2°C (after, where necessary, adjusting values given in the studies to fairly reflect an observational (proxy-based) estimate of the temperature (GMAT) change). Lewis22 therefore adjusted Sherwood20’s ΔT estimate towards that value, from 5.0 to 4.5°C. See Supporting Information, 5.3.2.

The reason for Lewis22’s revision of ΔFLand ice and sea level was that Sherwood20 had omitted albedo changes resulting from lower sea levels. (The sea level was approximately 125 meters lower during the LGM than now, so Earth’s land surface was larger during the LGM than now. Land reflects more solar radiation than water, so the Earth’s albedo might have been higher during the LGM than what Sherwood20 assumed.) See Supporting Information, 5.3.2.

α (alpha) says something about how climate sensitivity varies based on the state of Earth’s climate system. What we’re most interested in is the climate sensitivity for the current and near-future states of the climate system. Since climate sensitivity may be different for warm periods than cold periods (possibly higher in warm periods), we need to convert the climate sensitivity for any past warm or cold period to the current climate sensitivity. The α parameter is included in an attempt to translate the climate sensitivity for the LGM cold period into the current climate sensitivity.

In contrast to Sherwood20’s assumption about the state dependence, Lewis22 writes that a 2021 study by Zhu and Poulsen “found that ocean feedback caused 25% higher LGM-estimated ECS.” This would bring the LGM climate sensitivity closer to the current climate sensitivity. For this reason (and one other) Lewis thought Sherwood20’s estimate for α was questionable. Still, he retained it. See Supporting Information, 5.3.2 (last paragraph).

  2. Mid-Pliocene Warm Period (mPWP)
| | Sherwood20 | Lewis22 |
|---|---|---|
| CO2 (ppm) | 375 ± 25 | As Sherwood20 |
| ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30 |
| ΔFCO2 (difference in forcing from CO2, W/m2) | 1.604 | 1.576 |
| ζ (how much higher ECS is than S) | 0.06 ± 0.20 | 0.135 ± 0.10 |
| fCH4 | 0.40 ± 0.10 | As Sherwood20 |
| fESS | 0.50 ± 0.25 | 0.67 ± 0.40 |
| ΔT (°C) | 3.00 ± 1.00 | 2.48 ± 1.25 |
| λ (W/m2/°C) | -1.190 | -1.686 |
| Climate sensitivity, S (°C) | 3.36 | 2.33 |

ΔT is the difference in temperature between the mid-Pliocene Warm Period (mPWP) and pre-industrial times. A positive value for the temperature difference means that mPWP was warmer. ΔFCO2 is the difference in climate forcing (from CO2) between mPWP and pre-industrial. fCH4 is the estimated forcing change from methane (and actually also N2O/nitrous oxide) relative to the forcing change from CO2 (so if the forcing change from CO2 is 1.6, then the forcing change from CH4 (and N2O) is 0.64). fESS is how much higher the climate sensitivity ESS is compared to the climate sensitivity ECS. The number 284 in the formula represents the pre-industrial CO2 level (measured in parts-per-million).

Lewis22 used a newer value for fESS than Sherwood20. Sherwood20 obtained the value of 0.50 (or 50%) from The Pliocene Model Intercomparison Project (which focuses on the Pliocene era) version 1 (PlioMIP1). The value of 0.67 (or 67%) used by Lewis22 was taken from PlioMIP2, a newer version of the PlioMIP project. See Supporting Information, 5.3.3.

The change Lewis22 made to ΔT was also based on PlioMIP2. Tropical temperatures during the mPWP were about 1.5°C higher than pre-industrial tropical temperatures. To determine the change in global temperature, Sherwood20 multiplied the change in tropical temperatures by 2 on the grounds that average global temperature has changed about twice as much as tropical temperature over the last 500,000 years. However, conditions on Earth were different during the mPWP three million years ago, with much less extensive ice sheets than at present. The PlioMIP2 project has used climate models to estimate that changes in global temperature may have been about 1.65 times higher than changes in tropical temperature during the Pliocene. Lewis22 used this value and consequently multiplied the tropical temperature change (1.5°C) by 1.65 instead of by 2. This changes ΔT from 3.00°C down to 2.48°C. See Supporting Information, 5.3.3.
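The ΔFCO2, λ and S rows above can be reproduced numerically. In the sketch below, the combination of the (1+ζ), (1+fESS) and (1+fCH4) factors is inferred so as to match the tabulated values — Sherwood20's own formula may be arranged differently — and the 284 ppm pre-industrial CO2 level enters via the logarithm:

```python
import math

# Sketch: mPWP estimate of S. The (1+zeta)(1+f_ess)(1+f_ch4) combination
# is inferred from the tabulated lambda and S values, not quoted verbatim.
def mpwp_S(dF2x, co2, dT, f_ch4, f_ess, zeta, co2_pre=284):
    dF_co2 = dF2x * math.log2(co2 / co2_pre)  # CO2 forcing vs pre-industrial
    lam = -(1 + zeta) * (1 + f_ess) * (1 + f_ch4) * dF_co2 / dT
    return -dF2x / lam

print(round(mpwp_S(4.00, 375, 3.00, 0.40, 0.50, 0.06), 2))   # → 3.36
print(round(mpwp_S(3.93, 375, 2.48, 0.40, 0.67, 0.135), 2))  # → 2.33
```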

  3. Paleocene–Eocene Thermal Maximum (PETM):

[Formula image not reproduced: the PETM estimate of S.]

Using β = 0, the formula simplifies, giving the following values:

| | Sherwood20 | Lewis22 |
|---|---|---|
| ζ | 0.06 ± 0.20 | 0.135 ± 0.10 |
| ΔT (°C) | 5.00 ± 2.00 | As Sherwood20 |
| fCH4 | 0.40 ± 0.20 | As Sherwood20 |
| CO2 (ppm) | 2400 ± 700 | As Sherwood20 |
| β | 0.0 ± 0.5 | As Sherwood20 |
| fCO2nonLog | Omitted (1.00) | 1.117 |
| Climate sensitivity, S (°C) | 2.38 | 1.99 |

ΔT refers to the difference in temperature between the Paleocene-Eocene Thermal Maximum (PETM) and the time just before and after the PETM. During the PETM, temperatures were about 13°C higher than pre-industrial temperatures. Just before and after the PETM, temperatures were about 5°C lower than this (8°C higher than pre-industrial). The number 900 in the formula represents the approximate CO2 level before and after the PETM. fCH4 is again the estimated difference in climate forcing from methane (and nitrous oxide) relative to the forcing change from CO2.

Sherwood20 assumes that the relationship between CO2 concentration and CO2 forcing is logarithmic. Lewis refers to Meinshausen et al. 2020, the results of which were adopted by the IPCC in AR6, which found that at high CO2 concentrations (such as during the PETM), the climate forcing is higher than if the relationship had been purely logarithmic. By using a formula from Meinshausen, Lewis found that the CO2 forcing during the PETM would have been some 11.7% higher than Sherwood20’s assumption of a purely logarithmic relationship. Therefore, Lewis22 used a value of 1.117 for fCO2nonLog. See Supporting Information, 5.3.4.
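With β = 0, the PETM numbers can be reproduced the same way as for the mPWP, using the ~900 ppm CO2 baseline before and after the event and, for Lewis22, the non-logarithmic correction fCO2nonLog. Again, the exact arrangement of the factors is inferred from the tabulated values, not quoted from the papers:

```python
import math

# Sketch: PETM estimate of S with beta = 0; the factor arrangement is
# inferred from the tabulated values rather than quoted from the papers.
def petm_S(dF2x, co2, dT, f_ch4, zeta, f_nonlog=1.0, co2_base=900):
    dF_co2 = dF2x * math.log2(co2 / co2_base) * f_nonlog
    lam = -(1 + zeta) * (1 + f_ch4) * dF_co2 / dT
    return -dF2x / lam

print(round(petm_S(4.00, 2400, 5.0, 0.40, 0.06), 2))          # → 2.38
print(round(petm_S(3.93, 2400, 5.0, 0.40, 0.135, 1.117), 2))  # → 1.99
```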

One would think that a higher CO2 forcing at high temperatures would imply a higher climate sensitivity during warm periods, but that’s not necessarily true, since feedback strengths may be different in warm and cold periods. However, if feedback strengths are the same in warm and cold periods, then a higher CO2 forcing implies a higher climate sensitivity.

Sherwood20 then assumes that the climate sensitivity ESS during the PETM is roughly the same as today’s equilibrium climate sensitivity, ECS. Uncertainty is accounted for with the parameter β, whose mean value is set to zero. Lewis agrees that “[a]ssuming zero slow feedbacks in the PETM (so ESS equals ECS) may be reasonable, given the lack of evidence and the absence of major ice sheets.” However, some studies (that rely on climate models) suggest that climate sensitivity during the PETM may have been higher than it is today. For this reason, Lewis thinks a positive mean value for β would be better. He nonetheless retained Sherwood20’s estimate of zero. See Supporting Information, 5.3.4.

This concludes the most technical part of this article. Next up: greenhouse gas emissions.

To determine how much the temperature will rise in the future (disregarding natural variability), it’s not enough to know what the climate sensitivity is – we also need to know approximately how much greenhouse gases will be emitted, so:

What will the emissions be?

The media and many scientists have long used something called RCP8.5 as a business-as-usual scenario for the effect of human activity (including emissions of greenhouse gases), and many still do. RCP stands for Representative Concentration Pathway, and the number (8.5 in this case) is how much greater the net energy input to the atmosphere is in the year 2100 compared to pre-industrial levels, measured in W/m2.

But RCP8.5 was never meant to be a business-as-usual scenario. In a 2019 CarbonBrief article about RCP8.5, Zeke Hausfather, another co-author of Sherwood20, writes:

The creators of RCP8.5 had not intended it to represent the most likely “business as usual” outcome, emphasising that “no likelihood or preference is attached” to any of the specific scenarios. Its subsequent use as such represents something of a breakdown in communication between energy systems modellers and the climate modelling community.

Sherwood20 also mentions that RCP8.5 should not be seen as a business-as-usual scenario, but rather as a worst-case scenario:

Note that while RCP8.5 has sometimes been presented as a “business as usual” scenario, it is better viewed as a worst case (e.g., Hausfather & Peters, 2020).

RCPs are often referred to as scenarios, which I also did earlier. But it may be better to think of an RCP as a collection of scenarios that all result in roughly the same net change in incoming energy in the year 2100. Thousands of different scenarios have been developed, and these can be used as inputs to climate models when they simulate future climates.

Plausible emissions scenarios

Roger Pielke Jr, Matthew Burgess, and Justin Ritchie published a study in early 2022 titled Plausible 2005–2050 emissions scenarios project between 2 °C and 3 °C of warming by 2100. In Pielke22, the different scenarios used in IPCC’s 2013 assessment report were categorized based on how well they were able to predict actual emissions from 2005 to 2020, in addition to how well their future emissions matched the International Energy Agency’s projections until 2050. Assuming that the scenarios that best matched actual and projected emissions will also be the ones that will be best at predicting emissions in the second half of the century, they found that RCP3.4 is the most likely (or plausible) RCP.

These scenarios (RCP3.4) are largely compatible with a temperature increase of between 2 and 3°C from pre-industrial times to 2100, with 2.2°C as the median value. Earth’s average temperature has increased by about 1.2°C since pre-industrial, so the median of 2.2°C corresponds to a temperature increase from today to 2100 of about 1.0°C.

After Pielke22 was published, Pielke Jr also looked at the scenarios used in IPCC’s latest assessment report (from 2021). He spoke about this in a talk in November 2022 (54:03-1:06:16), and, according to Pielke Jr, the median value for these newer scenarios is 2.6°C (rather than 2.2°C). This corresponds to a temperature rise of 1.4°C from today until 2100. In the following, I will use this more recent value.

In the talk, Pielke Jr says that RCP4.5 should now be considered a high-emissions scenario, while RCP8.5 and RCP6.0 are unlikely (58:12):

The high emissions scenarios are clearly implausible […]. What’s a high emissions scenario? Anything over 6 W/m2 […].

RCP 4.5 and the SSP2-4.5 are plausible high emissions scenarios. I know in the literature they’re often used to represent mitigation success. Today I think we can say based on this method that they’re in fact high-end scenarios. A business as usual – or consistent with current policy – scenario is a 3.4 W/m2 scenario. I will say that scenario is almost never studied by anyone.

Pielke22 doesn’t mention climate sensitivity explicitly, but the median equilibrium climate sensitivity (ECS) used in the latest generation of climate models is 3.74°C. ECS is likely higher than the effective climate sensitivity (S), which is the type of climate sensitivity that Sherwood20 and Lewis22 calculated. According to Sherwood20, ECS is 6% higher than S. According to Lewis22, ECS is 13.5% higher. Using Lewis22’s value of 13.5%, an ECS of 3.74°C corresponds to an effective climate sensitivity (S) of 3.30°C.

If the climate sensitivity S is closer to 2.16°C, as Lewis22 found, then the temperature increase from today to 2100 will be approximately 35% lower than what Pielke Jr found. This means that the temperature increase from today will be 0.9°C instead of 1.4°C (0.9°C higher than today will be 2.1°C above pre-industrial).
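The rescaling in the previous paragraph is just a linear proportion:

```python
# Rescale Pielke Jr's projected rise, assuming warming scales roughly
# linearly with effective climate sensitivity S.
rise_at_S330 = 1.4                        # degC to 2100, at implied S of 3.30 degC
rise_at_S216 = rise_at_S330 * 2.16 / 3.30  # Lewis22's S of 2.16 degC
print(round(rise_at_S216, 1))             # → 0.9
```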

An assumption in the RCP3.4 scenarios is widespread use of CO2 removal from the atmosphere in the second half of the century. Pielke22 did not assess whether that’s feasible:

Importantly, in the scenarios our analysis identifies as plausible, future decarbonization rates accelerate relative to the present, and many include substantial deployment of carbon removal technologies in the latter half of the century, the feasibility of which our analysis does not assess.

Given the recent rapid pace of technological development, I believe it to be highly likely that potent CO2 removal technologies will be developed this century. However, other methods may be more economically effective in limiting an unwanted temperature rise, e.g. manipulating the cloud cover, as Bjørn Lomborg suggests in an interview on Econlib (skip forward to 8:35 and listen for 2 minutes or read in footnote 17)).

In October 2022, The New York Times published an extensive article titled Beyond Catastrophe – A New Climate Reality Is Coming Into View. According to the author, David Wallace-Wells, recent evidence shows that the Earth is on track for a 2-3°C warming from the 1800s until 2100 instead of the previously feared 4-5°C. 2-3°C is the same as Pielke22 found.

According to The New York Times article, Hausfather contends that about half of the reduction in expected temperature rise is due to an unrealistic scenario being used previously (RCP8.5). The other half comes from “technology, markets and public policy”, including faster-than-expected development of renewable energy.

How much will temperatures rise by 2100?

Figure 1 (b) in Sherwood20 (graph (b) below) shows how much the temperature is likely to rise between 1986-2005 and 2079-2099, depending on effective climate sensitivity (S) and RCP scenario. This period is about 16 years longer than the 77 years from today until 2100, so the temperature rise for the remainder of the century will be less than the graph suggests – about 18% lower if we assume a linear temperature rise.
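The ~18% figure comes from comparing the midpoints of the two periods:

```python
# Compare the graph's span (midpoint to midpoint) with the remaining
# 77 years, assuming a roughly linear temperature rise.
span_graph = (2079 + 2099) / 2 - (1986 + 2005) / 2  # 93.5 years
span_rest = 2100 - 2023                             # 77 years
print(round(1 - span_rest / span_graph, 2))         # → 0.18
```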

[Figure 1 (b) from Sherwood20, not reproduced: projected warming between 1986-2005 and 2079-2099 as a function of S, for different RCP scenarios.]

We can see in the graph that if RCP4.5 is the correct emissions scenario and the effective climate sensitivity is 3.1°C, then the temperature will rise by about 1.8°C between 1986-2005 and 2079-2099. To estimate the temperature rise from today until 2100, we subtract 18% from 1.8°C, resulting in an estimated increase of about 1.5°C.

Using instead Lewis22’s effective climate sensitivity of 2.16°C with the RCP4.5 scenario, we can see from the graph that the temperature increase will be approximately 1.25°C. This corresponds to a temperature rise of 1.0°C from today until 2100.

RCP3.4 is not included in the graph, but we can assume that the temperature increase for RCP3.4 will be a few tenths of a degree lower than for RCP4.5, so perhaps 0.7-0.8°C, which also agrees quite well with what Pielke Jr found (0.9°C) after we adjusted for the climate sensitivity from Lewis22.

0.8°C corresponds to a temperature rise of 2.0°C since the second half of the 19th century and is identical to the Paris agreement’s two degree target. 2.0°C is also within the New York Times interval of 2-3°C, where – as for the two degree target – pre-industrial is the starting point.

Although Lewis22’s estimate of climate sensitivity may be the best estimate as of today, it’s not the final answer. Much of the adjustment made to Sherwood20’s estimate was based on more recent data, and as newer data becomes available in the future, the effective climate sensitivity estimate of 2.16°C is going to be revised up or down.

And Nic Lewis points out that:

This large reduction relative to Sherwood et al. shows how sensitive climate sensitivity estimates are to input assumptions.

But he also criticizes the IPCC for significantly raising the lower end of the climate sensitivity likely range (from the previous to the latest assessment report, the lower end of the likely range was raised from 1.5 to 2.5°C):

This sensitivity to the assumptions employed implies that climate sensitivity remains difficult to ascertain, and that values between 1.5°C and 2°C are quite plausible.

It will be interesting to see what the authors of Sherwood20 have to say about Lewis22.


1) From the Comment in Nature (which is written by five authors, four of whom are co-authors of Sherwood20):

On the basis of [Sherwood20] and other recent findings, the AR6 authors decided to narrow the climate sensitivity they considered ‘likely’ to a similar range, of between 2.5 and 4 °C, and to a ‘very likely’ range of between 2 °C and 5 °C.

The Comment in Nature is titled Climate simulations: recognize the ‘hot model’ problem, but it’s behind a paywall. Luckily, however, it’s also published on MasterResource.

2) Zeke Hausfather has written on CarbonBrief that for CO2 levels to remain at the same high level after a doubling of CO2, it’s necessary to continue emitting CO2. If humans stop emitting CO2, the atmosphere’s CO2 level will fall relatively quickly. Temperature, however, is not expected to fall, but will likely remain constant for a few centuries (disregarding natural variability).

3) It may not be entirely correct to say that the temperature will increase by 1.2°C if there are no feedback effects. The reason is that the so-called Planck feedback is included in the formula for the “no feedback” climate sensitivity:

ECSnoFeedback = −ΔF2xCO2 / λPlanck

However, the Planck feedback can be seen as a different kind of feedback than the other feedbacks mentioned here, and it’s sometimes called the Planck response or no-feedback response. Anyway, if we insert the values from the studies we’re going to discuss in this article, then for Sherwood20 (ΔF2xCO2 = 4.00 W/m2 and λPlanck = -3.20 W/m2/°C) we get that ECSnoFeedback = 1.25°C. For the other study, Lewis22 (ΔF2xCO2 = 3.93 W/m2 and λPlanck = -3.25 W/m2/°C) we get ECSnoFeedback = 1.21°C.

4) Pre-industrial has traditionally been defined as the average of 1850-1900. Sherwood20 and Lewis22 have used the average of 1861-1880 as pre-industrial, since it is far less affected by volcanic activity. IPCC has started to use 1750.

5) This is the theory, at least. However, Andy May has shown that the relationship between temperature and the atmosphere’s water content may be more complicated. His argument is presumably based on the best available data, but he also notes that the data for atmospheric water content is somewhat poor.

6) If we add up the strengths of all the feedback effects including the Planck feedback, we get a negative number. But when the Planck feedback is not included, then the sum is very likely positive. And if this sum is positive, it means that the climate sensitivity (ECS) is higher than 1.2°C (which is what the climate sensitivity would be with no feedback effects except the Planck feedback, see footnote 3).

7) The IPCC estimated equilibrium climate sensitivity (ECS). Sherwood20, on the other hand, calculated effective climate sensitivity (S). ECS is likely higher than S – 6% higher according to Sherwood20, 13.5% higher according to Lewis22 (which is the study that corrects Sherwood20).

8) From Sherwood20:

Among these distinct feedbacks, those due to clouds remain the main source of uncertainty in λ, although the uncertainty in the other feedbacks is still important.

λ (lambda) is the strength of a feedback effect. A positive λ means that the corresponding feedback effect increases climate sensitivity. Negative λ does the opposite. If the value of λ is known for every type of feedback, then the climate sensitivity can easily be calculated from the sum of the feedback strengths:

ECS = −ΔF2xCO2 / λ

9) Sherwood20 writes:

However, uncertainty in radiative forcing [during the past 150 years] is dominated by the contribution from anthropogenic aerosols, especially via their impact on clouds, which is relatively unconstrained by process knowledge or direct observations (Bellouin et al., 2020).

10) Andrew Dessler has been lead author and co-author of several studies on the pattern effect. In a couple of YouTube videos (one short and one long), you can watch his explanation of the pattern effect in relation to committed warming (however, he doesn’t use the term pattern effect in the short video).

An example Dessler uses to illustrate the pattern effect is from the oceans around Antarctica:

The existence of present day cold sea surface temperatures in these regions while the overlying atmosphere is warming due to global warming favors the buildup of low clouds over the region. These clouds reflect sunlight back to space and tend to cool the planet.

From Dessler’s short video (3:08)

When the ocean temperature eventually increases, fewer clouds are expected, which will lead to faster warming.

Nic Lewis (who criticized Sherwood20) has written an article which criticizes the study that Dessler talks about in the videos (Zhou et al 2021, titled Greater committed warming after accounting for the pattern effect). Although Lewis’ article, which was published on Judith Curry’s climate blog (Climate Etc), isn’t peer reviewed, he has also published a study on the pattern effect, which is peer reviewed.

The dataset for sea surface temperature (SST) used in Zhou et al implies a relatively large pattern effect. However, Lewis notes that other sea surface temperature datasets imply a much smaller pattern effect. The reason for the discrepancy is that sea surface temperature measurements historically have been quite sparse. The uncertainty is therefore substantial.

Lewis also criticizes Zhou et al for not distinguishing between the forced and unforced pattern effect. The component of the pattern effect that is forced has to do with the effect of greenhouse gases. The unforced component, on the other hand, has to do with natural variability. And the two components have different implications for future committed warming. Whereas the greenhouse gas-related component will have little effect on warming this century, the natural variations-component may have a larger effect on warming this century.

Lewis found that the natural variations-component is very close to zero if the following two conditions are met: (1) a different sea surface temperature dataset is used than the dataset Zhou et al used, and (2) a reference period is used that’s outside of the hiatus (1998-2014) – a period of relatively low temperature rise, which may have been caused by a cooling effect from natural variability. It’s thus uncertain whether the pattern effect will have any significant impact on temperatures this century.

11) IPCC on the pattern effect (latest assessment report):

[T]here is low confidence that these features, which have been largely absent over the historical record, will emerge this century[.]

12) From Wikipedia:

Although the term “climate sensitivity” is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C).

13) Sherwood20 uses the value 4.00 ± 0.30 W/m2, while Lewis22 uses 3.93 ± 0.30 W/m2 for the climate forcing for doubled CO2, which accords with the AR6 assessment (uncertainties here are ± 1 standard deviation).

Some skeptics argue that the atmosphere’s absorption of CO2 is saturated. This presumably means that the climate forcing for doubled CO2 would be close to zero, but according to Nic Lewis, this is wrong. The following quote is from a 2019 talk by Lewis (14:00):

Another point that is often argued is that the absorption by carbon dioxide is saturated – that it can’t get any stronger. Unfortunately, that is not the case. However, it is a logarithmic relationship, approximately, so it increases slower and slower. Roughly speaking, every time you double carbon dioxide level, you get the same increase in the effect it has in reducing outgoing radiation. And this decrease in outgoing radiation is called a radiative forcing, and it’s just under 4 W/m2 of flux for every time you double carbon dioxide. And again, this is pretty well established.

And a little earlier in the same talk (11:12):

The black is the measured levels – this is measured by satellite at the top of the atmosphere. […] And the red lines are from a specialized radiative transfer model, and you can see how accurately they reproduce the observations. And what that reflects is that this is basic radiative physics, it’s very soundly based. There’s no point in my view disputing it because the evidence is that the theory is matched by what’s actually happening.

The figure that he’s talking about is this one:

The figure shows how CO2 and other (greenhouse) gases in the atmosphere absorb infrared light from the ground at various wavelengths in the absence of clouds (above the Sahara). Without an atmosphere, the outgoing radiation would follow the top dashed line marked by the temperature 320 K (47°C).
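The “same increase per doubling” behaviour Lewis describes can be sketched with the approximately logarithmic forcing formula, here using the AR6-consistent 3.93 W/m2 per doubling (real forcing formulas, e.g. Meinshausen et al. 2020, add corrections at high concentrations):

```python
import math

F2X = 3.93  # W/m2 per CO2 doubling (the AR6-consistent value used in Lewis22)

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Approximately logarithmic: every doubling adds the same F2X.
    return F2X * math.log2(c_ppm / c0_ppm)

print(round(co2_forcing(560), 2))   # → 3.93 (one doubling)
print(round(co2_forcing(1120), 2))  # → 7.86 (two doublings)
```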

14) Lewis writes:

A significant advantage of the LGM transition is that, unlike more distant periods, there is proxy evidence not only of changes in temperature and CO2 concentration but also of non-CO2 forcings, and that enables estimation of the effects on radiative balance of slow (ice sheet, etc.) feedbacks, which need to be treated as forcings in order to estimate ECS (and hence S) rather than ESS.

15) The method that Sherwood20 had used to calculate the likelihood of different climate sensitivities was invalid in some circumstances. Among other things, the method assumed a normal (Gaussian) distribution of all input parameters. But for historical evidence (data for the past 150 years), this wasn’t the case since the climate forcing from aerosols wasn’t normally distributed.

To triple-check that Sherwood20’s method was invalid, Lewis calculated the probability distribution using three different methods, and they all gave the same result.
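A toy Monte Carlo illustrates why the distributional assumption matters. The historical-evidence estimate has the form S = ΔF2xCO2·ΔT/(ΔF − ΔN); the ΔT, aerosol and imbalance numbers below are loosely based on values quoted elsewhere in this article, while the greenhouse-gas forcing is purely illustrative:

```python
import random
random.seed(0)

# Toy Monte Carlo: with a skewed (always-negative) aerosol forcing, the
# resulting distribution of S has a long upper tail, which a Gaussian
# treatment of the inputs would understate.
def sample_S(n=100_000):
    out = []
    for _ in range(n):
        dT = random.gauss(0.94, 0.095)        # historical warming, degC
        ghg = random.gauss(3.80, 0.30)        # GHG + other forcing, W/m2 (illustrative)
        aer = -abs(random.gauss(0.95, 0.55))  # aerosol forcing: skewed, never positive
        dN = random.gauss(0.60, 0.20)         # planetary radiative imbalance, W/m2
        denom = ghg + aer - dN
        if denom > 0.5:                       # discard unphysically small divisors
            out.append(3.93 * dT / denom)
    return sorted(out)

s = sample_S()
print(round(s[len(s) // 2], 1))         # median S
print(round(s[int(0.95 * len(s))], 1))  # 95th percentile shows the long upper tail
```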

The method used by Sherwood20 led to an underestimation of the probability of high climate sensitivity values:

In Lewis’ figure (not reproduced here), the dashed lines show Sherwood20’s results for historical evidence, while the solid lines show Lewis22’s correction.

Correcting this error in Sherwood20 caused the median for the combined climate sensitivity to increase from 3.10 to 3.16°C. (The further increase from 3.16 to 3.23°C was due to Lewis applying the objective Bayesian method rather than the subjective Bayesian method.)

See Likelihood estimation for S in Lewis22, Supporting Information (S2) and Appendix B in Lewis’ summary of Lewis22 for more details.

16) Conservative choices in Lewis22 (Supporting Information) – S20 is Sherwood20:

I make no changes to S20’s assessments of other cloud feedbacks. However, I note that Lindzen and Choi (2021) cast doubt on the evidence, notably from Williams and Pierrehumbert (2017), relied upon by S20 that tropical anvil cloud feedback is not, as previously suggested (Lindzen and Choi 2011; Mauritsen and Stevens 2015), strongly negative.

The resulting median revised total cloud feedback estimate is 0.27 − almost double the 0.14 for nine CMIP6 GCMs that well represent observed interhemispheric warming (Wang et al. 2021).

S20’s GMST [=Global Mean Surface Temperature] estimate was infilled by kriging, which does not detect anisotropic features. Recently, a method that does detect anisotropic features was developed, with improved results (Vaccaro et al. 2021a,b). Infilling the same observational dataset as underlies S20’s infilled estimate, the improved method estimates a 9% lower GMST increase. Nevertheless, I retain S20’s estimate of the GMST rise, resulting in a GMAT [=Global Mean Air Temperature] ΔT estimate of 0.94 ± 0.095 [°C].

S20’s 0.60 Wm−2 estimate of the change in planetary radiative imbalance equals that per AR6. However, AR6 (Gulev et al. 2021 Figure 2.26(b)) shows that, excluding series that are outliers, the AR6 0-2000m [Ocean Heat Content] estimate is middle-of-the-range in 2018 but at its bottom in 2006, hence yielding an above average increase over that period. Nevertheless, I retain S20’s estimate.

Moreover, Golaz et al. (2019) found that an advanced [Global Climate Model] with historical aerosol [Effective Radiative Forcing] of −1.7 Wm−2, tuned on the pre industrial climate, would only produce realistic GMAT projections if the aerosol forcing is scaled down to ~−0.9 Wm−2 (and, in addition, its climate sensitivity is halved).

Conservatively, in the light of the foregoing evidence pointing to aerosol forcing being weaker than implied by simply revising B20’s βlnL−lnN estimate, I adopt a modestly weakened aerosol ERF estimate of −0.95 ± 0.55 Wm−2 over, as in B20, 1850 to 2005-15. This implies a 5–95% uncertainty range of −1.85 to −0.05 Wm−2, which has the same lower bound as AR6’s estimate, and is likewise symmetrical.

Scaled to the period 1861-1880 to 2006-2018, the median then becomes 0.86 instead of 0.95, according to Lewis22.

In two [Global Climate Models], Andrews et al. (2018) found a 0.6 weakening in [the pattern effect] when using [a newer sea-ice dataset]. Although the [newer] sea-ice dataset […] is no doubt imperfect […], its developers argue that it is an improvement on [the earlier version]. However, I consider that there is too much uncertainty involved for any sea-ice related reduction to be made when estimating the unforced Historical pattern effect.

In view of the evidence that pattern effect estimates from [Atmospheric Model Intercomparison Project II]-based simulations are likely substantially excessive, and that the unforced element is probably minor and could potentially be negative, it is difficult to justify making a significantly positive estimate for the unforced element. However, a nominal 0.1 ± 0.25 is added to the 0.25 ± 0.17 forced pattern effect estimate, which reflects the substantial uncertainty and allows not only for any unforced pattern effect but also for the possibility that some other element of the revised Historical evidence data-variable distributions might be misestimated.

I revise S20’s central LGM [=Last Glacial Maximum] cooling estimate of −5 [°C] to −4.5 [°C], primarily reflecting, less than fully, the −4.2 [°C] adjusted mean ΔTLGM estimate of the sources cited by S20, and increase the standard deviation estimate to 1.25 [°C] so as to maintain the same –7 [°C] lower bound of the 95% uncertainty range as S20’s.

S20 use the single year 1850 as their preindustrial reference period for GHG concentrations, whereas for observational estimates of temperature change preindustrial generally refers to the average over 1850−1900. For consistency, the S20 GHG [=Greenhouse Gas] forcing changes should therefore use mean 1850−1900 GHG concentrations. Doing so would change the CO2 ERF from –0.57x to –0.59x ΔF2xCO2, as well as marginally changing the CH4 and N2O ERFs. However, conservatively, I do not adjust S20’s LGM forcing estimates to be consistent with the LGM ΔT measure.

S20 adopt the estimate of vegetation forcing in the Kohler et al. (2010) comprehensive assessment of non-greenhouse gas LGM forcing changes, but use a central estimate of –1.0 Wm−2 for aerosol (dust) forcing in place of Kohler et al.’s –1.88 Wm−2. This seems questionable; Friedrich and Timmermann (2020) adopt Kohler et al.’s estimate, while pointing out that estimates of its glacial-interglacial magnitude vary from ~0.33 to ~3.3 Wm−2. I nevertheless accept S20’s estimate of dust forcing[.]

S20 assume that climate feedback in equilibrium (λ’) strengthens by α for every −1 [°C] change in ΔT, resulting in the 0.5 α TLGM² term in (11), reducing LGM-estimated ECS. Contrariwise, Zhu and Poulsen (2021) found that ocean feedback caused 25% higher LGM-estimated [climate sensitivity] ECS. Moreover, a significant part of the reduction in mean surface air temperature at the LGM is due to ice-sheet caused increased land elevation, which would weaken λ’ compared to in non-glacial climates. Although S20’s [α = 0.1 ± 0.1] estimate appears questionable, I retain it.

Although the Tierney et al. (2019) 1.4 [°C] tropical SST warming estimate appears more reliable than S20’s 1.5 [°C], I retain the latter but multiply it by the 1.65 PlioMIP2 ratio, giving a revised GMAT ΔTmPWP of 2.48 [°C].

S20 assessed a [2400 ± 700] ppm distribution for CO2 concentration in the PETM relative to a baseline of 900 ppm, implying a [1.667 ± 0.778] ΔCO2PETM distribution. That covers, within its 90% uncertainty range, a concentration ratio range (1 + ΔCO2PETM) of 1.39 to 3.95. The CO2 concentration estimates considered by S20, even taking extremes of both their PETM and Eocene ranges, constrain (1 + ΔCO2PETM) within 1.4 to 5. Using instead that range would lower PETM based S estimates. Nevertheless, I retain S20’s ΔCO2PETM distribution.
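The quoted 1.39-to-3.95 ratio range is just the 90% interval of a normal distribution with S20's mean and standard deviation, shifted by one; a sketch, assuming normality:

```python
from statistics import NormalDist

# S20's PETM CO2 change relative to the 900 ppm baseline:
# mean (2400 - 900)/900 = 1.667, sd 700/900 = 0.778
dco2 = NormalDist(mu=1500 / 900, sigma=700 / 900)
lo, hi = dco2.inv_cdf(0.05), dco2.inv_cdf(0.95)
# The concentration ratio is 1 + dCO2
print(f"90% range of 1 + dCO2: {1 + lo:.2f} to {1 + hi:.2f}")  # 1.39 to 3.95
```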

While Meinshausen et al. assume a fixed ratio of CO2 ERF to stratospherically-adjusted radiative forcing, there is modeling evidence that fast adjustments become more positive at higher temperatures (Caballero and Huber 2013), which would further increase CO2 ERF change in the PETM. I make no adjustment for this effect.

To account for forcing from changes in CH4 concentrations, S20 apply the same 0.4 fCH4 factor to the CO2 forcing change as for the mPWP, with doubled uncertainty, although noting that the tropospheric lifetime of CH4 could be up to four times higher given sustained large inputs of CH4 into the atmosphere (Schmidt and Shindell 2003). I retain S20’s fCH4 distribution, although doing so may bias estimation of S upwards.

S20 assume that ESS [=Earth System Sensitivity] for the PETM was the same as present ECS, representing uncertainty regarding this by deducting a [0 ± 0.5] adjustment (β) from ESS feedback when estimating ECS feedback, λ’. Assuming zero slow feedbacks in the PETM (so ESS equals ECS) may be reasonable, given the lack of evidence and the absence of major ice sheets. However, Caballero and Huber (2013) and Meraner et al. (2013) both found, in modeling studies, substantially (~50%) weaker climate feedback for climates as warm as the PETM. Zhu et al. (2019) found, in a state-of-the-art GCM, that ECS was over 50% higher than in present day conditions, with little of the increase being due to higher CO2 ERF. I therefore consider that it would be more realistic to use a positive central estimate for β. Nevertheless, I retain S20’s estimate.

17) Here’s (roughly) what Bjørn Lomborg said:

If [you] want to protect yourself against runaway global warming of some sorts, the only way is to focus on geoengineering, and […] we should not be doing this now, partly because global warming is just not nearly enough of a problem, and also because we need to investigate a lot more what could be the bad impacts of doing geoengineering.

But we know that white clouds reflect more sunlight and hence cool the planet slightly. One way of making white clouds is by having a little more sea salt over the oceans stirred up. Remember, most clouds over the oceans get produced by stirred-up sea salt — basically wave-action putting sea salt up in the lower atmosphere, and those very tiny salt crystals act as nuclei for the clouds to condense around. The more nuclei there are, the whiter the cloud becomes, and so what we could do is simply put out a lot of ships that would basically [stir] up a little bit of seawater — an entirely natural process — and build more white clouds.

Estimates show that the total cost of avoiding all global warming for the 21st century would be in the order of $10 billion. […] This is probably somewhere between 3 and 4 orders of magnitude cheaper — typically, we talk about $10 to $100 trillion of trying to fix global warming. This could fix it for one thousandth or one ten thousandth of that cost. So, surely we should be looking into it, if, for no other reason, because a billionaire at some point in the next couple of decades could just say, “Hey, I’m just going to do this for the world,” and conceivably actually do it. And then, of course, we’d like to know if there’s a really bad thing that would happen from doing that. But this is what could actually avoid any sort of catastrophic outcomes[.]

128 responses to “How much warming can we expect in the 21st century?”

  1. Ireneusz Palmowski

    In winter, the troposphere looks quite miserable against the stratosphere. With such a thin troposphere, is it more likely that there will be a significant increase in winter temperatures, or a rapid decrease?

  2. Bill Fabrizio

    Thank you, Hakon.

  3. Nice and detailed article. A very good presentation of current knowledge about climate sensitivity.

    The problem is that it is all guesswork. There are a lot of implicit assumptions in calculating the ECS, starting with having all the forcings identified and correctly specified. Then assuming that all past warming is due to CO2 and no other cause. Then assuming that there actually is an ECS, and that the climate response to CO2 remains constant at different CO2 concentrations and different climate states.

    Then we don’t know what our future emissions will be. The rate of increase has slowed a lot in the last 10 years, and I don’t remember anyone predicting that.

    The end result is that we have no idea what the temperature will be in 2100, unless they continue to manipulate the temperature records, and then it will be the temperature they decide.

My guess is an increase of 0.15°C/decade, or 1.2°C. As good as anyone’s guess (2 cents).

    • Clyde Spencer

      “The rate of increase has slowed a lot in the last 10 years, and I don’t remember anyone predicting that.”

      And I have demonstrated that the anthro’ CO2 reductions in 2020 resulted in no measurable change in either CO2 concentration or temperature.

    • Richard Greene

      Javier commented:

      “The end result is that we have no idea what the temperature will be in 2100, unless they continue to manipulate the temperature records, and then it will be the temperature they decide.”

      We may have no idea, but I do:

The climate in 2100 will be warmer, unless it is colder. That is what climate history tells us. And no one today can provide a better prediction than that one, which I made in 1997. Nobel Prize pending.

    • Richard Greene

      I agree with Javier

      We don’t know the CO2 level in 1850 with measurements

      We don’t know the GLOBAL average temperature in 1850 with measurements

      We don’t know how much warming since 1850 was caused by CO2

      We don’t know the ECS of CO2

      We don’t know future CO2 emissions

      Therefore, no one knows the global average temperature in 2100.

      It could be warmer, or colder.

      If there is a betting pool on the ECS of CO2, I’m betting on +1.0 degrees C.

      “No one knows” is the correct answer. No one even knows if 2100 will be warmer or colder than 2023.

      The most basic climate science is missing from this article: Knowing when to say “no one knows that”

      The ECS of CO2 could be estimated from lab spectroscopy.
      But an ECS can NOT be estimated by looking at global warming in the past, such as the warming since 1975. There is no way to determine what percentage of the past warming was caused by CO2. Most ECS estimates simply assume all past warming, such as since 1975, was caused by CO2. That is a worst case ECS of CO2 assumption, not an accurate ECS of CO2 estimate.

This is a very long article that failed to reach the correct conclusion about the average temperature in 2100: no one knows.

  4. Typo: In section ‘Plausible emissions scenarios’, 7th paragraph, “According to Sherwood, ECS is 6% higher than ECS.” That should be “… higher than S.”

    • Typo: In section ‘Historical evidence”, 7th paragraph, “More technically, 𝛾 is the ratio of to ΔF2xCO2.” Missing ?? after “ratio of”. The ?? should be ΔF2xCO2 regress (where 2XCO2 is subscript, regress is superscript)

  5. Bruce Zeitlin

What happened to the Medieval Warm Period and the Roman Warm Period? How convincing are the arguments that they were not worldwide, if that’s the case?

As a novice, can I ask if the sensitivity is linear against CO2 concentration? So if you double from, say, 200 to 400, would you get the same sensitivity doubling from 400 to 800? I think I read that somewhere.

  7. Ireneusz Palmowski

    The thin troposphere also causes maximum surface heating in summer when there are no aerosols in the air, because the surface absorbs almost all of the sun’s radiation.
    I’m sure the high temperature in June in Canada shows surface radiation during the long period of high pressure over Canada.

  8. Clyde Spencer

    ” Earth’s surface albedo says how much solar radiation the Earth reflects directly back to space. Currently, it’s around 0.3, which means that the Earth reflects 30% of the incoming solar radiation.”

    First off, that estimate has low precision. The implied precision is only +/-5%. If that number (30% +/-5%) is used in calculation, then the accepted Rule of Thumb for propagation of precision in calculations is that the final answer should only have one significant figure. A more rigorous analysis of the propagation of uncertainty should take into account the uncertainty of 0.05.

Secondly, using the strict definition of “albedo” rather than the combined diffuse (albedo) reflectance and specular reflectance provides a lower-bound on the amount of light that will not contribute to warming. That is because 71% of the Earth is water, and water that is not shaded by clouds reflects light in accordance with Fresnel’s formula for specular reflectance, which varies with the angle of incidence. Basically, that means that open water varies from about 2% surface specular reflectance at solar noon on the equator (plus variable diffuse reflectance from suspended material) to a maximum of 100% at the terminator. Typically, the reflectance grows very rapidly with an angle of incidence above 60 degrees.
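The point about implied precision can be made concrete. With the usual S/4 spherical averaging (the 1361 W/m² TSI figure is a nominal assumed value, not taken from the comment), an albedo uncertainty of ±0.05 propagates to roughly ±17 W/m² of absorbed solar flux. A first-order illustration:

```python
S = 1361.0       # nominal total solar irradiance, W/m^2 (assumed value)
albedo = 0.30    # the "30%" figure from the quoted passage
d_albedo = 0.05  # implied precision of a one-significant-figure albedo

absorbed = S / 4 * (1 - albedo)   # global-mean absorbed solar flux
d_absorbed = S / 4 * d_albedo     # first-order propagated uncertainty
print(f"absorbed: {absorbed:.0f} +/- {d_absorbed:.0f} W/m2")  # 238 +/- 17 W/m2
```

A ±17 W/m² spread dwarfs the ~1 W/m² radiative imbalance under discussion, which is the substance of the precision complaint.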

    • Clyde:
      “the combined diffuse (albedo) reflectance and specular reflectance”

      “Typically, the reflectance grows very rapidly with an angle of incidence above 60 degrees.”

      Clyde, it is very much agreed, and
      the combined diffuse (albedo) reflectance and specular reflectance of the smooth surface planets and moons:
      includes the strong specular reflection, which has to be taken in consideration.
We have now the Φ – the solar irradiation accepting factor (the planet surface spherical shape and roughness coefficient), which is, for smooth-surface planets and moons,
Φ = 0.47
      So, the not reflected portion of the incident solar flux, for those smooth surface planets and moons is:
      Φ(1 -a)S
      where a is the average surface diffuse albedo,
      and the S is the solar flux.
      The above provides a lower-bound on the amount of light that will contribute to warming – the not reflected portion of the incident solar flux, which is referred to the planet’s cross-section cycle.
      When calculated for Earth:
0.47(1 − 0.306) × 1362 W/m² = 444 W/m²

Thus the combined reflection of the SW solar flux of 1362 W/m² incident on planet Earth is:
1 − 444 W/m² / 1362 W/m² = 1 − 0.326 = 0.674
or 67.4%
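The commenter's arithmetic can be reproduced directly (note that Φ = 0.47 is the commenter's own coefficient, not a standard published quantity):

```python
PHI = 0.47   # the commenter's "solar irradiation accepting factor"
a = 0.306    # average surface diffuse albedo, per the comment
S = 1362.0   # incident solar flux, W/m^2

not_reflected = PHI * (1 - a) * S          # portion said to contribute to warming
combined_reflection = 1 - not_reflected / S
print(f"not reflected: {not_reflected:.0f} W/m2")          # 444 W/m2
print(f"combined reflection: {combined_reflection:.1%}")   # 67.4%
```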


      • Clyde Spencer

        “… the combined diffuse (albedo) reflectance and specular reflectance of the smooth surface planets and moons:”

The surfaces of the inner rocky planets are not smooth at the wavelength of light. They are composed of rough, rocky outcrops and granular materials that are dominated by diffuse reflectance; individual mineral grains may be smooth enough to reflect specularly, but they have random orientations, leading to reflected rays leaving in all directions, thus diffuse reflectance. Albeit, for high angles of incidence, the bi-directional reflectance distribution function (BRDF) has a forward lobe, but not as strong as for snow. Venus is completely covered with clouds, and there is strong scattering, not only from the surface of droplets, but from total internal reflection, giving an almost Lambertian (diffuse) reflectance.

        In order to measure any specular reflectance of solar bodies, they have to be between Earth and the sun, leaving out the satellites of the gas giants. Diffuse reflectance can only be measured as retro-reflectance, which is a fraction of the hemispherical reflectance and depends on the solid-angle of the viewing sensor. That is, for a homogeneous diffuser, such as a cloud, one has to integrate over the surface area of the cloud for the total reflectance.

        I think that what you missed is that Earth is unique in having lots of water, which, except for the special condition of a ray normal to the surface, reflects light AWAY from the surface where it impinges, and is very difficult to measure because it requires a sensor that can essentially look into the sun without being blinded (saturating, if not destroying the sensor).

Reflectance, diffuse or specular, is always directional. At the microscopic scale, every reflecting infinitesimal point reflects specularly.
        Since the incident on the planetary spot solar flux is directional, the SW reflection from that point necessarily has a forward lobe.

We have now the Φ – the solar irradiation accepting factor (the planet surface spherical shape and roughness coefficient), which is, for smooth-surface planets and moons,
Φ = 0.47
        Please pay attention to the coupled physic term:
        Φ(1 -a)S
It works perfectly well for all smooth-surface planets and moons without an atmosphere, or with a thin atmosphere, Earth included.

    • Bill Fabrizio

      Clyde … Enjoyed your paper, and your humor.

> The albedo commonly used by climatologists is on the low-end of a range of measured values. Even the high-value albedos are too low to be used as an estimate of total terrestrial reflectivity because albedo excludes almost all specular reflections. Albedo is an under-estimate of the reflectivity of Earth surficial materials; it is only appropriate for clouds, and to a lesser extent, for bare sands. At the very least, specular-reflection has to be taken into account, as well as diffuse-reflectance, for the oceans. The most accurate representation would be through measured BRDF for all angles of solar incidence for the major surficial materials found on Earth.

      Just curious, any guesses on the ‘high end’?

      • Clyde Spencer

        More like veiled sarcasm than actual humor. :-)

        My gut tells me that the underestimate is only a few percent — probably a single digit. However, the situation is complex and what is needed is a computer program to step through the entire illuminated hemisphere, minute by minute, over a 24-hour period, through an entire year, using a map of the land surface classification with the appropriate BRDF for the class of diffuse-reflecting material on the surface. (Plants often have a waxy leaf surface that will enhance specular reflectance, and some have leaves that follow the sun.) For the open oceans, Fresnel’s Formula can be substituted for a BRDF, with the addition of the diffuse reflectance of the suspended particles. It is complicated further because the clouds change with the time of day and season, as well as anomalous weather such as hurricanes/typhoons. It is not a trivial problem!

However, until someone does the simulation, perhaps for several years for which GOES satellite imagery of clouds is available, we won’t know just how severe the underestimate is.

      • Bill Fabrizio

        Clyde …

> One frequently sees reference to the nominal 30% albedo of Earth with respect to the energy budget and alleged anthropogenic global warming, although the CRC Handbook of Chemistry and Physics lists a value of 36.7% in recent editions.

        Why do you think they have a higher rating?

  9. Another well contested assumption is that the atmospheric CO2 content is controlled by human emissions. Berry, Salby, Harde and others have presented good analysis that refutes this assumption.

No. That’s a fact. We emit double the amount that remains in the atmosphere, and the rest of the system is a net sink. Berry, Salby, and Harde are absolutely wrong on that.

  10. Clyde Spencer

    “… Sherwood20 assumes that areas that have experienced little warming will eventually ‘catch up’ with areas that have experienced more warming, and that this will lead to cloud feedback becoming more positive.”

    The unstated assumption is that the areas with low warming are just random variations that will even out with time. However, we are confident that the high rates of warming in the Arctic are the result of low water vapor concentration. I suspect that areas that are humid will show little warming. Thus, if that proves to be true, it will falsify Sherwood’s unexamined assumption.

    I’d like to see some of these researchers, who are being paid well to do research, look at the warming by Köppen climate classification. That would, I think, be instructive as to what factors might be contributing to natural variation.

Clyde, there is one such study of change in Köppen-defined climate zones.
“This work used a global temperature and precipitation observation dataset to reveal variations and changes of climate over the period 1901–2010, demonstrating the power of the Köppen classification in describing not only climate change, but also climate variability on various temporal scales. It is concluded that the most significant change over 1901–2010 is a distinct areal increase of the dry climate (B) accompanied by a significant areal decrease of the polar climate (E) since the 1980s. The areas of spatially stable climate regions for interannual and interdecadal variations are also identified, which have practical and theoretical implication.”

      My synopsis

      • Clyde and Ron. The only valid way to approach “Climate Change”. No physical phenomena or processes are functions of GAST. None.

  11. Clyde Spencer

    “Earth’s average temperature has increased by about 1.2°C since pre-industrial, so the median of 2.2°C corresponds to a temperature increase from today to 2100 of about 1.0°C.”

    The unstated assumption is that all of the warming that has taken place since industrialization is a result of anthropogenic emissions. It is reasonable to assume that the increase in CO2 is anthropogenic. However, the last glacial maximum occurred about 20,000 years ago, and it wasn’t anthropogenic emissions that terminated that last continental glaciation. Therefore, to predict future temperatures based on CO2 increases alone, also has to assume that whatever was driving warming prior to the Industrial Age, suddenly and inexplicably ceased to make a contribution, and will not resume in the near future. How does one justify assuming that natural, long-term variations stop, coincident with industrialization?

    • There is good reason to not assume the increase in CO2 is all human caused. Berry, Salby, and Harde have published well reasoned arguments refuting it.

Your comment is spot on. There is more to the calculation than the minuscule contribution of man. In particular, the temperature measurement system is highly flawed, as has been shown by the work of Anthony Watts, and the totality of the errors all go in one direction: up! Also, it has been alleged that CO2 was selected because of its relationship to GDP. An examination of the UN policies, which clearly drive this, clearly shows a preference toward totalitarianism: a complete change in how we live, including a reduction of industry and even the destruction of our energy and food sources. Mr. Spencer alludes to other drivers that are being ignored; these must include galactic cosmic rays, which affect cloud cover; solar magnetic intensity and polarity; particulates in our area of the Milky Way (wherever that may be at any point in time); and you can’t forget solar inertial motion, potentially resulting in the disordered trefoil-like shaped solar orbit pattern, à la Charvátová. The possibilities are endless. We should not limit ourselves to what the policy drivers claim.

    • ” How does one justify assuming that natural, long-term variations stop, coincident with industrialization?”

      Argumentum ad ignorantiam.

Let me rephrase that: They do not know of anything, other than CO2 and industrial aerosols, capable of changing the long-term climate over periods shorter than a millennium, when Milankovitch forcing starts becoming significant.

        This is because they ignore the cumulative nature of solar indirect effects, that act precisely in the multidecadal-multicentennial timeframe. Right in between multidecadal oceanic oscillations and Milankovitch forcing.

      • Clyde Spencer

        Thank you for rephrasing.

  12. Clyde Spencer

    “…, the effective climate sensitivity estimate of 2.16°C is going to be revised up or down.”

It certainly is a moving target. I’m reminded of the exchange in The Hitchhiker’s Guide to the Galaxy, where the computer, Deep Thought, tells the representatives of the Union of Amalgamated Philosophers that they will have millions of years to debate what the final answer will be while it is compute-bound working on the answer, which will guarantee them job security.

  13. Interesting. So when exactly were these “preindustrial” times which you use as a benchmark for global temperature increases? The Roman or Medieval Warm Periods, or the more recent Little Ice Age, from which many would argue we are continuing to recover? The man-made CO2 thesis of climate sensitivity overlooks the impacts of fluctuations in solar radiation and the circulation of heat between the poles and the equator, and natural variability overall which is effectively excluded under the UNFCCC.

    • You ask one question that I felt begged for a reply.
      “The Roman or Medieval Warm Periods, or the more recent Little Ice Age, from which many would argue we are continuing to recover?”

      If you go to this thread you find that RWP and MWP occurred at an Eddy cycle peak, whereas the LIA occurred at the root of the cycle.
More: that cycle can be followed back in time another 6 kyrs. As a system it appears unstable and drifts from one extreme to the other nearly every half-kyr. So presently we are drifting towards the next ‘warm’ peak, at which point it triggers a reversal.
The warming may bring its problems – or benefits to some – but the bigger problem is what the reversal will be like. To science that is still around the corner and quite invisible. But in hindsight – and there is hindsight for open eyes – one can learn what that can be like.

      If interested see this: See the updated bottom graph. Your three dates are set against the Eddy cycle (red sine-wave), the other earlier dates also appear. As well as the civilisations of the times.
      The bigger question is: Will our advanced technology be our saving, or our undoing?

  14. Ulric Lyons

So the circulation models expect increasingly positive Northern Annular Mode conditions with rising CO2 forcing. That is why the Met Office tell us our summers will get hotter and drier (though the summers have in fact become on average wetter since 1995).

    Positive North Atlantic Oscillation conditions will drive a colder AMO, causing an increase in low cloud cover, slower surface wind speeds over the oceans, and reduced water vapour.

    To explain the warming of the AMO from 1995, and the associated decline in low cloud cover, increased ocean surface wind speeds, and increase in low-mid troposphere water vapour, we need to be looking for a net decline in total climate forcings.

  15. A combination of statistical uncertainties or an uncertain combination of statistics? From a pragmatic standpoint, looks like the latter. Conclusion: we have no idea what warming (or cooling) will occur 100 years from now.

Propose new strategy: dump the “climate change” philosophy and concentrate heavily on energy economics. If it looks reasonably cost-competitive, try it. If not, then no money from the taxpayer, but you are free to use your own money.

  16. Meanwhile, supposed fears of global warming demand that the EPA shutter industries with energy-killing regulations that – by their own estimate – “would only reduce sea levels by 2 sheets of paper by 2050.”

  17. Mike Jonas

    Thanks, Judith, for a long thoughtful and detailed study.

    Let’s start with “There are several ways to calculate climate sensitivity. We can base it on the historical record of the past 150 years, where we know approximately how temperature, greenhouse gases, aerosols etc have changed (historical evidence). Or we can estimate the strengths of the various known feedback mechanisms and sum them (process evidence). Or it can be calculated based on how much average temperature has changed since the last ice age glaciation or other warm or cold periods in the Earth’s history (paleo evidence). A fourth possibility is to use climate models – large computer programs that attempt to simulate Earth’s past and future climate under different assumptions.”:
1. You can’t estimate ECS from “the historical record of the past 150 years” because you don’t know by how much temperature was affected by natural variation.
    2. You can’t estimate ECS from the strengths of the various known feedback mechanisms, because (a) the known feedback mechanisms are not at all well understood (just see the IPCC uncertainties re clouds for example), and (b) the unknown feedback mechanisms are unknown.
    3. You can’t estimate ECS from the change in temperature since the last ice age glaciation or other warm or cold periods, because you don’t know how much effect natural variation had on temperature over those periods.
    4. You can’t estimate ECS from climate models because the models do not have natural variation built into them. (I had an online conversation with an associate of Steven Sherwood a long time ago, in which I argued that various natural factors should feature in the models and they argued that they couldn’t be coded into the models because those factors were not understood).

    So the whole of Sherwood20 is on a very shaky foundation indeed. One might reasonably say that it has no foundation at all. As Javier has succinctly commented in this thread: The problem is that it is all guesswork.

Moving on to clouds: The IPCC says, rather weakly, that it’s very likely (over 90%) that the feedback effect from clouds is positive, and that therefore changes in cloud cover as a result of increased temperature will amplify the temperature increase. Their “90%” probability has to be nonsense, given all their own acknowledged uncertainty. My own study of the data suggests the exact opposite: “The implication is that the climate models and the IPCC over-estimate the effect of the atmospheric CO2 change over the study period on radiative forcing by a factor of about two or maybe much more, and their future projections of global warming caused by man-made CO2 are therefore likely to be much too high. There is an indication of this in recent acknowledgements that the climate models have been running too ‘hot’ [9], possibly because of problems rendering clouds [10].”

    • Richard Greene

      The Mike Jonas 1:26am July 9 comment would make a better article THAN THE ARTICLE HE IS COMMENTING ABOUT.

    • Since when has science stopped because something couldn’t be demonstrated or measured? There are thousands of articles and hundreds of books about dark matter and dark energy and nobody has been able to demonstrate that they exist, much less measure anything about them.

      Many scientists are just highly specialized charlatans. No credit should be given to them until they can provide evidence of what they say.

      That the planet is warming is not evidence of its cause.

  18. Geir Aaslid

    Excellent study, but based on guesswork as Javier has commented.
    This guesswork is entirely based on the political assumption that there is no natural variability.
    Indeed, job security for the climate industrial complex.

  19. Pingback: How much warming can we expect in the 21st century? - Climate-

  20. Paul Quondam

One minor problem with all climate models is that they predict an atmosphere void of greenhouse gases will have a lapse rate of ca. 9 K/km. Climate science appears unaware that Maxwell (Kinetic Theory, 1866) and Boltzmann (Statistics, 1876) provided mathematical proofs that equilibria are isothermal, with or without gravitational fields present. More recently, Spencer (2009), Brown (2012), and Wijngaarden & Happer (2023) have offered supporting heuristic arguments.

I didn’t know that. Do you have a link to any climate model run output with an atmosphere void of greenhouse gases that shows a large lapse rate?

      • What’s important is the difference between the energy the earth absorbs from the sun and what it radiates into space. That’s called radiative forcing. That is between 0.5 and 1 W/m2. It’s difficult to measure because it’s the difference between two large numbers.

        It’s not theoretical. We know that because the earth’s temperature is rising. It’s caused by CO2 because for the temperature of the earth to be continually rising, the cause must be continually increasing. Of all the possible causes, only CO2 has been continually increasing.

      • Paul Quondam

        I’ve been privately informed that all climate models start with the presumption that the adiabatic lapse rate (g/Cp) is the physical lapse rate, i.e. that the concentrations of all GHGs can be set to zero without changing the thermal profile. For dry nitrogen, Cp = 1040 J/kg/K (g = 9.80665 m/s2).

        I’ve put together an alternative Python program in which temperature gradients are proportional to radiative and convective fluxes, themselves functions of temperature. Rather than conventional RCE equilibria, I believe the troposphere is better described by a thermodynamic nonlinear dissipative process. Unfortunately, the mathematics is not for those seeking intuitive insight.

  21. Pingback: How Much Warming Can We Expect in the 21st Century? • Watts Up With That?

  22. The author writes: “Given the recent rapid pace of technological development, I believe it to be highly likely that potent CO2 removal technologies will be developed this century.”
    This is nonsense because the issue has never been the technology per se – the issue is the ginormous scale of CO2 emissions as well as extant CO2 resident in the atmosphere.
    2021 CO2 emissions were a shade under 38 gigatons – that is to say 861 trillion moles of CO2. Or put another way: 19.3 quadrillion liters of pure gaseous CO2 at stp.
    Energy to move air = 101 J/(L·atm), i.e., 101 J per liter at 1 atm; and 3.6 million joules = 1 kWh.
    So if we were to just pump 1 year’s worth of pure CO2, it would require 541 Terawatt-hours – doable right?
    Except there is no way to access pure CO2. CO2 at 420 ppm = 0.00042 CO2 concentration in the atmosphere. Meaning to get at the CO2 – we must pump 1/0.00042 liters of atmosphere to get 1 liter of CO2 = 2380 liters of air.
    Multiply 541 Terawatt-hours by that 2380:1 air-to-CO2 ratio = 1,288,339 Terawatt-hours.
    The entire world’s electricity usage in 2021 was 25,300 Terawatt-hours.
    So we would need to increase global annual electricity production by over 50x just to pump enough air to access 1 year’s worth of CO2 emissions.
    But capture at the source, you say. Capture at the source requires compression. Compression multiplies the energy usage in direct proportion (L·atm). Let’s say we can magically losslessly compress to 100 atmospheres at all sources and also magically not have to transport or have other costs like container weight, compressor weight, etc. This drops the electricity requirement from 1,288,339 TWh to 54,100 TWh – still over twice the entire world’s electricity consumption.
    Nah Gah Happen.
    The author’s fundamental error in the above statement is belief that CO2 capture is a technology problem – it is not.
    CO2 capture is a scale problem.
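The comment’s arithmetic can be reproduced in a few lines of Python. This is only a sketch that checks the figures quoted above (emissions, molar values, and the 101 J/(L·atm) pumping figure are the comment’s own inputs), not an engineering analysis of any real capture process:

```python
# Back-of-envelope check of the CO2-pumping energy figures in the comment above.
# All inputs are the comment's own numbers; results are order-of-magnitude only.

EMISSIONS_GT = 38        # 2021 CO2 emissions, gigatonnes (comment's figure)
MOLAR_MASS_G = 44.01     # g/mol for CO2
MOLAR_VOL_L = 22.4       # liters per mole at STP
J_PER_L_ATM = 101.325    # joules per liter-atmosphere
J_PER_KWH = 3.6e6        # joules per kilowatt-hour
CO2_PPM = 420            # ambient CO2 concentration

moles = EMISSIONS_GT * 1e15 / MOLAR_MASS_G       # grams -> moles, ~8.6e14 mol
liters_co2 = moles * MOLAR_VOL_L                 # ~1.9e16 L of pure CO2 at STP

# Energy to pump that volume at 1 atm, expressed in terawatt-hours
twh_pure = liters_co2 * J_PER_L_ATM / J_PER_KWH / 1e9    # ~540 TWh

# Ambient air is only 420 ppm CO2, so ~2380 L of air per L of CO2
air_ratio = 1e6 / CO2_PPM
twh_air = twh_pure * air_ratio                   # ~1.3 million TWh

print(f"pure CO2: {twh_pure:,.0f} TWh")
print(f"ambient air: {twh_air:,.0f} TWh")
```

The numbers land close to the comment’s 541 TWh and ~1.29 million TWh, confirming the scale argument is internally consistent.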

    • Except for capturing point source emissions, direct air CO2 extraction is impractical at the scale needed.

      If we absolutely have to do something to stabilize GHG in the atmosphere there are other options.
      The same technology that perfected industrial-scale monoculture with GMO will let us use the ocean’s gyres and currents to power giant bio-reactors of genetically engineered algae and plankton that could supercharge CO2 (and other GHG) sequestration. That massive biomass would then be diffused into other GMO protein-rich species of marine animals engineered to metabolize it. In effect terraforming a complete food web while offsetting a meaningful quantity of GHGs.

      • Total human crop biomass is only ~2% of the global total plant biomass.
        In order to “terraform a complete food web” – it would require increasing overall global plant biomass by at least 500%, in turn meaning “human crops” would have to increase 25000%.
        Obviously, the “bioreactors” of your imagination are utterly impossible: you are equally mistaking a scale problem for a technology problem – only one involving oceanic plants instead of air pumps.
        Secondly: simply growing more plant biomass does not itself sequester anything. Just as the present day increased carbon uptake of the plants on Earth is cyclically returned, so too would the “bioreactors” of your speculation also merely cyclically process CO2.
        The “organic origin” geologists all believe that the liquid and gas “fossil fuels” we use today arose due to a very unusual geological situation: a large oceanic region with an effectively completely anoxic base that still managed to have an enormous amount of fertile water layers over it, plus a very long period of time (tens to hundreds of millions of years).
        The algae/whatever would grow on the top layers, die, then fall into the anoxic deeps to accumulate. This former oceanic area underlies the Middle East to Russia to Nigeria to Eastern North America region hence oil and natural gas availability.
        So how would the completely anoxic oceanic deeps (more or less present now) plus the massive ecological activity above (not the least bit present now) be human engineered?
        The good news is that high CO2 levels go a long way towards improving oceanic productivity; we will simply have to wait a few dozen or hundred million years for that activity to ramp up enough to take in the accumulating CO2 bonanza (from the oceanic plant POV).

      • This is but one of many geoengineering schemes being worked on. Genetic engineering has opened up a whole range of synthetic lifeforms that can be designed to terraform the planet, and I see no indication of it slowing down. Historical comparisons to natural evolution are invalid.

        “If we wanted to remove about 47 gt/year, we would need about 1% of the ocean’s surface assimilating CO2 at full speed.”

        “phytoplankton engineered to convert sunlight and CO2 into a chemically stable bioplastic. Once the phytoplankton is in carbon sequestration mode, they would put all of their photosynthetic energy into sequestering CO2 into a stable bioplastic/carbon sequestration medium. This material could sink to the bottom of the seafloor where it would remain sequestered or be harvested.”

      • People are going to genetically engineer whatever they want; what is far, far less clear is if these organisms will be able to
        1) displace existing ecosystem participants
        2) grow in an ocean devoid of nutrients (which most of the deep sea and surface-layer ocean are)
        So yet again, you and these techno-topian genetic engineers are solving the wrong problem.

  24. These two books have something to say about this topic:

    Judith Curry’s “Climate Uncertainty and Risk”
    Javier Vinos’s “Climate of the Past, Present and Future”

  25. Pingback: How Much Warming Can We Expect in the 21st Century? • Watts Up With That? - Lead Right News

  26. David Wojick

    Regarding this: “Simply put, this means that (in the very long term) Earth’s temperature will rise between 2.5 and 4.0°C when the amount of CO2 in the atmosphere doubles.”

    Sensitivity is not a prediction. It is the supposed amount of warming if nothing else happened, hence an abstraction. Temperature is not determined solely by CO2, which is a necessary condition for sensitivity being a prediction. That condition is far from met. Even the IPCC lists ten or so anthro forcings with CO2 being just one of them. (Then there are the natural forcings which they ignore.)

    This also means sensitivity cannot be found by observation as that would require first knowing all the other forcings, which we do not.

    • Once again you prove you have no idea what you are talking about.

      It’s true, there are other things that can drive the temperature of the earth higher, but for the temperature of the earth to be continuously increasing, whatever is causing it must be continually increasing. Out of all the possibilities, only CO2 is continually increasing and is the cause. It’s as simple as that.

      David you’re going to have to find another source of income. The climate change denial grift is over — especially because of what has occurred this year. The climate scientists were right and you and the shrinking band of climate deniers are wrong.

  27. “When it comes to what CO2 does to the atmosphere’s energy balance, it’s been found that a doubling of CO2 reduces radiation to space by about 4 watts per square meter (W/m2) over the entire atmosphere”

    This is specifically not true, and it is not how it works. First of all, the general estimate is more like 3.7 W/m2 for 2xCO2, but that is a marginal issue.

    There are three different kinds of “forcing” – instantaneous, adjusted and effective. “Adjusted” means allowing for the stratosphere to cool, which happens quickly. “Effective” means for the troposphere to heat, which takes a while longer.

    “Instantaneous” however, and that is the starting point, is the reduction of upwelling radiation PLUS the increase in downwelling radiation at the tropopause. This figure is significantly higher than the reduction of emissions TOA.

    So as it is defined, 2xCO2 forcing is about 3.7W/m2, adjusted btw. However the actual reduction of emissions TOA is much smaller, about 2W/m2 only.

    So for one, CO2 forcing is not what you think it is. Two, it is not straightforward. Three, it is more a theory than a fact, and it has its issues.
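For reference, the ~3.7 W/m2 figure discussed here traces to the widely used simplified forcing expression of Myhre et al. (1998), dF = 5.35 ln(C/C0). A one-line check (a sketch of the standard formula only; it says nothing about the instantaneous-vs-adjusted distinction debated above):

```python
import math

def co2_forcing(c, c0):
    """Simplified CO2 radiative forcing in W/m2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c / c0)

# A doubling of CO2 (e.g. 280 ppm -> 560 ppm):
print(round(co2_forcing(560, 280), 2))  # -> 3.71 W/m2
```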

  28. Pingback: How Much Warming Can We Expect in the 21st Century? • Watts Up With That? - News7g

  29. Pingback: Climate Fakery Part 8 – Newsfeed Hasslefree Allsort

  30. Pingback: How much warming can we expect this century? – haakonsk's blog

  31. Robert David Clark

    Ms. Curry,
    I guess that there is a possibility that the only item affecting, what you call, Climate Change is the rise and fall of the average surface temperature of the sun.
    This drop lasted about 10,000 years.
    The last rise lasted about 4,000 years.
    Nature will keep the ocean levels constant, thus the average surface temperature, for about the next 80,000 years.

  32. I have a different theory about climate change that comes from validating, and reverse engineering a simple model I’ve discovered, a model which accurately predicts global temperature with stunning detail, and without the aid of anthropogenic GHG’s. The detailed prediction comes, not from the complexity of the model – which is quite simple – but from the information buried in the sunspot data used to drive the model.

    What I believe I’ve discovered is that all of the warming since the 1980’s is almost entirely the result of increased solar magnetic field intensity, which started rising in the mid 1930’s and remained high and relatively constant until 2010. On a global scale, and in response to solar forcing, the earth behaves like an integrator, or at least a single-pole low-pass filter with a very long time constant, so the rise in temperature everyone wants to attribute to CO2 is simply the integration of a sustained increase in forcing caused by the solar modulation of galactic cosmic rays. Integrate a step and you get a ramp.
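The integrator analogy is easy to illustrate numerically. Below is a minimal sketch (the time constant is illustrative, not fitted to any climate data) showing that a discrete single-pole low-pass filter driven by a unit step rises almost linearly, i.e. like a ramp, while the elapsed time is much smaller than the time constant:

```python
# Illustrative only: a single-pole low-pass filter with a time constant much
# longer than the observation window responds to a step almost like an
# integrator, producing a near-linear ramp.

tau = 100.0          # time constant (years), illustrative
dt = 1.0             # time step (years)
alpha = dt / tau

y = 0.0
response = []
for t in range(60):          # 60 "years" of a unit step input
    y += alpha * (1.0 - y)   # discrete single-pole low-pass update
    response.append(y)

# For t << tau the output grows almost linearly: successive yearly
# increments shrink only slowly (by a factor (1 - alpha) per step).
print(response[9], response[19], response[39])
```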

    There are nice reconstructions of solar modulation in: Vieira, L. E., Solanki, S. K., Krivova, N. A., & Usoskin, I. (2011), and also Usoskin, Bazilevskaya, A., & Kovaltsov, (2011).

    The empirical model doesn’t explain how the solar magnetic fields impact temperature, but I’m partial to the theory that solar modulation of GCRs impacts low cloud formation: Svensmark, H., & Friis-Christensen, E. (1997)

    Temperatures would have started rising before 1980, except that solar magnetic field contributions were offset by declining TSI contributions. The slight cooling period prior to 1980 was the result of irradiance contributions to temperature, which peaked just after 1940, falling slightly faster than the rise due to solar magnetic fields. The irradiance contributions largely leveled off after 1980 allowing temperature increases due to solar modulation to rise unabated.

    From a few different variants of model I’ve developed, I’ve learned many things including:
    1. The CO2 contribution to global warming is at most 0.3°C, and is probably much less than that.
    2. The model explains the two post-2000 “pauses”, and predicts that the current pause we’re in will last into the mid 30’s with a chance of slight cooling.
    3. It doesn’t matter what cycle 25 does because there’s a 13-year lag in the earth’s response to solar forcing, so the near term prediction will not change.
    4. I have another theory around major ENSO events, but I’ll let you look at the predictions and decide for yourself if there’s a correlation.

    Here’s a png of the prediction with major ENSO events highlighted.

    The python code to implement the model variants, and a brief model description, are publicly available at:

    I’m not quite ready to release the details on how I separated, the TSI and magnetic field contributions to global temperature, but I will tell you that the model on the github site is responding to both.

  33. Is there research showing how the sensitivity of temperature to CO2 varies with the amount of CO2 in the atmosphere?

    Do studies computing sensitivity using historical data take this into account?

  34. Ireneusz Palmowski

    “Average global temperature is a concept invented by and for the global-warming hypothesis. It is more a political concept than a scientific one.”
    Seasonal changes and regional disparities, as Milloy points out, severely compromise the concept of a singular ‘global temperature.’ For instance, temperatures rise during the Northern Hemisphere’s summer due to increased land exposure to sunlight, a fact conveniently overlooked by the climate alarmist contingent.

  35. Ireneusz Palmowski

    Significant changes in climate will occur as a result of changes in the strength of the magnetic field of the Sun and Earth.

    • Curious George

      One picture is worth one thousand sciences.

    • Who is to say that the effects of the polarity reversal of the Sun’s magnetic field that marked Solar Cycle 24’s midpoint last April, 2014 will not act in concert with other natural forces to amplify the effects of the Sun on the Earth’s future climate? We just don’t know: lag times and amounts could be large or small. But, what has Western academia ever done in the past that would give us confidence in their belief – despite all of the uncertainties that exist – that they can foresee the future and are competent to inform politicians how to change it?

      • The ONLY way the Sun directly impacts the earth’s climate is by the energy it emits. Do you have any evidence that the reversal of the sun’s magnetic polarity has changed the sun’s solar irradiance? The answer is NO.

        Solar irradiance has been flat to declining over the last 40 years. The Earth’s temperature has been rising. It is IMPOSSIBLE for the Sun to be the cause of that temperature rise.

        BTW changes in the Earth’s magnetic field can’t cause it either. Neither can the other idiotic theories I’ve seen here. Some idiot comes up with something that sounds like it has a scientific basis and throws it out there for the feeble minded to buy into.

        The bottom line is that for a permanent change in the climate, the Earth’s energy has to increase. That only happens by some source adding energy to the earth. If a theory can’t credibly explain how that happens, it’s junk science.

      • “It is IMPOSSIBLE for the Sun to be the cause of that temperature rise.”

        That word does not exist in science, because it implies we have complete knowledge of all the possible effects, direct and indirect, by which the Sun could have affected climate.

        It is perfectly POSSIBLE for the Sun to have contributed to the late-20th-century warming.

        Look at this:
        Kobashi, T., Box, J.E., Vinther, B.M., Goto‐Azuma, K., Blunier, T., White, J.W.C., Nakaegawa, T. and Andresen, C.S., 2015. Modern solar maximum forced late twentieth century Greenland cooling. Geophysical Research Letters, 42(14), pp.5992-5999.

        It is obvious that if the Modern Solar Maximum was capable of forcing Greenland cooling, it could equally well have forced mid-latitudes warming by the same mechanism.

      • You need to read up on the First Law of Thermodynamics. It’s not just a principle of physics. It is a foundational principle. That means a lot of the other theories of physics rely on it. Relativity and quantum mechanics come to mind.

        What the First Law of Thermodynamics tells us is that for the Earth’s temperature to rise the energy of the Earth must increase. That energy has to come from somewhere. There is no other way for the Earth’s temperature to rise.

        The only way the Sun impacts the Earth’s temperature is by radiating energy to it. If you have a theory that claims that something happening on the Sun is causing the Earth’s temperature to rise, then the energy the Sun is radiating to the Earth must increase. Not only that. The Sun’s radiant energy must be continuously increasing because the temperature of the Earth is continuously increasing. An intermittent effect doesn’t cut it.

        We can measure the amount of energy that the Sun radiates to the Earth. I posted a graph yesterday that showed the solar irradiance has been on a downward trend for the last 60 years. At the same time, the Earth’s temperature has been on an upward trend. If the Sun were controlling Earth’s temperature it should be on a downward trend — not upward. That means it’s impossible for solar activity to be the cause of Earth’s current temperature trend.

        If not the Sun, then what? That’s pretty easy to determine. It has to be something that can impact the Earth’s temperature and that is continually increasing. Of the several possibilities, only atmospheric CO2, courtesy of burning fossil fuels, is continually increasing.

        Determining what’s happening and what’s causing it is simple. Predicting how fast the temperature rises and the consequence of that rise isn’t. That’s where the models come in. Whether the models are accurate or not does not discredit what’s happening and what is causing it.

      • The first law of thermodynamics has nothing to do with it anymore than it has to do with the effect on the temperature of a bucket of water hanging from a water pump versus hanging from the limb of a tree…

      • The First Law of Thermodynamics has everything to do with Climate Change.

        The first two laws of thermodynamics are all about how energy behaves. Climate Change is all about energy transfers. You clearly have no idea what you are talking about and should refrain from commenting on Climate Change.

        Here is what I call Climate Change for Dummies. It has been more widely read than a lot of the books and papers promoted on this site.

      • “The only way the Sun impacts the Earth’s temperature is by radiating energy to it. If you have a theory that claims that something happening on the Sun is causing the Earth’s temperature to rise, then the energy the Sun is radiating to the Earth must increase. Not only that. The Sun’s radiant energy must be continuously increasing because the temperature of the Earth is continuously increasing. An intermittent effect doesn’t cut it.”

        All you are exposing there is your assumptions about how a solar effect on climate should act. But that is not how it acts.

        The Sun acts through a pathway that is known and recognized by the IPCC, known as the top-down pathway that depends exclusively on UV energy delivered at the ozone layer. This pathway recruits the great energy and momentum delivered by planetary waves in the stratosphere, so it turns out that it is not the change in solar energy the only source of energy for the solar effect on climate. This pathway regulates how much infrared energy the planet emits by affecting atmospheric circulation and heat transport. So the temperature does not change because the Sun delivers more energy but because the Earth loses less energy when solar activity is high.

        Everything you say about how the Sun cannot be responsible for the warming becomes irrelevant because it does not act the way you suppose it should act.

      • Where do you get this stuff? Do you make it up as you go along?

        The earth and its atmosphere make up a closed thermodynamic system. The earth “system” absorbs energy radiated by the sun and radiates IR into space. Currently 99%+ of the energy the earth absorbs gets radiated into space as IR.

        Above is the Sun’s radiant profile. You can see that UV makes a small portion of the sun’s radiant energy. The sun’s radiant energy that doesn’t bounce off of clouds and the earth’s surface is absorbed primarily by the oceans.

        Above is a plot of ocean heat content by year. Before 1980 the heat content variation can be called natural variability. After that, the heat content continuously increases and cannot be explained away as natural variability. You won’t ever hear the climate skeptic crowd using ocean data for their arguments. They want to talk about atmospheric temperature, which is a function of conduction and convection from the surface and is a lot easier to explain away as natural variation.

        I don’t know where you got that stuff, but it’s wrong. The atmosphere can’t hold much energy as compared to the oceans. If the atmosphere were absorbing the bulk of the energy the sun radiates, the temperature profile of the troposphere would be inverted. The earth warms the atmosphere — not vice versa.

  36. This is ridiculous. As I have said, what happens in the past has zero impact on what is happening today. Energy is accumulating on the earth and man-made CO2 is the cause.

    For three days last week the earth achieved record high average temperatures. There are severe heat waves occurring around the planet, wildfires in Canada causing air quality problems in cities, severe flooding in CA. The list goes on and on.

    The charlatans are those promoting natural variation or solar activity as the source. All of that is BS. They’re trying to make a buck off this nonsense. Shame on them. They better make it while they can because the money is going to dry up as the public see these theories for what they are — nonsense.

  37. Ireneusz Palmowski

    No one can predict the strength of El Niño. Currently, it only works in the Niño 1.2 region, where the circulation gets blocked.

  38. I love how this article blames the right for bringing ESG to the “culture wars.” However, the root cause of this problem is that ESG was created and shoved down our throats by left-wing Climate Doomers.

    ESG investing has been swept into the nation’s culture wars by GOP officials, and congressional Republicans are planning a series of hearings this month that focus on the investing strategy. Cox’s comments come as a few others, mainly on the left, try to defuse the rhetoric over ESG.

  39. EVs are running out of juice …

    Details: The nationwide supply of EVs in stock has swelled nearly 350% this year, to more than 92,000 units.

    That’s a 92-day supply — roughly three months’ worth of EVs, and nearly twice the industry average.
    For comparison, dealers have a relatively low 54 days’ worth of gasoline-powered vehicles in inventory as they rebound from pandemic-related supply chain interruptions.
    In normal times, there’s usually a 70-day supply.
    Notably, Cox’s inventory data doesn’t include Tesla, which sells direct to consumers.

    • Rob Starkey

      The data didn’t include Tesla, which sells direct to consumers. I wonder if the other brands’ inventories are growing because they are inferior to Tesla.

      • Musk has cut prices a couple of times recently, so maybe Tesla is selling units. The bigger EVs come with a huge price tag, however, so for them, I think they are just too expensive for most people.

        In early July, Seeking Alpha reported Tesla´s delivery numbers for the second quarter:

        Tesla said Sunday it delivered 466,140 electric vehicles in the second quarter of 2023, easily beating estimates of around 445,000 units. The company produced 479,700 vehicles in the same period. The second quarter numbers represent 83% increase in deliveries compared to the 254,695 reported during the same period a year earlier, and 10% growth in deliveries sequentially compared to the 422,875 the company reported in the first quarter of 2023.

        These numbers are astounding, as Tesla surpassed Wall Street’s expectations, indicating strong demand, efficiency, and a possible hike in EPS.

  40. Climate models, anyone?


    In Science and the Modern World, Alfred North Whitehead (1925/1953) critically discusses the historical development of science and its larger impact on our civilization and culture today. The fallacy of misplaced concreteness (FMC) is a notion central to his analysis, both of the process of inquiry and to the general sustainability of quality of life. This paper is part of a panel of four presentations relevant to the theory, practice, and teaching of science. In this paper I identify the FMC as a set of variations on the central theme of misplacing concreteness, by mistaking the abstract for the concrete, and I define the component notions involved. More than half of the paper involves a representative range of concrete examples of the FMC. The realm of the aesthetic, of patient and sensitive attention, the full range of immediate bodily feeling, and the variety of real values revealed therein, turns out to be both the victim of and the remedy for the FMC. As Whitehead says: “Sensitiveness without impulse spells decadence, and impulse without sensitiveness spells brutality” (1925/1953, p. 200).

  41. The debate on climate change is over. The climate scientists have been vindicated. You and the others who cling to the climate denial BS will have to find something else to be wrong about

    In case you didn’t notice, the average planetary temperature set new highs last week. How about the floods in NY and VT or the out-of-control wildfires in Canada? Severe heat waves are occurring across the planet. The Atlantic Ocean is 3 C higher than it was a year ago, which could result in a severe hurricane season.

    I don’t see why the right has any problem with ESG being imposed on corporations. The right was applauding the SC’s Hobby Lobby decision that allowed a corporation to impose its religious beliefs on its employees. What’s good for the goose is good for the gander.

    It looks like the activity on this board is drying up. It’s getting harder and harder to deny reality when it’s in your face every day.

    • JJ, Your comment is a compendium of exaggerations, fear mongering, and misrepresentations.

      There is little evidence that extreme weather is getting any worse. In fact, tornadoes for example are decreasing in North America. Heat waves are getting a little worse as one might expect but deaths from heat waves are way down.

      ESG distorts the marketplace and harms shareholders and the public by encouraging dysfunctional “solutions” like electric vehicles, which cause vast pollution and human suffering to mine the rare materials needed.

      The debate over climate change is not over as many climate scientists admit. The greenhouse effect is settled science but the ultimate warming we will have is very uncertain with there being good evidence that it will be a lot less than the IPCC estimates.

      • dpy6629, I’m curious as to why you claim that the science is not settled, but then state the greenhouse effect is? If you simply mean that CO2 is a greenhouse gas, I agree with you.

        On my github page, where I describe my prediction model, you might find the first three graphs of some interest. The first prediction graph is the sunspot model combined with the CO2 model. The second graph is the CO2 model by itself. The third graph is the sunspot model without CO2. The combined model has the lowest prediction error, but when you compare the predictions, it’s not clear that the CO2 model contribution is at all helpful.

        Ignore the sunspot model predictions prior to 1900. These predictions involve sunspot data prior to 1800, which is also prior to the Dalton Minimum. I suspect that data is not very accurate, and is probably overstated. I’m not the only one with that opinion.

      • “JJ, Your comment is a compendium of exaggerations, fear mongering, and misrepresentations.”

        What’s your basis for that statement? Mine are news reports from around the world.

        “tornadoes for example are decreasing in North America.”

        Irrelevant. Tornadoes don’t form solely because of a lot of energy sloshing around on the planet. There are other factors involved. What all that energy sloshing around means is that when they do form, they will be stronger and last longer.

        “Heat waves are getting a little worse as one might expect but deaths from heat waves are way down.”

        A little worse? Tell that to the people living in TX and AZ. Canada had 100 degree temperatures at the highest latitude ever. The world record temperature was broken 3 times last week and the week had the highest temperature for a week ever.

        It’s not just here, by the way. Severe heat waves are occurring around the world. Then there are floods. Did you see what happened in NY and VT? Then there are the wildfires in Canada. Did you see the pollution in NY? You could actually see it. NY looked like Beijing.

        “ESG distorts the marketplace and harms shareholders and the public by encouraging disfunctional “solutions” like electric vehicles which cause vast pollution and human suffering to mine the rare materials needed.”

        The conservatives were applauding the Supreme Court when it allowed the owner of a private corporation — not religious — to force his religious beliefs on his employees. This is no different. What a group of investors or investment firm decides to invest in for whatever reason is their right whether you like it or not. It’s called karma.

        “The debate over climate change is not over as many climate scientists admit.”

        There aren’t many left who dispute that the debate on climate change is over, and their number is shrinking rapidly. It’s mostly the diehards and the grifters trying to make money off of climate change.

        “the ultimate warming we will have is very uncertain with there being good evidence that it will be a lot less than the IPCC estimates.”

        There is no evidence that the IPCC estimates are wrong. They produce a range of estimates based on how successful we are at ending our addiction to fossil fuels.

    • Joe - the non climate scientist

      JJB’s comment “The Atlantic Ocean is 3 C higher than it was a year ago which could result in a severe hurricane season.”

      JJB – do you really think the Atlantic is 3°C higher?

      JJB’s comment – “In case you didn’t notice, the average planetary temperature set new highs last week.”

      JJB – I presume you noticed that the supposed planetary high temp was up by almost 1-2°F. Do you think that was credible?

    • “The debate on climate change is over.”

      Just because you say so.

      “It’s getting harder and harder to deny reality when it’s in your face every day.”

      The reality of a warming planet says nothing about its cause. And there is really no evidence that the warming is caused by the increase in CO2.

      • Rob Johnson-Taylor

        When has there ever been a debate on “Climate Change”? The climate has always changed, and it is essential that it does. Is not the debate over the contributory causes of climate change, or am I missing something?

      • Leslie, you will be unsurprised to learn the IPCC ASSUMES all GMT increases since ~1900 are anthropogenic. We don’t spend ANY money on learning about natural variation, so we have no way to demonstrate how much of warming – which, you’re right, is happening, as you’d expect during the recovery from a Little Ice Age (which ended in ~1850s) – is actually anthropogenic. The alarm and hysteria you hear is not coming from scientists. It’s coming from politicians “not letting a crisis go to waste”. IMHO

    • JJ

      Given such a long hiatus from here after all your theories were completely discredited, I’m flabbergasted you would show up again. Masochism comes to mind.

      Saying that AGW is proven because the weather has been hot sounds more like a rationale the editors of TEEN MAGAZINE would use. I had hoped while you were away, you would have read some actual science.

      • Mostly a word salad that never makes a point. He said this:

        “Given the recent rapid pace of technological development, I believe it to be highly likely that potent CO2 removal technologies will be developed this century. However, other methods may be more economically effective in limiting an unwanted temperature rise, e.g. manipulating the cloud cover, as Bjørn Lomborg suggests in an interview on Econlib (skip forward to 8:35 and listen for 2 minutes or read in footnote 17)).”

        What he believes is wrong. It is not technically feasible to remove CO2 from the atmosphere while simultaneously continuing to burn fossil fuels.

        CO2 is fairly inert. That’s why it stays in the atmosphere for a long time. Being the product of combustion, it is a low-energy molecule. That means energy must be added to chemically react it with something. The energy will have to come from renewables. Some of that energy could be recovered from the capture compound when the CO2 is extracted from it. What do you do with the extracted CO2? Store it underground? Finally, CO2 in the atmosphere is at low concentration. That means an enormous amount of air must be processed to extract the required quantities of CO2. There are about 500 refineries worldwide that produce fossil fuels. It would take tens of thousands of plants to process the air to remove the CO2 generated by burning those fossil fuels.

        Barring a true scientific miracle, scrubbing the atmosphere of CO2 is not the answer. It might be used at the margins to combat the latent temperature rise.
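The scale argument in the comment above can be illustrated with a back-of-envelope calculation. This is only a rough sketch: the emissions total, the capture fraction, and the air density are all round-number assumptions introduced here for illustration, not figures from the original comment.

```python
# Back-of-envelope: how much air must be processed to capture one year
# of fossil CO2 emissions by scrubbing it from the atmosphere.
# All figures are rough, illustrative assumptions.

CO2_PPM = 420                # assumed atmospheric CO2 mixing ratio (by volume)
M_CO2, M_AIR = 44.0, 29.0    # molar masses, g/mol
ANNUAL_EMISSIONS_KG = 37e12  # ~37 Gt CO2/yr from fossil fuels (approximate)
CAPTURE_FRACTION = 0.5       # assumed fraction of CO2 removed per pass

# Mass fraction of CO2 in air (ppm by volume scaled by molar-mass ratio)
co2_mass_fraction = CO2_PPM * 1e-6 * (M_CO2 / M_AIR)

# Mass of air that must pass through capture units per year
air_mass_kg = ANNUAL_EMISSIONS_KG / (co2_mass_fraction * CAPTURE_FRACTION)

# Convert to volume at a near-surface air density of ~1.2 kg/m^3
air_volume_m3 = air_mass_kg / 1.2

print(f"CO2 mass fraction of air: {co2_mass_fraction:.2e}")
print(f"Air to process: {air_mass_kg:.2e} kg/yr (~{air_volume_m3:.2e} m^3/yr)")
```

On these assumptions, the answer is on the order of 10^17 kg of air per year, which is the quantitative core of the "enormous amount of air" point.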

  42. Ireneusz Palmowski

    Unexpected weakening of the southern polar vortex in the lower stratosphere despite low temperatures in the stratosphere.

  43. Hochelaga (the name I wish to use on this reply)

    I love to read JJB’s comments here. He is the fountain of knowledge. Quick to dismiss anyone who hasn’t fully accepted the doctrine of CO2’s absolute power to bring about climate change. He demonstrates at every intervention that he is always right. Such pomposity is scary and I pity his former students.

  44. Ireneusz Palmowski

    The animation above shows the strength of Earth’s magnetic field and how it changed between 1999 and May 2016.

    Blue depicts where the field is weak and red shows regions where it is strong. As well as recent data from the Swarm constellation, information from the CHAMP and Ørsted satellites were also used to create the map.

    It shows clearly that the field has weakened by about 3.5% at high latitudes over North America, while it has strengthened about 2% over Asia. The region where the field is at its weakest – the South Atlantic Anomaly – has moved steadily westward and weakened further by about 2%.

    In addition, the magnetic north pole is wandering east, towards Asia.

  45. frankclimate

    Thanks for this detailed article about the sensitivity approaches of AR6 (mostly Sherwood et al 2020) and the improvements introduced by Lewis (2022). However, using ECS to estimate the temperature level in 2100 is problematic, because ECS unfolds its impact over time spans of centuries. To calculate the warming to 2100, it’s more appropriate to use the TCR (Transient Climate Response) value, which in Lewis (2022) is about 1.5 K per CO2 doubling (see table 8 in Lewis 2022). This gives a warming under the RCP4.5 scenario of about 2 K vs. preindustrial times.
    best Frank
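The TCR-based estimate in the comment above can be sketched in a few lines. The TCR value is the one the commenter cites from Lewis (2022); the forcing per doubling (3.71 W/m²) is a common approximation, and the 4.5 W/m² end-of-century forcing is the RCP4.5 scenario's nameplate value — both are assumptions supplied here, not numbers from the comment.

```python
# Transient warming estimate: dT = TCR * F / F_2xCO2.
# TCR from Lewis (2022) per the comment; the two forcing values are
# standard approximations assumed here for illustration.

TCR = 1.5        # K per CO2 doubling (Lewis 2022, table 8)
F_2XCO2 = 3.71   # W/m^2 radiative forcing per CO2 doubling (approximation)
F_RCP45 = 4.5    # W/m^2 forcing in 2100 under RCP4.5 (scenario nameplate)

dT = TCR * F_RCP45 / F_2XCO2
print(f"Transient warming vs. preindustrial: {dT:.2f} K")
```

The result is roughly 1.8 K, consistent with the comment's "about 2 K" figure for warming relative to preindustrial times.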

  46. How do I square all this work with the asymptotic nature of CO2? Doesn’t ECS (expressed as “the amount of warming to expect after a doubling of CO2”) reduce rather drastically eventually? As I understand it, it’s a sort of saturation. So the ECS from a second doubling of CO2 (from, say, 600 ppm to 1200 ppm) would be drastically smaller than the one from 300 ppm to 600 ppm. I would really like to understand that.

    • The answer to your question is NO. The asymptotic argument, or the “saturation” argument, is based on a fixed rate of photons that CO2 can absorb. The earth is not a fixed-rate source: as the earth’s temperature rises, it emits more photons that CO2 can absorb.

      Even if the lower troposphere is “saturated” with CO2, the upper atmosphere is not. Increasing the amount of CO2 in the upper atmosphere will absorb energy radiated there. The radiation you see at TOA is from the upper atmosphere and not from the surface of the planet.

      The limit to what CO2 can do is the temperature of the planet Venus. Add the same amount of CO2 to our atmosphere that Venus has and we will be at the same temperature — 460 C.

    • Leslie MacMillan

      A logarithmic response does not behave asymptotically. It increases without bound with slope always > 0. By definition of a logarithmic relationship, the absolute increase in temperature would be the same (not less) for all subsequent doublings, as long as the logarithmic relationship holds. In a relationship showing saturation, the dependent variable would approach some upper bound asymptotically, with slope approaching zero in the limit as the independent variable goes to infinity.

      Logarithmic relationships look superficially like saturation relationships because both are monotonic functions with decreasing positive slope, which, through motivated reasoning, leads skeptics to argue that raising the atmospheric CO2 level above what it happens to be now has no further effect at all. But they are wrong.
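The distinction drawn in the comment above can be checked numerically. Using the standard simplified forcing expression dF = 5.35·ln(C/C0) W/m² (a widely used approximation; the choice of concentrations here is arbitrary for illustration), each doubling of CO2 adds the same forcing increment, whereas a truly saturating curve would add less each time.

```python
import math

# Logarithmic CO2 forcing: dF = 5.35 * ln(C / C0) W/m^2 (standard
# simplified approximation). Each doubling contributes the same
# increment, so the response diminishes per ppm but never saturates.

def forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) relative to a 280 ppm baseline."""
    return 5.35 * math.log(c / c0)

d1 = forcing(560) - forcing(280)    # first doubling:  280 -> 560 ppm
d2 = forcing(1120) - forcing(560)   # second doubling: 560 -> 1120 ppm
print(f"First doubling:  {d1:.2f} W/m^2")
print(f"Second doubling: {d2:.2f} W/m^2")

# Contrast: a saturating function such as f(c) = F_max * c / (c + k)
# would yield a strictly smaller increment for every later doubling.
```

Both doublings give about 3.71 W/m², illustrating the comment's point that a logarithmic response, unlike a saturating one, never flattens to zero.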

  47. Rob Johnson-Taylor

    I have a really stupid question.
    Slightly tangential to this thread.
    I have a Stirling engine, whose simplicity of design enthrals me.
    Are there any natural, nature-made Stirling engines?
    Given the current strong temperature difference between southern Europe and its northern parts, is there a nature-based Stirling engine? I don’t mean turning wheels, but moving tectonic plates or powering some other process.
    Sorry if this question is too stupid for words, only I can’t sleep at the moment for thinking about it.

  48. Braccili’s claim that the “debate on climate change is over” puts him at odds with the IPCC. Their description of climate science is that it is “nonlinear, coupled and chaotic”. Perhaps Braccili has a complete understanding of chaotic systems. If so, he is the first person in the world to do so and will no doubt be awarded a Nobel Prize.

  49. Anthony M. Torti

    The math doesn’t add up. If non-human sources account for 90% of the CO2 then eliminating ALL of the human sources will only delay the inevitable. The level of CO2 will rise at a rate 10% lower than the current rate but it will eventually rise to the “magic catastrophic” level expressed by the fear mongers.

    The bottom line is it really doesn’t matter because humans will adapt. Some regions may become flooded while other regions will be able to support life. We’ve seen many instances that under the ice sheets are indications of human habitation from a previous era.

    The politicians have a saying “never let a good crisis go to waste”. The UN power brokers have no concern over climate change. Their only concern is a new world order that concentrates power to them. This is the largest scam in world history.

    • Anthony, this may reflect the shortcomings of my understanding, but as I understand it ECS is a variable that reduces over time. Each additional CO2 molecule follows the process of “diminishing marginal returns” so that eventually the atmosphere is “saturated” and additional CO2 will have trivial-to-the-point-of-vanishingly-small impacts on the climate. We’re probably already there. I do agree with you that finding adaptations locally makes more sense than trying to limit CO2. Miami probably needs higher sea walls no matter whether “climate change” is an issue or not.

      • That’s not true.

        The lower troposphere is “saturated”, meaning the CO2 there is absorbing all the radiation from the earth in its absorption band. But there are two effects that dispel the notion that CO2 can no longer cause climate change.

        The earth is a variable source of radiation. As the temperature of the planet rises, the earth radiates more energy that CO2 can absorb and reradiate.

        The molar density of the atmosphere drops drastically with altitude. That means that the upper troposphere is not “saturated”. The troposphere is warmed by conduction, convection, and evaporation from the surface. This causes CO2 and H2O to radiate energy, which escapes to space. Adding more CO2 to the atmosphere absorbs this energy and further warms the earth. CO2 is nowhere near being benign.

    • Nonsense!

      All that has to be done is to reduce the total CO2 production below the natural CO2 removal rate, and CO2 will be removed from the atmosphere. Our current problem is that the production of CO2 from burning fossil fuels is overwhelming the natural CO2 removal processes.

      For all those who think we can burn fossil fuels forever, ponder this: Fossil fuels are finite. Within 100 – 200 years we will have exhausted our fossil fuel inventory. That means we will be forced into using renewables. Why? Because they are, for all practical purposes, an infinite supply of energy. The transition is going to happen whether you like it or not.

      • There is a process for allocating scarce resources within our nation. It’s called a price system. That this system is being overridden because of an “emergency” of dubious origin is the heart of our problem. If we have hundreds of years of fossil fuels in the ground, how convenient is that? We can go to renewables when all the renewable problems are solved (persistence chief among them). Nuclear fusion may be viable within 50 years. Forcing us into “wind and solar” without viable battery backup (for, you know, “nighttime”) at an accelerated rate that is unnecessary is not good public policy IMHO.

      • 1. Renewables are currently cheaper than coal and competitive with natural gas, and will soon be cheaper than natural gas. That’s before you consider what fossil fuels are doing to the environment or the cost of repairing the damage they’re doing to the planet.

        2. Fusion has always been 50 years off. The problem is that the sun initiates fusion by gravity. Here on earth, we need to rely on artificially induced kinetic energy of atoms. I suspect we will need to figure out how to control gravity before fusion becomes viable.

        3. Oil is a major source of vital chemicals. We need to preserve oil for this purpose. Burning it to produce energy is ridiculous. Burning a fuel to generate energy is wildly inefficient. Most of the energy winds up heating the atmosphere.

        4. There are other options than storage batteries to solve the intermittency problem. I talked about them before.

      • Joe - the non climate scientist

        JJB – How can someone have the intellectual capacity to understand the complexities of climate science when they lack the intellectual capacity to grasp the obvious limitations of renewables?

        How can someone fail to see the academic-level fraud in Jacobson’s 30-second renewable electric grid test? Are basic critical thinking and basic due diligence forbidden when advocating AGW science?

      • In case you haven’t noticed, the replacement of fossil fuels with renewables is under way. In TX renewables are cheaper than fossil fuels. The conservative politicians in TX are trying to figure out how to incentivize the use of fossil fuels and disincentivize the use of renewables by regulation. I thought conservatives believed in “free markets” and deregulation. I guess their “free market” beliefs are situational.

        You cling to the hope that the “problems” with renewables are insurmountable. They are not. Your biggest complaint is intermittency. That can be solved by a wide area smart grid, storage batteries, or NH3 being used as an energy storage medium.

        We should quit using fossil fuels and preserve oil for the production of important petrochemicals. The sooner we make the transition to renewables the better.

        A smart guy like you should know better.

      • JJB – you will have to stop India and China from burning coal. What we do in the US is irrelevant.

      • Mr/Dr/Prof Braccili

        I really am at a loss to understand why you contend that ‘increasing’ carbon dioxide ‘is a problem’.

        It’s certainly not a problem to plants – please name for me the plant species which can only survive at 280ppm carbon dioxide and not thrive at 600ppm. Please also explain why 3000ppm carbon dioxide in the root zone of plants is ‘not a problem’, but 10% of that in the air is. It’s not logical.

        Please explain why warming for 200 years out of a little ice age is a bad thing. I suggest you try growing vegetables on Svalbard right now and see how likely you might be to starve.
        We don’t want summers like 1816 any time soon, thank you very much, not where humans have successfully cultivated crops for generations.

        Please explain why we know anything about long-term drivers of climate (centennial and millennial drivers) when we’ve only been studying meteorology seriously for under 3 generations. We know what’s happened in real detail since about 1900: big deal! We know very little about what triggered multicentennial and multimillennial shifts thousands/millions of years ago, other than that it wasn’t triggered by the then minuscule populations of humans.

        Please explain why deforestation is not the reason for any ‘climate change’ that has occurred since 1900, rather than increases in the concentration of a trace gas. Do you know the basics of temperature dynamics amongst trees (you know, mitigation of maximum heat and minimum cold in the 24hr cycle)? Reversing desertification using permaculture is a far more practical solution to ‘earth’s woes’ than nonsensical ‘carbon capture technology’, which does nothing to affect the amount of desert on earth. So far, annual rainfalls of 200mm have proven sufficient to ‘re-green the desert’ in the Dead Sea Valley. So there’s plenty of places to target suitable projects on earth, isn’t there?

        Have you thought about regenerating aquifers by simply stopping millions of folks in California from putting their water down drains, rather than letting it soak into the ground where it fell/was purified after release from white goods within the home? We hear all this stuff about ‘droughts’ regularly in California, yet every time you get masses of rain you just let it go straight back into the ocean! Learn from Indian villages….

        Please explain how humanoids survived the past three multicentennial ‘warm periods’ in the current interglacial. They were hotter than today, despite the latest lying by non-scientists in the media.

        I don’t buy the ‘everything is the fault of humans and carbon dioxide’ mantra.

        I do buy that this is a co-ordinated, globalist, elitist game to steal trillions from the less wealthy for the benefit of those that will never, ever need any more wealth again.

      • Where do you get this stuff?

        As I’ve said about a million times, the past is IRRELEVANT. What’s going on now is not related to anything that happened in the past unless you can show that current conditions were the same as some past period. Trust me, you can’t. Just pointing to a chart that was assembled from, at best, questionable data isn’t going to do it. Do you know the impact of solar radiation in the past, or volcanic activity or how dense the atmosphere was? The answer is NO. All of those will have an impact on planetary temperature.

        CO2 has some good things it does, and it has some bad things. One of the bad things it does is to cause the earth to retain energy — driving its temperature higher. Currently, that outweighs the good things it does.

        Flash from Montana! The state lost the case to the kids who sued to protect the environment for future generations. This will be the first of many losses for the fossil fuel industry. Hopefully, they get sued out of existence.

        You can read about it here:

        Oil majors are getting desperate:

        Stonewalling and promoting junk science isn’t as effective as it used to be. It’s hard to claim the sky isn’t falling when you can look out the window and see the sky is falling.

      • Can you comment on the “Saturation” argument, that the impact of CO2 is logarithmic and thus an example of “diminshing marginal returns” so that additional CO2 at the levels humans output them will create a trivial heating effect. Are you saying that’s not the case?

      • I’m definitely saying that is not the case.

        The “saturation” argument is based on the fact that if you shine a fixed source of light that CO2 can absorb through a container and then start adding CO2 to that container, the CO2 will absorb that light. If CO2 is added in equal increments, each subsequent increment will absorb less light than the preceding increment, until adding more CO2 has no impact on the amount of light absorbed.
        That is a scientific fact. What happens between the atmosphere and the earth differs in two ways that make this fact irrelevant.

        It is true that the lower troposphere is supersaturated with CO2, but the earth is not a fixed source of radiation that CO2 can absorb: as the earth’s temperature rises, it emits more radiation that CO2 can absorb. Even if we maintain the current quantity of CO2 in the atmosphere, the earth’s temperature will continue to rise.

        The molar density of CO2 in the atmosphere decreases rapidly with height. Energy from the surface of the earth is primarily transferred to the atmosphere by conduction, convection, and evaporation. That energy causes H2O and CO2 molecules in the upper troposphere to emit energy, which can be absorbed by adding more CO2 to the atmosphere. That means CO2 remains a potent greenhouse gas even though CO2 in the lower troposphere is absorbing all the energy the earth emits.

        The “saturation” theory is climate denial pseudo-science. It takes a scientific fact and applies it to a different situation where it is not applicable.

  50. Robert D Clark

    There is no Global Warming.
    Nature just finished setting up the new Ice Age.
    The extra heat is now stored in the atmosphere as water vapor and the lack of heat is stored in the Ice Blocks sitting on solid earth at the poles.
    We just completed lowering the oceans to where the oceans are reflecting an equal amount of heat to the black sky as the earth retains from the sun.
    The sun is now in a heating stage.
    Nature will break off the Ice blocks for the next 80,000 years.

  51. Dennis Conlon

    Just one realization: I was wondering how water vapor could be a greenhouse gas given its lack of carbon, then I remembered how water spoiled my IR scans when I tried to measure organics in water. That darn OH bond. Real science is fantastic. Thanks for this article and all the comments.

  52. Wei Zhang (MN)

    I like to look at climate model forecasts and the subsequent observations to compute bias. Then adjust future model forecasts for the model bias. This “Climate MOS” generally yields a forecast warming of roughly 1.5 deg/century. That has been roughly the observed rate for a while. That leaves about 1.1 deg of warming for the rest of this century.
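The "Climate MOS" idea described above — estimating a model's historical bias against observations and subtracting it from future forecasts — can be sketched as follows. The numbers are made up for illustration; only the method (mean-bias correction over a hindcast period) reflects the comment.

```python
# Minimal sketch of MOS-style bias correction, assuming we have paired
# hindcast model values and observations (synthetic numbers here).

hindcast_model = [1.9, 2.1, 2.3, 2.5]  # model warming rates, deg C/century
hindcast_obs   = [1.4, 1.6, 1.7, 1.9]  # observed warming rates, same units

# Mean bias of the model over the hindcast period
bias = sum(m - o for m, o in zip(hindcast_model, hindcast_obs)) / len(hindcast_obs)

# Apply the correction to a raw future forecast
raw_forecast = 2.0  # model's raw projection, deg C/century
corrected = raw_forecast - bias
print(f"Mean bias: {bias:.2f} deg C/century")
print(f"Bias-corrected forecast: {corrected:.2f} deg C/century")
```

Real MOS schemes typically fit a regression rather than subtracting a constant bias, but the constant-offset version captures the gist of the commenter's approach.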

    • Wei, the question we then have to ask is: what is the impact of another 1.1 degree C for the rest of the century? Is it worth the whole Net Zero scenario? $50T for the US and $150T for the world? PLUS the imposition of industrial policy “to save the planet”? I don’t think so.

      • No; the question is whether the actions in question will have any positive impact on the weather in the last part of the 21st century. I suggest they will not, as the change in the CO2 growth curve will be minimal.

      • I’d say we agree there, based on the physical process of saturation of CO2 in the atmosphere.