by Nic Lewis
In a recently published paper (REA16),[i] Mark Richardson et al. claim that recent observation-based energy budget estimates of the Earth’s transient climate response (TCR) are biased substantially low, with the true value some 24% higher.
JC note: The Richardson et al paper was discussed previously in this post: Towards reconciling climate sensitivity estimates from climate models and observations
This claim is based purely on simulations by CMIP5 climate models. As I shall show, observational evidence points to any bias actually being small. Moreover, the related claims made by Kyle Armour, in an accompanying “news & views” opinion piece,[ii] fall apart upon examination.
The main claim in REA16 is that, in models, surface air-temperature warming over 1861-2009 is 24% greater than would be recorded by HadCRUT4 because it preferentially samples slower-warming regions and water warms less than air. About 15 percentage points of this excess result from masking to HadCRUT4v4 geographical coverage. The remaining 9 percentage points are due to HadCRUT4 blending air and sea surface temperature (SST) data, and arise partly from water warming less than air over the open ocean and partly from changes in sea ice redistributing air and water measurements.
REA16 infer an observation-based best estimate for TCR of 1.66°C, 24% higher than the value of 1.34°C if based on HadCRUT4v4. Since the scaling factor used is based purely on simulations by CMIP5 models, rather than on observations, the estimate is only valid if those simulations realistically reproduce the spatiotemporal pattern of actual warming for both SST and near-surface air temperature (tas), and changes in sea-ice cover. It is clear that they fail to do so. For instance, the models simulate fast warming, and retreating sea-ice, in the sparsely observed southern high latitudes. The available evidence indicates that, on the contrary, warming in this region has been slower than average, pointing to the bias due to sparse observations over it being in the opposite direction to that estimated from model simulations. Nor is there good observational evidence that air over the open ocean warms faster than SST. Therefore, the REA16 model-based 24% bias figure cannot be regarded as realistic for observation-based TCR estimates.
It should also be noted that the 1.66°C TCR estimate ignores the fact that the method used overestimates canonical CMIP5 model TCRs (those per AR5 WG1 Table 9.5) by ~5% (Supplementary Information, page 4). Including this scaling factor along with the temperature measurement scaling factor reduces the estimate to 1.57°C (Supplementary Table 11).
Relevant details of and peculiarities in REA16
REA16 focus on energy-budget TCR estimates using the ratio of the changes in global temperature and in forcing, measuring both changes as the difference between the mean over an early baseline period and the mean over a recent final period. They refer to this variously as the difference method and as the Otto et al.[iii] method; it was introduced over a decade earlier by Gregory et al.[iv] and copied by both Otto et al. (2013) and Lewis and Curry (2015).[v] The primary baseline and final periods used by REA16 are 1861–80 and 2000–09, almost matching those used in the best-constrained Otto et al. estimate. Lewis and Curry, taking longer 1859–82 base and 1995–2011 final periods, obtained the same 1.33°C best estimate for TCR as Otto et al., using the same HadCRUT4v2 global temperature dataset.
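The difference method described above reduces to a single ratio. A minimal sketch in Python, with illustrative round numbers (the assumed 3.71 W/m² forcing for doubled CO2 and the ΔT, ΔF inputs are placeholders, not the papers' exact figures):

```python
# Energy-budget "difference method" TCR estimate (Gregory et al. 2002;
# used by Otto et al. 2013 and Lewis & Curry 2015).
# All numbers below are illustrative round figures, not the papers' exact inputs.

F_2XCO2 = 3.71  # assumed forcing for a doubling of CO2, W/m^2

def tcr_difference_method(delta_T, delta_F):
    """TCR = F_2xCO2 * (change in global temperature) / (change in forcing),
    with both changes taken between an early baseline period and a recent
    final period (e.g. 1861-80 vs 2000-09)."""
    return F_2XCO2 * delta_T / delta_F

# Illustrative values: ~0.75 C warming and ~2.1 W/m^2 forcing change
# give a TCR in the ballpark of the HadCRUT4-based estimates discussed here.
print(tcr_difference_method(0.75, 2.1))
```

With these placeholder inputs the result is roughly 1.3°C, close to the Otto et al. and Lewis and Curry best estimates.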
REA16 estimate the TCR of each CMIP5 model by comparing its global warming with forcing estimated in the same way as in Otto et al., using model-specific data where available and multimodel mean forcing otherwise. The method is somewhat circular, since forcing for each model is calculated each year as the product of its estimated climate feedback parameter and its simulated global warming, adjusted by the change in its radiative imbalance (heat uptake). Each model’s climate feedback parameter is derived by regressing the model’s radiative imbalance response against its global temperature response over the 150 years following an abrupt quadrupling of CO2 concentration.
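The two-step procedure just described (a Gregory-style regression to obtain each model's feedback parameter, then forcing diagnosed from the model's own warming and heat uptake) can be sketched as follows. All data here are synthetic; real use would read CMIP5 abrupt-4xCO2 and historical output. The circularity noted in the text is visible in step 2: the diagnosed forcing is built directly from the model's simulated warming.

```python
import numpy as np

# Step 1: climate feedback parameter lambda from regressing top-of-atmosphere
# imbalance N against warming T over 150 yr of an abrupt-4xCO2 run:
# N = F_4x - lambda*T, so the regression slope is -lambda.
rng = np.random.default_rng(0)
F4x, lam = 7.4, 1.2                               # illustrative: W/m^2, W/m^2/K
T = 6.0 * (1 - np.exp(-np.arange(150) / 30.0))    # idealized warming curve, K
N = F4x - lam * T + rng.normal(0, 0.2, 150)       # imbalance with noise

slope, intercept = np.polyfit(T, N, 1)            # slope ~ -lambda
lam_est = -slope

# Step 2: diagnose historical forcing each year as F(t) = lambda*T(t) + N(t),
# i.e. from the model's own warming plus its radiative imbalance.
T_hist = np.linspace(0.0, 0.8, 60)                # synthetic warming, K
N_hist = np.linspace(0.0, 0.6, 60)                # synthetic imbalance, W/m^2
F_hist = lam_est * T_hist + N_hist                # diagnosed forcing, W/m^2
print(round(lam_est, 2), round(F_hist[-1], 2))
```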
In model historical simulations the weighted average period from when each forcing increment arose to 2000–09 is only ~30 years, not 150 years. Accordingly, the forcing estimation method relies upon a model exhibiting a fairly linear climate response, and hence having a climate feedback parameter (and an effective climate sensitivity) that does not vary with time (in addition to having a temperature response that is proportional to forcing). In this context, the statement in REA16 that they do not calculate equilibrium climate sensitivity (ECS) “to avoid the assumption of linear climate response” is peculiar: they have already made this assumption in deriving model forcings.
Although REA16 is based on simulations by all CMIP5 models for which relevant data are available, the weighting given to each model in determining the median estimates that are given varies over a range of ten to one. That is because, unlike for most IPCC model-based estimates, each available model-simulation – rather than each model – is given an equal weighting. Whilst only one simulation is included for most models, almost 60% of the simulations that determine the median estimates come from the 25% of models with four or more simulation runs.
REA16 do not appear to state the estimated median TCR applicable to the 84 historical-RCP8.5 CMIP5 simulations used. Dividing the primary periods tas-only difference method figure of 1.98°C per Supplementary Table 6 by 1.05 to allow for the stated overestimation by the difference method implies a median estimate for true TCR of 1.89°C. Back-calculating TCR from the difference method bias values in Supplementary Table 5 instead gives an estimate of 1.90°C. The figures are rather higher than the median TCR of 1.80°C that I calculate to apply to the subset of 68 simulations by models for which the canonical TCR is known.
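As a check on the back-calculation described above (simple arithmetic, using REA16's stated figures):

```python
# Reproducing the arithmetic in the paragraph above, using REA16's stated figures.
tas_difference_tcr = 1.98              # C, tas-only difference-method estimate (Suppl. Table 6)
difference_method_overestimate = 1.05  # ~5% overestimation of canonical model TCRs

true_tcr_implied = tas_difference_tcr / difference_method_overestimate
print(round(true_tcr_implied, 2))      # 1.89 C, as stated in the text
```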
There seem to be inconsistencies in REA16 between different estimates of the bias resulting from use of the difference method and blended air and SST temperature data. The top right-hand panel of Supplementary Figure 4 shows that the median TCR estimate when doing so, with 2000–09 as the final decade, is ~2.00°C. This is a 5% overestimate of the apparent actual value of ~1.90°C, rather than (as stated in Supplementary Table 5) an underestimate of 8%. Moreover, contrary to Supplementary Figure 4, Supplementary Table 6 gives a median TCR estimate in this case of 1.81°C, implying an underestimate of 4%, not 8%. Something appears to be wrong here.
REA16 also claim that energy budget TCR estimates are sensitive to analysis period(s), particularly when using a trend method. However, Supplementary Figure 4 shows that the chosen difference method provides stable estimation of model TCRs provided that the final decade has, like the 1861–80 base period, low volcanic forcing; that is, for final decades ending from the late 2000s onwards. As discussed in some detail in LC15, sensitivity estimation using an energy budget difference method is sensitive to variations between the base and final periods in volcanic forcing, due to its very low apparent efficacy, so periods with matching volcanism should be used. The sensitivity, shown in Supplementary Table 6, of TCR estimation using the difference method to choice of base period when using a 2000–09 final period is explicable primarily by poor matching of volcanic forcing when base periods other than 1861–80 are used. Good matching of the mean state of the Atlantic Multidecadal Oscillation (AMO) between the base and final period is also necessary for reliable observational estimation of TCR.
The effect of blending air and SST data
I question whether using SST as a proxy for tas over the open ocean has caused any downward bias in estimation of TCR in the real climate system, or even (to any significant extent) in CMIP5 models.
The paper REA16 primarily cite to support faster warming of tas over open water than SST,[vi] which is also model-based, attributes this effect to the thermal inertia of the ocean causing a lag in ocean warming. This argument appears to be unsound. Another paper,[vii] which they also cite, instead derives an equilibrium air – sea surface warming differential from a theoretical model based on an assumed relative humidity height profile, with thermal inertia playing no role. This is a better argument. However, it depends on the assumed relative humidity profile being realistic, which it may not be. The first paper cited notes (caveating that observational uncertainties are considerable) that models do not match observed changes in subtropical relative humidity or in global precipitation.
For CMIP5 models, REA16 states that the tas vs SST warming differential is about 9% on the RCP8.5 scenario and is broadly consistent between models historically and over the 21st century. However, the differential I calculate is far smaller than that. I compared the increases in tas and ‘ts‘ between the means for the first two decades of the RCP8.5 simulation (2006–25) and the last two decades of the 21st century, using ensemble-mean data for each of the 36 CMIP5 models for which data were available. CMIP5 variable ‘ts‘ is surface temperature, stated to be SST for the open ocean and skin temperature elsewhere. The excess of the global mean increase in tas over that in ts, averaged across all models, was only 2%. Whilst ts is not quite the same as tas over land and sea ice, there is little indication from a latitudinal analysis that the comparison is biased by any differences in their behaviour over land and sea ice. Consistent with this, Figures 2 and S2 of Cowtan et al. 2015[viii] (which use respectively tas and ts over land and sea ice) show very similar changes over time (save in the case of one model). Accordingly, I conclude that the stated 9% differential greatly overstates the mean difference in model warming between tas and blended air-sea temperatures. To a large extent that is because the 9% figure also includes an effect, when anomaly temperatures are used, from changes in sea ice extent. However, Figure 2 of Cowtan et al. 2015 shows, based on essentially the same set of CMIP5 RCP8.5 simulations as REA16 and excluding sea-ice related effects, a mean differential of ~5% (range 1–7%), over double the 2% I estimate.
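The model comparison described above amounts to computing, per model, the percentage excess of the global-mean tas increase over the ts increase. A sketch with hypothetical per-model numbers (the real calculation used ensemble-mean RCP8.5 output from 36 CMIP5 models):

```python
import numpy as np

# Percentage excess of each model's global-mean tas increase over its ts
# increase, between 2006-25 and 2081-2100 means. The arrays below are
# hypothetical placeholders, for illustration only.

def warming_excess_pct(d_tas, d_ts):
    """Percentage by which the tas increase exceeds the ts increase."""
    return 100.0 * (np.asarray(d_tas) / np.asarray(d_ts) - 1.0)

d_tas = np.array([3.10, 3.45, 2.80, 3.60])  # hypothetical warming in tas, K
d_ts  = np.array([3.05, 3.38, 2.75, 3.52])  # hypothetical warming in ts, K

print(round(warming_excess_pct(d_tas, d_ts).mean(), 1))
```

With these placeholder values the mean excess comes out near the ~2% figure quoted in the text.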
So, models exhibit a range of behaviours. What do observations show? Unfortunately, there is limited evidence as to whether and to what extent differential air-sea surface warming occurs in the real climate system. However, in the deep tropics, where the theoretical effects on the surface energy budget of temperature-driven changes in evaporation and water vapour are particularly strong, there is a near-quarter-century record of both SST and tas from the Tropical Atmosphere Ocean array of fixed buoys in the Pacific Ocean. With averages over the full array extent based on a minimum of 40% valid data points, tas and SST data are available for 1993–2015. The trend increase in SST over that period is 0.078°C/decade, considerably higher than the 0.047°C/decade for tas, not lower. If the required minimum is reduced to 20%, trends can be calculated over 1992–2015, for which they are 0.029°C/decade for SST, and 1.5% higher at 0.030°C/decade for tas. This evidence, although limited both spatially and temporally, does not suggest that tas increases faster than SST.
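The buoy-array trends quoted above are ordinary least-squares trends expressed per decade. A minimal sketch, using a synthetic series in place of the TAO data:

```python
import numpy as np

# How a trend in C/decade is computed from an annual series: ordinary
# least-squares slope against time in years, multiplied by 10.
# The series below is synthetic; the text's figures come from the TAO
# buoy-array SST and tas records over 1993-2015.

def trend_per_decade(years, temps):
    """OLS trend of an annual temperature series, in degrees per decade."""
    slope = np.polyfit(years, temps, 1)[0]  # degrees per year
    return 10.0 * slope

years = np.arange(1993, 2016)
temps = 0.0078 * (years - 1993)   # exact ramp at 0.078 C/decade, for illustration
print(round(trend_per_decade(years, temps), 3))   # 0.078
```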
The effect of sea ice changes
The separation in REA16 of the effect of masking from that of sea ice changes on blending air and water temperature changes is somewhat artificial, since HadCRUT4 has limited coverage in areas where sea ice occurs. However, I will follow the REA16 approach. Their model-based estimate of the effect of sea ice changes appears to be ~4%, the difference between the 9 percentage points bias due to blending and the 5 percentage points (per Cowtan et al. 2015) due purely to the use of SST rather than tas for the open ocean. Changes in sea ice make a difference only when temperatures are measured as anomalies relative to a reference period; however, I can find no mention in REA16 of what reference period is used.
CMIP5 models have generally simulated decreases in sea ice extent since 1900, accelerating over recent decades, around both poles (AR5 WG1 Figure 9.42). In reality, Antarctic sea ice extent has increased, not decreased, over the satellite era. Its behaviour prior to 1979 is unknown. On the other hand, since ~2005 Arctic sea ice has declined more rapidly than the mean CMIP5 projections. Differences in air temperatures above affected sea ice in the two regions, and the use of widely varying model weightings in REA16, complicate the picture. It is difficult to tell to what extent REA16’s implicit 4 percentage point estimate is biased. Nevertheless, based on sea ice data from 1979 on and unrealistically high long-term warming by CMIP5 models in high southern latitudes (as discussed below), it seems to me likely to be an overestimate for changes between the baseline 1861–80 and final 2000–09 periods used in REA16.
The effect of masking to HadCRUT4 coverage
I turn now to the claims about incomplete, and changing, data coverage biasing down HadCRUT4 warming by 15 percentage points. The reduction in global warming from masking to HadCRUT4 coverage is based on fast CMIP5 model historical period warming in southern high latitudes as well as northern; see REA16 Supplementary Fig. 6, LH panel and Supplementary Table 8. But this is the opposite of what has happened; high southern latitudes have warmed more slowly than average, over the period for which data are available.
Based on HadCRUT4 data with a minimum of 20% of grid cells having data, warming over 60S–90S averaged 0.05°C/decade from 1934 to 2015. The trend is similar using a 10% or 25% minimum; higher minima result in no pre-WW2 data. This trend is much lower than the 0.08°C/decade global mean trend over the period. For the larger 50S–90S region a trend over 1880–2015 can be calculated, at 0.03°C/decade, if a minimum of 15% of valid data points is accepted. Again, this is much lower than the global mean trend of 0.065°C/decade over the same period. An infilled spatial plot of warming since 1960 per BEST (http://berkeleyearth.org/wp-content/uploads/2015/03/land-and-ocean-trend-comparison-map-large.png) likewise shows slower-than-average warming in southern high latitudes. And the UAH (v6.0beta5) and RSS (v03_3) lower-troposphere datasets show very low warming south of 60S over 1979–2015: respectively 0.01 and –0.02°C/decade.
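The coverage-threshold rule used in these calculations (accept a zone's annual mean only if enough of its grid cells have data that year) can be sketched as follows, with synthetic stand-ins for the HadCRUT4 gridded anomalies:

```python
import numpy as np

# Sketch of the coverage-threshold rule: a zone's annual mean is accepted
# only if at least a given fraction of its grid cells have data that year;
# the trend is then fitted over the accepted years only.
# All data below are synthetic placeholders, not HadCRUT4 values.

def masked_zonal_trend(years, anomalies, valid_frac, min_frac=0.20):
    """Trend (deg/decade) using only years meeting the coverage threshold."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies, dtype=float)
    ok = np.asarray(valid_frac) >= min_frac
    slope = np.polyfit(years[ok], anomalies[ok], 1)[0]
    return 10.0 * slope

years = np.arange(1934, 2016)
anoms = 0.005 * (years - 1934)             # exact ramp at 0.05 deg/decade
frac = np.where(years < 1950, 0.25, 0.60)  # sparser early coverage
print(round(masked_zonal_trend(years, anoms, frac), 3))
```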
It follows that the real effect of masking to HadCRUT4 coverage over the historical period is, in the southern extra-tropics, almost certainly the opposite of that simulated by CMIP5 models. Therefore, in the real world the global effect of masking is likely to be far smaller than the ~15% bias claimed by REA16.
In an article earlier this year updating the Lewis and Curry results,[ix] I addressed the key claims about the effects of masking to HadCRUT4 coverage made in Cowtan et al. 2015 and repeated in REA16, writing:
“It has been claimed (apparently based on HadCRUT4v1) that incomplete coverage of high-latitude zones in the HadCRUT4 dataset biases down its estimate of recent rates of increase in GMST [Cowtan and Way 2014].[x] Representation in the Arctic has improved in subsequent versions of HadCRUT4. Even for HadCRUT4v2, used in [Lewis and Curry], the increase in GMST over the period concerned actually exceeds the area-weighted average of increases for ten separate latitude zones, so underweighting of high-latitude zones does not seem to cause a downwards bias. The issue appears to relate more to interpolation over sea ice than to coverage over land and open ocean in high latitudes.
The possibility of coverage bias in HadCRUT4 has since been independently examined by ECMWF using their well-regarded ERA-Interim reanalysis dataset. They found no reduction in that dataset’s 1979-2014 trend in 2 m near-surface air temperature when the globally-complete coverage was reduced to match that of HadCRUT4v4.[xi] Since the ERA-interim reanalysis combines observations from multiple sources and of multiple atmospheric variables, based on a model that is well-proven for weather forecasting, it should in principle provide a more reliable infilling of areas where surface data [are] very sparse, such as high-latitude zones, than mechanistic methods such as kriging. Moreover, during the early decades of the HadCRUT4 record (which includes the 1859-1882 base period) data [were] sparse over much of the globe, and global infilling may introduce significant errors.”
Thus, the claim by Cowtan and Way (2014) that the ERA-interim analysis shows a rapidly increasing cold bias in HadCRUT4 after 1998 does not apply to HadCRUT4v4 over the longer post 1978 period. Focussing first on this period, the performance of the ERA-Interim and six other reanalyses in the Arctic was examined by Lindsay et al.[xii] Although the accuracy of reanalyses in the fast warming but sparsely observed Arctic region has been questioned, the authors found that ERA-interim had a very high correlation with monthly temperature anomalies at 449 Arctic land stations. They reckoned ERA-interim to be the most accurate reanalysis for surface air temperature both in absolute terms and as to (post 1979) trend.
Lindsay et al. found GISTEMP to have a higher post-1978 trend in the Arctic than ERA-interim, but GISTEMP uses a crude interpolation and extrapolation based infilling method. Moreover, the ERA-interim version used by ECMWF to investigate possible coverage bias differs from the main dataset. It incorporates a homogeneity adjustment to its post 2001 SST data that significantly increases its temperature trend over that of the main ERA-interim reanalysis. Taking account of that might well eliminate the Arctic trend shortfall compared with GISTEMP. Certainly, over 1979-2015 both the adjusted ERA-interim and HadCRUT4v4 datasets showed a slightly higher trend in global temperature (of respectively 0.166 and 0.165 °C/decade) than did GISTEMP (0.162°C/decade).
Another recent study, Dodd et al.,[xiii] stated that “ERA-Interim has been found to be consistent with independent observations of Arctic [surface air temperatures] and provides realistic estimates of Arctic temperatures and temperature trends that outperform, or are comparable to, other currently available reanalyses for all areas of the Arctic so far investigated.” In her PhD thesis, Dodd also noted that “The issues arising from using drifting platforms in this study highlight the difficulty of investigating [surface air temperatures] over Arctic sea ice.” All this suggests that mechanistic infilling methods are unlikely to outperform the ERA-interim reanalysis in the Arctic, or indeed the Antarctic.
Prior to 1979, there is very little evidence as to the actual effects of incomplete observational coverage, or of blending air and SST measurement, on estimated trends in global temperature. However, there are two well known long-term surface temperature datasets that are based (on a decadal timescale upwards) on air temperature over the ocean as well as land, and which moreover infill to obtain complete or near complete global coverage: NOAAv4.01 and GISTEMP. Cowtan et al. (2015) accept that the new NOAA data set “incorporates adjustments to SSTs to match night-time marine air temperatures and so may be more comparable to model air temperatures”. GISTEMP uses the NOAAv4.01 SST data set (ERSST4). Both NOAAv4.01 and GISTEMP show almost identical changes in mean GMST to that per HadCRUT4v4 from 1880–1899, the first two decades they cover, to 1995–2015, the final period used in the update of Lewis and Curry. This suggests that any downwards bias in TCR estimation arising from use of HadCRUT4v4 is likely to be very small. Moreover, whilst some downwards bias in HadCRUT4v4 warming may exist, there are also possible sources of upwards bias, particularly over land, such as the effects of urbanisation and of destabilisation by greenhouse gases of the night-time boundary layer.
A way to resolve some of the uncertainties arising from poor early observational coverage
It is doubtful that any method of global infilling of temperatures based on the limited observational coverage available in the second half of the 19th century or (to a decreasing extent) during the first half of the 20th century is very reliable.
Fortunately, there is no need to use the full historical period when estimating TCR. Uncertainty regarding ocean heat uptake in the middle of the historical period, although a problem for ECS estimation, is not relevant to TCR estimation. Lewis and Curry gave an estimate of TCR based on changes from 1930–50 to 1995–2011, periods that were well matched for mean volcanic activity and AMO state, and which delineate a period over which forcing approximated a 70-year ramp. That TCR estimate was 1.33°C, the same as the primary TCR estimate using 1859–82 as the base period. Updating the final period to 1995–2015 and using HadCRUT4v4 left the estimate using the 1930–50 base period unchanged at 1.33°C. The infilling of HadCRUT4 by Cowtan and Way is prone to lesser error when using a 1930–50 base period rather than 1859–82 (or 1861–80 as in REA16), since observational coverage was less sparse during 1930–50. Accordingly, estimating TCR using an infilled temperature dataset makes more sense when the later base period is used.
So does use of the infilled Cowtan and Way dataset increase the 1930–50 to 1995–2015 TCR estimate by anything like 15%, the coverage bias for CMIP5 models reported in REA16 for the full historical period? No. The bias is an insignificant 3%, with TCR estimated at 1.37°C. Small additional biases, discussed above, from changes in sea ice and differences in warming rates of SST and air just above the open ocean (which it appears the Cowtan and Way dataset does not adjust for) might push up the bias marginally. However, ~80% of the total warming involved occurred after 1979, and as noted earlier since 1979 the trend in HadCRUT4v4 matches that in the (adjusted) ERA-interim dataset, which estimates purely surface air temperature, not a blend with SST, and has complete coverage. That suggests the bias from estimating TCR from 1930–50 to 1995–2015 using HadCRUT4v4 data is very minor, and that observation based estimates of TCR of ~1.33°C need to be revised up by, at most, a small fraction of the 24% claimed in REA16.
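The 3% bias figure is a simple ratio of the two TCR estimates:

```python
# Reproducing the bias calculation above, using the two TCR estimates stated.
tcr_hadcrut = 1.33   # C, 1930-50 to 1995-2015, HadCRUT4v4
tcr_infilled = 1.37  # C, same periods, Cowtan & Way infilled dataset

bias_pct = 100.0 * (tcr_infilled / tcr_hadcrut - 1.0)
print(round(bias_pct, 1))   # ~3%, far below the 15% coverage bias REA16 find in models
```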
Claims by Kyle Armour
In an opinion piece related to REA16 in the same issue of Nature Climate Change, “Climate sensitivity on the rise”, Kyle Armour made three claims:
- That, as a result of REA16’s findings, observation-based estimates of climate sensitivity and TCR must be revised upwards by 24%.
- That the findings in Marvel et al (2015)[xiv] about various other types of forcing having differing effects on global temperature from CO2 (different efficacies) call for multiplying observational estimates of climate sensitivity and TCR by a further factor of 1.30.
- That a robust behaviour in models of apparent (effective) climate sensitivity being lower in the early years after a forcing is imposed than subsequently, rather than remaining constant, requires multiplying estimates of climate sensitivity by a further factor of ~1.25 in order to convert what they actually estimate (effective climate sensitivity) to ECS.
I will show that each of these claims is very wrong. Taking them in turn:
- REA16’s findings are purely model-based and do not reflect behaviour in the real climate system. There is little evidence for any major bias when TCR is estimated using observed changes from early in the historical period to the recent past, but limited observational coverage in the early part makes it difficult to quantify bias. However, TCR can also validly be estimated from observed warming since 1930–1950, most of which occurred during the well-observed post-1978 satellite era. Doing so produces an identical TCR estimate to that using the longer period, and any downwards bias in the estimate appears to be very small. An adjustment factor in the range 1.01x to 1.05x, not 1.24x, appears warranted.
- As I have pointed out elsewhere,[xv] Marvel et al has a number of serious faults, only two of which have to date been corrected.[xvi] Nonetheless, for what it is worth, after correcting those two errors Marvel et al.’s primary (iRF) estimate of the effect on global temperature of the mix of forcings acting during the historical period is the same as if the forcing had been, as per the definition of TCR, solely due to CO2. That is, historical forcing has an estimated transient efficacy of 1.0 (actually 0.99). That would, ignoring the other problems with Marvel et al., justify a multiplicative adjustment to TCR estimates of 1.01x, not 1.30x.
- It is not true that increasing effective sensitivity is a “robust” feature of models. The shortfall of climate sensitivity estimated using the first 35 years’ data following an abrupt CO2 increase (roughly corresponding to the weighted average duration of forcing increments over the historical period), compared to that estimated using the standard 150-year regression method, is negligible (2% or less) for six CMIP5 models; for three of those the short-period estimate is actually higher. The average shortfall over all CMIP5 models for which I have data is only 7%. Moreover, there is little evidence that the principal causes of estimated ECS exceeding multidecadal effective climate sensitivity in many CMIP5 models (in particular, weakening of the Pacific Walker circulation) are occurring in the real world. So any adjustment to observational estimates of climate sensitivity on account of effective climate sensitivity being, in many models, below ECS (a) does not appear to be well supported by observations; and (b) if based on the average behaviour of CMIP5 models, should be 1.08x rather than 1.25x.
[i] Mark Richardson, Kevin Cowtan, Ed Hawkins and Martin Stolpe. Reconciled climate response estimates from climate models and the energy budget of Earth. Nature Clim Chng (2016) doi:10.1038/nclimate3066
[ii] Kyle Armour. Projection and prediction: Climate sensitivity on the rise Nature Clim Chng (2016) doi:10.1038/nclimate3079
[iii] Otto, A. et al. Energy budget constraints on climate response. Nature Geosci. 6, 415-416 (2013).
[iv] Gregory, J. M., Stouffer, R. J., Raper, S. C. B., Stott, P. A. & Rayner, N. A. An Observationally Based Estimate of the Climate Sensitivity. J. Clim. 15, 3117–3121 (2002).
[v] Lewis, N. & Curry, J. A. The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Clim. Dynam. 45, 1009–1023 (2015).
[vi] Richter, I. & Xie, S.-P. Muted precipitation increase in global warming simulations: a surface evaporation perspective. J. Geophys. Res. 113, D24118 (2008).
[vii] Ramanathan, V. The role of ocean-atmosphere interactions in the CO2 climate problem. J. Atmos. Sci. 38, 918–930 (1981).
[viii] Cowtan, K. et al. Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophys. Res. Lett. 42, 6526–6534 (2015).
[x] Cowtan, K. & Way, R. G. Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Q. J. R. Meteorol. Soc. 140, 1935–1944 (2014).
[xi] See http://www.ecmwf.int/en/about/media-centre/news/2015/ecmwf-releases-global-reanalysis-data-2014-0. The data graphed in the final figure shows the same 1979-2014 trend whether or not coverage is reduced to match HadCRUT4.
[xii] Lindsay, R. et al. Evaluation of Seven Different Atmospheric Reanalysis Products in the Arctic. J. Clim. 27, 2588–2606 (2014).
[xiii] Dodd, M. A., C. J. Merchant, N. A. Rayner and C. P. Morice. An Investigation into the Impact of using Various Techniques to Estimate Arctic Surface Air Temperature Anomalies. J. Clim. 28, 1743–1763 (2015).
[xiv] Kate Marvel, Gavin A. Schmidt, Ron L. Miller and Larissa S. Nazarenko: Implications for climate sensitivity from the response to individual forcings. Nature Climate Change DOI: 10.1038/NCLIMATE2888 (2015).
JC note: As with all guest posts, please keep your comments civil and relevant
So what if the 24% is accepted and you are ignored? Then what? Will all the data then be adjusted?
I might add that this seems to be a situation where the goal posts keep being moved. When do you give up? If this becomes the new standard how will we ever know where we are actually at?
“how will we ever know where we are actually at?”
What about the following? As I understand it, what REA shows is that if you determine the TCR from models using blended temperatures and accounting for coverage bias, you get a different result to what you get if you don’t account for coverage bias and don’t use blended temperatures. Also, the former method (blended temperatures and accounting for coverage bias) produces a result that is quite consistent with the energy balance estimates.
Therefore, it appears that the models are at least suggesting that there is a difference, and that if you do an apples-to-apples comparison with energy balance estimates, the models and energy balance estimates are consistent. Therefore, it would appear that claims that models overestimate climate sensitivity are not warranted – at the moment at least.
So, if you’re right, you appear to be suggesting that the model result that the TCR depends on whether or not you account for coverage bias and use blended temperatures is somehow wrong, and that it is just chance that when you do account for coverage bias and use blended temperatures you get a result that is consistent with the energy balance estimates.
Are not tas and ts two different values, and therefore an apples-to-oranges comparison instead?
The excess of the global mean increase in tas over that in ts, averaged across all models, was only 2%.
Well, yes, that’s what REA are pointing out. Energy balance estimates use temperature records in which the temperatures over the oceans is based on SSTs, rather than air temperatures. Most model estimates have used air temperatures everywhere. What they’re showing is that if you do your model estimate using SSTs over the oceans and air temperatures over land (and account for coverage bias) you get results that are consistent with energy balance estimates.
But models are tuned to be as close to the 1960–1990 climate record as possible based on how it is currently measured, not based on marine air temperatures.
So if the same goal-post-moving exercise is done to hindcasts they will under-estimate historical warming. That means that the parameterisations in the models are wrong. A model which under-estimates historical warming gets recent warming “consistent”.
i.e. models are over-sensitive.
Also, the famous 2°C limit is based on temperatures according to the current metrics. So if we are to redefine global mean temperature in terms of tas then we need to raise the “safe” warming target enshrined at Paris to 2.5 or 3 degrees C.
Whichever way you twist it, there is still the same fundamental problem.
It is well overdue that climatologists put more effort into getting their models to work, rather than making contorted efforts to explain away the problem or redefine the historical record to fit the defective models.
I’ll trust models more once they get a latent heat of water vaporization right.
They knew about the problem for years.
nobody is waiting for your approval
Steven, I really did not expect you to defend wrong models.
all models are wrong. including your model of what I think
I am adapting my expectations.
I suppose your models of what others think are wrong as well?
careful Ken, you could get dizzy spinning in circles like that.
My roommate during some of my years in college was the best there was at this stuff. He said: write down everything you know, then relate it some way, and at some point it is complicated enough that you can say this conclusion or result is obvious, and write down what you are promoting. This is an excellent example, built on every alarmist thought, expanded to support the alarmism by only disagreeing with the amount of warming and not understanding, explaining or promoting natural climate cycles.
This consensus stuff is too complicated and model-dependent to even consider. The different sides are fighting only with the consensus alarmist tools. Different degrees of how bad, but all promoting bad. That is really flawed thinking and is not based on actual data.
Natural cycles make sense.
About 2000 years ago, there was a Roman Warm Period and then it got cold. About 1000 years ago, there was a Medieval Warm Period and then it got cold. That was the Little Ice Age. When Oceans are warm, Polar Oceans thaw, snowfall increases and rebuilds ice on Greenland, Antarctic and Mountain Glaciers. Ice builds, spreads and makes earth cold again. Snowfall decreases and the Sun removes ice every year until it gets warm again. It is warm again now because it is supposed to be warm now. It is a natural cycle and we did not cause it. CO2 just makes green things grow better, while using less water. The alarmists scare us so they can tax and control us.
Very precise and thorough rebuttal.
A more general observation about this type of paper. Both Marvel and Richardson argue for ‘adjusting’ observational conclusions about ECS and TCR upward to better conform to CMIP5 model estimates. In effect, they are karlizing the historical temperature data in different ways. This clearly evidences the warmunist belief that the models have skill; that models are true so something in the data must be wrong.
That the models do not work is evidenced by the ever growing discrepancy between CMIP5 and balloon/satellite observations for the lower troposphere, especially the tropical troposphere where the discrepancy is now nearly 4x at YE 2015–despite the period from 1979 to YE 2005 having been parameterized to best hindcast, per the CMIP5 ‘experimental design’. That is, the Christy chart Schmidt tried but failed to discredit (Mcintyre doubly shredded Gavin) by itself proves no CMIP5 temperature skill (except for Russian model INMCM4, which has the lowest CO2 forcing, highest ocean thermal inertia, and lowest water vapor feedback).
This is exactly backwards to true science. Feynman’s statement that when data doesn’t match hypothesis then the hypothesis is wrong, applies here. It is astounding that these ‘climate scientists’ don’t see how perversely upside down their ‘scientific’ arguments are. This should come back to bite them hard in the future.
SST in models is tos, not ts.
I checked your period 2006-2025 for RCP8.5; it gives (ocean):
So in tas (near-surface air temperature) the ocean is warming 33% faster than in tos (sea surface temperature).
So Christian, just to be certain of what you’ve done, you are saying you ran 2006 to 2015 for RCP8.5 using one run with SST and another with 2 meters above the ocean surface, and the result with 2 meters above the ocean surface was 33% faster than with SST, and in agreement with models?
whoops, 2006 – 2025.
Yes, but only to the extent that the Climate Explorer gives the correct values; if you want to check it on your own, just click both.
Or in other words, by the year 2100 (both on the same baseline):
tas would have warmed by 3.18 K (near-surface air)
tos would have warmed by 2.45 K (SST)
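Taking the two quoted end-of-century numbers at face value, the implied excess warming of near-surface air over SST is a one-line check (back-of-envelope arithmetic only, not a re-analysis of any model output):

```python
tas_warming = 3.18  # K by 2100, near-surface air (as quoted above)
tos_warming = 2.45  # K by 2100, SST (as quoted above)

excess = (tas_warming / tos_warming - 1) * 100
print(f"tas warms {excess:.0f}% more than tos")  # prints "tas warms 30% more than tos"
```

So the end-of-century figures imply roughly 30%, close to but not exactly the 33% quoted for 2006-2025.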
I don’t believe that can be correct – a 33% tas vs tos warming difference over the open ocean is ridiculous. Perhaps it is contaminated by sea ice effects. My calculations are based on downloaded CMIP5 model data.
My understanding is that tos and ts are essentially the same for the open ocean, although calculated on different grids.
See the comment before (Christian Nicht Mehr Steger | July 12, 2016 at 2:44 pm |); it clearly shows that SST is warming 33% less than near-surface air temperature over the oceans.
So you can check it on your own.
Sorry, I mean: Christian Nicht Mehr Steger | July 12, 2016 at 2:38 pm |
” Perhaps it is contaminated by sea ice effects”
Here is the data for the tropics (25S-25N):
But it’s less: for the tropics it’s 9% more warming in near-surface air than in SST, which is still much more than your 2%.
Excluding the polar caps (60S-60N), it comes out that near-surface air temperature over the ocean is increasing 14% more than the SST.
Did you download your data from the CMIP repository or KNMI?
I downloaded my data from the ETH subarchive, which is a user-friendly mirror of a subset of the CMIP5 data archived on ESG.
ah cool. thanks.
It has the world’s best geomagnetic database; it is Einstein’s old school, with more than 20 Nobel laureates.
I suggest RCP8.5 be dropped in this type of work/discussion. It should be evident by now that its concentration and temperature projections are absurd. Use RCP6 on an interim basis if you wish. If emissions do stay below RCP6 projections, then something else will have to be used.
I agree that the RCP8.5 scenario is unrealistic. IMO RCP6.0 probably, prior to the new 2015 Paris conference ‘pledges’ or any further emission reduction actions, gave a reasonable projection of likely changes in total forcing for much of the rest of this century.
“It is astounding that these ‘climate scientists’ don’t see how perversely upside down their ‘scientific’ arguments are.”
Scientists on the different sides are using the same flawed tools to argue about CO2 sensitivity.
When will some more turn to understanding natural climate cycles?
A few do, but most only consider their own natural theory and do not consider other skeptic theory.
I have not been to a climate conference that was dedicated to studying natural climate cycles. I think the London Climate Conference in September will be different. There are discussion and debate sessions in the agenda.
And in the Water conference in Bulgaria in October.
So I did my own analysis; it shows that there is no discrepancy between models and observations if we compare apples to apples. Or read here: https://andthentheresphysics.wordpress.com/2016/06/28/climate-sensitivity-reconciled/#comment-82033
Different sides are using models that don’t work to explain stuff they don’t understand. We know they don’t understand because they build the models with what they do understand and the models still don’t work.
PCT, a great comment. Terrific sound bite. Can be shortened to: Warmunists don’t understand climate. We know that because they build climate models with what they do understand, and the models don’t work.
Skinny La Nina? in the +PDO+AMO style:
Big, Fat, Orange Sun…
or that dang invisible gas CO2, you tell me?
There is an interesting oscillation in that thin blue line with spurs running out both N and S. It has a wavelength of about 9 deg. of longitude.
I wonder what that is telling us.
In fact, there is a hint of a similar pattern in the Atlantic too. Seems to be about the same scale, with similar round, alternating incursions of warmer water from each hemisphere.
I looked through a lot of the older plots. I found hints of it, but if there are plots as pronounced as the current ones, I missed them:
It’s almost like there is no feed of cold water coming up from Antarctica along the SM coast.
And yes, the Atlantic sometimes shows the same pattern.
JCH and climategrog:
I believe you are talking about the ITCZ’s effects, as visible in SST measurements.
If ACO2 turned off, everything would be peachy? Righto. Climate sensitivity is both a member of the libertarian and the conservative movement, and a bible thumper to boot. Serendipity abounds. Righto.
Climate is the average of weather. It’s a number. It has no sensitivity. Only foolish Warmists redefine climate to mean anything they want at the time.
It’s a symptom of Cargo Cult Science, sometimes called climatology.
I’ll leave it to Richardson and Marvel to address your criticisms of their work. I found their analyses to be convincing, and took their findings at face value in my News & Views. If their results are in error, my estimates would be too, obviously.
However, I can address your claims regarding time-dependent climate sensitivity. As you and I have discussed before, I believe your calculations to be flawed. The reason is that you’re still estimating ECS from regression over the full 150 years of the CMIP5 4xCO2 simulations — a method we know to underestimate ECS in the models. More accurate estimates of ECS for the CMIP5 models can be found in Geoffroy et al, from regression over years 21-150 as in Andrews et al, or in my presentation at bit.ly/29LhzAL. All show higher ECS values, which leads to the result that effective climate sensitivity is substantially lower than ECS — by around 30% on average, not by the 7% that you claim. This behavior is robust in the sense that the vast majority of models suggest an upward revision is necessary (with some showing ECS over 50% higher than effective sensitivity!). You’ve raised good questions about whether this model behavior applies to nature, and if so, which of the models we should use to guide us — but let’s not misrepresent what the models are actually showing.
Perhaps it is a failure to understand on my part, but it sure sounds like what you are saying here boils down to “If it doesn’t agree with the models, it can’t be right.”
Kyle, if I understand correctly, they’re discussing a metric that’s pulled from the models, used to compare the models and observations.
Nic is calculating this model metric one way, while Kyle is calculating it another. See the last point of Nic’s post (#3), and compare it to Kyle’s comment above.
I don’t know Ben, the only one who refers to observational data is Nic.
I see Kyle refer to “More accurate estimates of ECS for the CMIP5 models …” and “This behavior is robust in the sense that the vast majority of models suggest an upward revision is necessary ”
So they use different techniques to produce input to the models, and when the majority of models seem to agree they believe they must be doing it correctly? I understand using two different methods to come up with an input. What I don’t get (again, assuming I understand it at all) is the part about believing agreement among models means anything. It is agreement with observations that counts, not consensus between the models.
“More accurate estimates of ECS for the CMIP5 models can be found in Geoffroy et al, from regression over years 21-150 as in Andrews et al, or in my presentation at bit.ly/29LhzAL.”
Yes, in an abrupt 4xCO2 simulation the climate responds differently, because things like the AMOC slow down more strongly than in a 1% increase scenario (which is believed to counter some parts of the warming).
Climate is normal until proven otherwise, is what Dr. Ed Berry says is the null hypothesis of AGW. “If you can’t prove the null hypothesis is wrong then your climate hypothesis is wrong,” says Berry. “You have not proved climate is abnormal.” Moreover, Berry points out that no one has shown the findings of Soon, Connolly & Connolly (‘Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century’) are wrong. Soon, et al., showed that since 1880, “global temperature correlates with total solar irradiance but not with CO2. No correlation, no cause-effect.”
Someone should teach Ed Berry about Type I and Type II errors. Also, that no one has shown the findings of Soon, Connolly and Connolly to be wrong does not mean that they are not wrong.
My null hypothesis for the day
Ho: we are not alone in the universe. A special race of invisible unicorns is watching over us.
Like you said, if you can’t prove the null wrong……..
Ho: Wagathon has no brain..
say hello to type II errors.
Foolish Warmists continuously raise pointless irrelevant arguments, with the clear intent of avoiding having to address fact. An example of the Warmist deny, divert, and confuse, tactics, writ large.
An hypothesis is generally accepted by real scientists as a proposed explanation for a phenomenon.
The foolish Warmist definition seems to be whatever lunatic fancy emerges from their fantasy. My hypothesis, based on observation of your writings, is that you suffer from a mental defect.
Others claim you are quite intelligent. Maybe they are using the Warmist definition of intelligence.
My hypothesis is that the presence of GHGs has no heating effect on the Earth. This hypothesis is based on the assumption that the Earth’s surface was initially molten, and is not now. It has cooled. The corollary hypothesis is that there is no net energy balance, so beloved of foolish Warmists, over this time. The Earth has apparently cooled.
My hypothesis about the non-heating effect of GHGs can be simply falsified. Just show a repeatable experiment demonstrating the ability of CO2 to raise the temperature of an object which it surrounds – with appropriate restrictions.
Heating CO2 does not demonstrate any intrinsic CO2 heating abilities, although the distinction is lost on foolish Warmists, who follow Cargo Cult Science principles of convenience and correlation.
As to your hypothesis that Wagathon has no brain, what phenomenon are you trying to explain? it is obviously impossible for a person with no brain to write understandably. Are you writing Warmese, with its own perverted definitions, or attempting to be gratuitously offensive?
You can’t even provide an hypothesis explaining the supposed planet heating effect of CO2, can you? Foolish Warmist.
See, Aliens Cause Global Warming: A Caltech Lecture by Michael Crichton
flat earth Flynn for the Wynn
Soon, Connolly & Connolly took the Hoyt & Schatten 1993 TSI series, which both Hoyt and Schatten have disavowed, and compared it to the Connolly’s temperature series, which was “published” in an online journal that they created themselves for the purpose of publishing their own papers.
I’d missed that, thanks. So their paper uses a TSI timeseries that is not regarded as reliable and compares it to a temperature timeseries that is essentially their own, rather than one of the more widely accepted series?
Yes. The TSI series is Hoyt & Schatten 1993 extended to the present using ACRIM, and the temperature series are the Connollys’ own, “published” here: http://oprj.net/
Thanks. Just looked at their paper more closely. Their NH temperature series shows more than 0.5C of cooling between about 1950 and 1970. GISTEMP shows around 0.2C for 23.6 – 90N (and little cooling for 23.6S – 23.6N).
Both Hoyt & Schatten disavowed their own 1993 TSI series after natural variations in solar activity made a dog’s breakfast of Western Academia’s prognostications and AGW fearmongering?
After it was shown that variation in solar activity was wildly overstated in past work. e.g. this is what Schatten now says:
“Solar activity appears to reach and sustain for extended intervals of time the same level in each of the last three centuries since 1700 and the past several decades do not seem to have been exceptionally active, contrary to what is often claimed”
I don’t, however, see a statement from Hoyt (who is retired) disavowing their old work.
Apparently notwithstanding evidence that, as it turns out, “the modern Grand maximum (which occurred during solar cycles 19–23, i.e., 1950-2009),” says Ilya Usoskin, “was a rare or even unique event, in both magnitude and duration, in the past three millennia.” [Usoskin et al., Evidence for distinct modes of solar activity, A&A 562 (2014)]
Good to know that even though Schatten has been thrown under the bus, Hoyt and Schatten 1993 will live on forever.
Please forgive an uneducated question …
How can meaningful measurements of surface temperature and air temp 2 meters above the surface have any meaning in the context of tides?
Climatologists no doubt claim that such concerns are irrelevant. Temperatures can be adjusted and massaged to fit their toy computer games, in any case.
Foolish Warmist climatologists live in a fantasy, where altering the past apparently changes the future. I suppose it keeps them off the streets.
The theory is correct, so any observational record that indicates less/no warming either has to be adjusted or relegated to the junk pile. What’s so hard!?
One significant problem with the mad assertions of foolish Warmists is that not one of them has managed to produce a falsifiable hypothesis relating to the alleged planet heating properties of CO2.
As you point out so eloquently, self styled climatologists either adjust or discard observations which they don’t like.
Cargo Cult Science, without even the pretense of a falsifiable hypothesis.
Foolish Warmists! The triumph of fantasy over fact – a tenet of the foolish Warmist Church of Latter Day Scientism!
And yet, the world has actually cooled for four and a half billion years. Obviously too hard for foolish Warmists to accept. How do you intend to adjust for that? I know, just substitute another fantasy for inconvenient fact!
“The theory is correct, so any observational record that indicates less/no warming either has to be adjusted or relegated to the junk pile. What’s so hard!? ”
Isn’t this what some here have been claiming is how a segment of climate scientists reason? Or is it just some JCH sarc?
“Please forgive an uneducated question …”
Steven Mosher, in typical foolish Warmist fashion, attempts to hide his lack of knowledge, by dismissing a request for knowledge.
Foolish Warmists seem addicted to demonstrating their foolishness by uttering meaningless syllables such as “err . . .”, “no” “wrong”, “duh”, “huh” on a regular basis. The mark of the knowledge deficient fool.
Foolish Warmists pretend to measure sea surface temperatures. They can’t even define what the surface is, or where it is, at any given time. Is there any point? Who knows? Foolish Warmists just lapse into meaningless sciencey jargon. Nothing explained, nothing of use. Just more meaningless blather.
Thanks for commenting. Estimating ECS from regression over the full 150 years of the 4xCO2 simulations is the standard method of estimating ECS in CMIP5 models – it was used for the canonical estimates given in AR5 WG1 Table 9.5. I don’t think that we know that it underestimates ECS in CMIP5 models generally. That knowledge would require running models to equilibrium, which has been done in very few cases. Just extrapolating linearly from a regression over years 21-150 seems unsound to me.
For instance, I calculate that in HadGEM2-ES ECS estimated that way is 5.65 K vs 4.55 K based on years 1-150. The Geoffroy et al (2013, part II) J Clim estimate is similar at 5.55 K. But as shown in Figure 11 of Andrews et al (2015) J Clim, as the simulation continues the regression slope alters back again, and by the end of the simulation in year 1290 the best estimate of ECS is ~4.7 K, much closer to the years 1-150 based estimate than the years 21-150 based estimate. I don’t think this is the only model in which the regression slope increases again sometime after year 150.
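The difference between the two regression windows being debated can be illustrated with a synthetic two-timescale abrupt-4xCO2 response (all numbers invented for the sketch; the forcing, amplitudes and feedback values are not from any actual CMIP5 model, but they are built so that the true equilibrium warming is 8 K, i.e. ECS = 4 K):

```python
import numpy as np

# Illustrative Gregory-style ECS estimation from a synthetic abrupt-4xCO2
# run with fast and slow response modes (all values made up for the sketch).
t = np.arange(1, 151)                   # years 1-150
F4x = 7.4                               # assumed 4xCO2 forcing, W/m^2
fast = 4.0 * (1 - np.exp(-t / 4.0))     # fast-mode warming, K
slow = 4.0 * (1 - np.exp(-t / 300.0))   # slow-mode warming, K
T = fast + slow                         # surface warming, K
N = F4x - 1.3 * fast - 0.55 * slow      # TOA imbalance, W/m^2
# (constructed so N -> 0 as T -> 8 K, i.e. true ECS = 4 K)

def gregory_ecs(sl):
    """Regress N on T over a year slice; ECS = x-intercept / 2."""
    b, a = np.polyfit(T[sl], N[sl], 1)
    return (-a / b) / 2

ecs_full = gregory_ecs(slice(0, 150))   # years 1-150 (AR5-style)
ecs_late = gregory_ecs(slice(20, 150))  # years 21-150 (Andrews-style)
print(ecs_full, ecs_late)
```

In this construction the years 21-150 regression recovers close to the true 4 K, while the full 1-150 regression comes out several tenths of a kelvin lower, because the early fast-response years steepen the fitted slope. Whether real models behave like this toy, and whether the slope later changes back again (as in the HadGEM2-ES case cited above), is exactly the point in dispute.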
I don’t see that I am misrepresenting anything – I stated how the ECS that I was referring to was calculated, and that is the standard method of estimation, used by the IPCC. Your method is not the standard one, and it is not at present clear whether it generally provides a more accurate estimate of ECS for CMIP5 models.
From a practical point of view, I think warming over the next several centuries can be projected quite accurately enough using canonical 150 year regression based ECS estimates. Beyond that time, slow feedbacks that are ignored in ECS may be more important than how the climate feedback parameter changes.
It worries me that you found the Marvel et al analysis to be convincing. May I suggest that you read my critique of it? The fact that they have had to make two substantial corrections to their paper following my critique – they used the wrong F2xCO2 values and failed to measure land use forcing in the historical run – should surely dent confidence in the quality of their analysis.
This is an astounding claim: “From a practical point of view, I think warming over the next several centuries can be projected quite accurately enough using canonical 150 year regression based ECS estimates.”
Does projected mean predicted? Surely not. But if not then what does it mean? What is it that can be done accurately enough over the next several centuries? Several centuries!
I agree; how can we expect any model to project future temperatures given that most of the temperature change is caused by natural variability? Have you seen this chart showing that the temperatures in Ireland, Iceland and Greenland changed from near-glacial to near-current temperatures in 7 years (14,500 years ago) and in 9 years (11,500 years ago)? See Figure 15:21 here: http://eprints.maynoothuniversity.ie/1983/1/McCarron.pdf
Climate changes abruptly. Always has. Always will. Also relevant is that life thrives during warm times and rapid warming events but struggles during cold times and cooling events. (Note, although this is of a regional rapid change, not global, it is relevant because life is impacted by the local climate change, not the global average).
There’s much more on abrupt climate changes in the past. Until the models can predict the time until the start of the next abrupt change, the sign of the change (warming or cooling), the rate of change, and the magnitude of change, the models will be highly misleading for informing policy (IMO).
Climate sensitivity is a deeply confused concept. Sometimes it means (abstractly) the warming that would occur if nothing else happened, which is not a prediction, just an abstraction. But then it is used as a prediction, which is completely unsupportable because CO2 does not control temperature.
In Cowtan & Way (2014), Fig. 4(b), they only use IABP data from Rigor, so the red line only goes up to 1998. That’s only halfway along the graph. Has anyone tried adding the rest of the IABP data to present, to fill in the remainder?
Then something way inside the unexamined earth or under the unexamined oceans or way above the earth will have its say. Bond Event due? Some say…but that’s just saying, not examining.
And with a few more half-digested scraps of new sort-of-knowledge the Age of Incuriosity will extrapolate from slightly different points to arrive at slightly different wrong conclusions.
I just want the best estimate …. with confidence limits <0.1C :)
At the same time, I want the same for the damage function.
If you want confidence limits < 0.1 C, you will probably have to wait a few decades.
Heck, the definition of climate sensitivity itself may exclude confidence levels that high, since climate sensitivity is not necessarily a constant function of temperature.
ECS is not a function of temperature, but it is a value at a particular time and state of the Earth’s surface conditions. If we could measure it with high precision, the confidence limits would be very small. The confidence limits are the inaccuracy in estimating the value, not the range of the value itself at a point in time.
You say it’s not a function of temperature… but we know that forcing is not a perfect logarithmic function of CO2 concentrations, but more importantly the feedbacks are not perfectly linear. So warming due to doubling from 280 ppm to 560 ppm might be different than warming from 400 ppm to 800 ppm. And warming from 280 ppm to 560 ppm at current solar irradiance might be different than warming from 280 ppm to 560 ppm at maunder minimum solar irradiance.
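For reference, under the commonly used simplified forcing expression (ΔF = 5.35 ln(C/C0), from Myhre et al. 1998) every doubling gives identical forcing; the caveat above matters because the non-logarithmic and state-dependent parts are exactly what that simple formula leaves out. A quick check of the approximation:

```python
import math

# Simplified CO2 forcing expression (Myhre et al. 1998): dF in W/m^2.
def co2_forcing(c, c0):
    return 5.35 * math.log(c / c0)

f1 = co2_forcing(560, 280)  # first doubling
f2 = co2_forcing(800, 400)  # a later doubling
print(f1, f2)               # identical under the log approximation, ~3.71 W/m^2
```

Any state dependence of feedbacks, or departure from this logarithmic form, shows up as a difference between doublings that the formula cannot represent.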
My confidence in climatologists’ confidence limits is limited. Their confidence in their confidence limits appears to be unlimited.
Confident confidence limits, limited by limited confidence, but confidently resulting in unlimited confidence, evidenced by boundless confidence in unstated, but confident confidence limits, seems like something a foolish Warmist climatologist might seriously propose, in all confidence.
I think what I have written is true, but my confidence is limited.
I hope this explains the science underlying climatology.
ECS does change as the conditions of the surface of the Earth change – e.g. amount of ice cover, desert, forest, grassland, location of the tectonic plates etc. I accept that. But you are missing my point. For the past 30+ years the confidence limits on the estimates of ECS have been around 1.5C to 4.5C, a range of 3C. But that is the uncertainty on the estimates, not on the variability of ECS for the current Earth state.
Ghil’s Figure 1.1 http://research.atmos.ucla.edu/tcd///PREPRINTS/Math_clim-Taipei-M_Ghil_vf.pdf depicts how ECS may change with “change of insolation at the top of the atmosphere” (i.e. ECS is the slope of the tangent to the curve). So ECS does change a bit, but that is not the point. The point is that if we could measure ECS it would have a very low uncertainty.
I am not suggesting we’ll suddenly get there. My original comment was meant to be tongue in cheek (Sorry, I should have made that clear for people who don’t get that).
you dont need estimates that tight.
you dont need damages
totally different approach.
From your link –
“While such analyses assume ‘perfect foresight’ of a benevolent ‘social planner’, an accompanying suite of experiments explicitly acknowledges the rather uncertain nature of key responses to human decisions within the climate as well as the technology system. We outline first results into that direction and indicate an intrinsic need for generalisation within target approaches under uncertainty.”
I can’t see details of the experiments referred to. I’m assuming the “experiments” are not experiments in the physical scientific sense, but I may have overlooked them. I’m sure you will help if you can.
You’ve been saying repeatedly for several years that you don’t need to know the damages (I presume you think we just have to believe they will be bad). But you’ve never explained your justification for spending $trillions per year on the climate industry for no benefit. I think you are nowhere near as smart as you think you are. I think you haven’t the faintest clue about what information is needed for rational policy analysis (with emphasis on rational).
To try to make this simple enough for you to understand, to justify expenditure in $ we need to show the benefit in $. Temperature change cannot be converted to $. But damages or benefits (e.g. of temperature change and/or increasing GHG concentrations) are presented in $ and can be compared with costs (if the methods and assumptions are properly comparable). To estimate the damages or benefits we need to know the damage function. If you don’t understand this I suggest you begin by reading Nordhaus “A Question of Balance”. It’s a good introduction for you.
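The role of the damage function described here can be sketched in a few lines: it is the map from warming to a GDP fraction, which puts damages in the same $ units as mitigation costs. The quadratic form follows the DICE-style convention; the coefficient and GDP figure below are purely illustrative, not Nordhaus’s calibrated values:

```python
# Illustrative DICE-style quadratic damage function: fraction of GDP lost
# as a function of warming in K. Coefficient a2 is a made-up placeholder.
def damage_fraction(t_warming, a2=0.0023):
    return a2 * t_warming ** 2

gdp = 100e12  # illustrative world GDP, $
for t in (1.0, 2.0, 3.0):
    d = damage_fraction(t) * gdp
    print(f"{t} K warming -> ${d / 1e12:.2f} trillion/yr damages")
```

Cost-benefit comparison then reduces to weighing numbers like these against the $ cost of mitigation for the same scenario, which is the step that cannot be done with temperature alone.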
Sorry to be condescending to you, but you’ve earned it after several years of really stupid and ignorant comments on the matter of impacts.
“To try to make this simple enough for you to understand, to justify expenditure in $ we need to show the benefit in $.”
Or you show the benefit in terms of utility.
you dont need damages.
just sit down.. watch the video and learn.
or go read taleb.
you are much better off promoting Nuclear. go do that. people might listen.
You dont need to justify spending trillions.
1. because nobody is spending.. its a consumption reduction
2. its a tiny portion of GDP
Everything you are asking for is beside the point. just watch and learn
You haven’t got a clue what you are talking about. You can’t even explain what you are trying to say. You know nothing about policy analysis. You are making baseless assertions about stuff that is way outside your area of expertise. Go read Nordhaus, Tol, Lomborg. They understand. You don’t. Taleb is a scientist; he is not a policy analyst. Scientists don’t do policy analysis.
Oh yea, and the climate industry is costing $ trillions. And your argument that it is a small proportion of GDP really shows up your ignorance. What is relevant is the net benefit, not the proportion of GDP. And you can’t calculate net-benefit without having the damage function.
You are incredibly ignorant for someone so pompous and arrogant.
The Australian Treasury estimated that the Australian ETS (which started in 2012 and was repealed in 2014) would cost the Australian economy A$1.345 trillion (in 2011 A$) from 2012 to 2050. The cost per year would continue to escalate thereafter. No benefits were estimated (just innuendo in words). The electorate dumped the Labor government at the next election.
I am amazed at how little you understand about what is relevant for policy analysis. You do, however, provide some insight into why the developed countries have got into such a mess with belief in the CAGW religion overcoming rational analysis and rational decision making..
A few points:
1) if you want to be taken seriously on finance / economic matters, don’t reference Taleb. Since the financial crisis, he has been distinguished mainly by a litany of failed predictions. In the academic world, he is regarded as a gifted writer with a flair for self promotion. Perhaps you should read his nonsense on homeopathy and GMOs before you cite him as an authority.
2) your point about “nobody is spending” and this is just a “consumption reduction” is so silly it is not even wrong. It shows a fundamental misunderstanding of economic concepts. I assume you have taken at least a basic economics course and remember C+I+G. Just ponder that formula for a minute and you will realize your error. You may want to also read up on social costs.
3) yes, the cost is significant. The Stern Review placed the cost of mitigation at 1% of global GDP, subsequently revised up to 2%. Many other economists are in a similar neighborhood. I’ll leave it to you to look up global GDP.
In case you miss it, I responded to three of your recent comments here: https://judithcurry.com/2016/07/12/are-energy-budget-climate-sensitivity-values-biased-low/#comment-796711
“should surely dent confidence in the quality of their analysis.”
Not an ice cube’s chance in Hades’ realm. Motivated reasoning explains why. They will never accept that the CMIP models are biased high, independent of the factual evidence.
This thread appears to me to be Models All The Way Down; i.e., an effort to model the past compared to recent observational data. The likely assumption being, once the models can faithfully reproduce the past, then the future can be assumed to be more in keeping with the models and future observations.
None of the above makes any sense to me. The past remains the past. All the nuances that made the past as it turned out to be, are no longer the relevant influences the present holds. And, such determinants are unlikely to be as influential in the future as they may be currently. Chaos is inherently unpredictable. Words such as “deterministic chaos” are a non-sequitur attempting to provide language, and, hence, structure on a moving target. We have even come to use the terms Transient and Equilibrium Sensitivity responses as if these were real events or measurable behaviors.
Frankly, although I admire the effort and determination of trying to make sense of our real world, I don’t see any such understanding in the near future as our knowledge gap is huge.
There is another thing that bothers me, and that is evaporation from water surfaces. Dry cold air coursing over somewhat warmer bodies of water at a high rate of speed should pick up more water, say in the Arctic and Antarctic Oceans, than a gentle breeze of humid air coursing over an open body of tropical water. Does anyone else see that there would be a substantial difference in evaporation between pole and equator? Do models make such differential estimates?
Re evaporation of water.
I agree there are substantial evaporation rate differences between poles and equator. Trying to measure the differences is fraught with difficulty – possibly to the point of uselessness. Note –
“This type of negative correlation is a major disadvantage of all pan and tank measurements and was well displayed by the measurements reported by Richards (1979) for the dry summer of 1976 (Fig. 4). Perhaps this is why Buchan (1867) wrote that ‘There is no class of observation which shows such diversity, we may almost say contrariety of results, as those made by different observers on evaporation.’”
I assume that deposition of ice in Antarctica is greater than sublimation; otherwise the ice could not have reached the depths it has. Apart from that, I have about as much idea as anybody else.
It’s all very complicated. I don’t think climatologists have the faintest idea how it all works. They don’t even have a falsifiable hypothesis about the planet heating abilities of CO2. Subjects such as practical evaporation effects would probably leave them totally flummoxed.
Yet another missing brick in the wall.
But it can be measured. We have data on how much the air warmed at every surface station recording data that day, you can calculate how much solar forcing there is that day, and stations poleward of latitude 23 all have a nicely varying solar forcing. CS = ΔT / ΔWhr.
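The commenter’s back-of-envelope formula can be sketched as follows. The numbers below are illustrative, not observed values, and the ~3.7 W/m² per-CO2-doubling conversion is a standard assumption, not part of the comment.

```python
# Toy sketch of the back-of-envelope estimate CS = dT / dF.
def sensitivity(delta_t_k, delta_f_wm2):
    """Sensitivity in K per W/m^2 from a temperature change and a forcing change."""
    return delta_t_k / delta_f_wm2

# Convert to a per-CO2-doubling figure using the standard ~3.7 W/m^2 forcing
# for doubled CO2 (an assumption, not taken from the comment).
F_2XCO2 = 3.7  # W/m^2

def sensitivity_per_doubling(delta_t_k, delta_f_wm2):
    return sensitivity(delta_t_k, delta_f_wm2) * F_2XCO2

# Example: 0.8 K of warming for 2.2 W/m^2 of forcing change (illustrative values)
print(round(sensitivity_per_doubling(0.8, 2.2), 2))  # -> 1.35 K per doubling
```

Whether station-by-station daily regressions of this kind recover a meaningful climate sensitivity is, of course, exactly what the thread disputes.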
The point of the REA16 paper was that if you compare like with like (match the models only where the data were, and allow for the differences in type of data), they agree with the data. This is to counter the common argument that the models are too sensitive compared to observed global warming. The argument then centers on what happens where there isn’t data. Who knows? The models have an answer. The skeptics think that despite matching the data so well where there is data, they somehow can’t match the real world where there is none. Tough argument to make. Having no data, you can argue anything about it.
Do observations qualify as data? Do model results qualify as data?
Yes and yes.
Jim, you do not seem to be an experimental physicist. Not everybody has to be one; thank you for stating your position clearly.
Some experimental physicists are a bit narrow in their outlook, but they are OK. In the modern world you have model experiments that, like all experiments, produce data too.
People like Steven Mosher produce data, as you put it. They fabricate data. That is, they produce data where none actually existed. They call this process science.
Toy computer games also produce digital fabrication. You may call this “data” if you wish, and the process which creates it, “experiment”.
If you hear muffled laughter in the background, it may be real scientists expressing their opinions.
I know some people believe that the characters in TV shows are real. Likewise, some people believe that computer graphics produced by NASA, for example, are reality.
Good for you. The country probably needs more gullible believers!
Keep that planetary heating going full blast!
You have two categories, “Verified by Mother Nature”, and “Verified by Models”. I only have one.
Well, there you have it, it is settled. Jim D has decreed that model results are “data”.
Now we can eliminate the effort to collect all of those pesky observations.
too funny Flynn.
Sorry, I don’t produce data where there is none.
I make a prediction. you can go get the data and test it.
that behavior has a name…..
oh yeah, science.
Don’t models have to be validated in order for their output to be considered as data?
Tim, don’t worry. The CAM 5.1 model I am referring to has been scientifically validated, whatever that means.
Models produce data. Models are used in engineering, for example, where you can test building structures or planes with data from model experiments. You can model the Earth’s climate and weather: increase the sun’s strength in an experiment and see what happens, or change the GHGs and compare it to the natural experiment where the same thing is being done. It’s all data.
A reliable weather model would be extremely welcome.
Ask Judith about the ECMWF model some time. Her business interest depends on its reliability.
You wrote –
“I make a prediction. you can go get the data and test it.”
Any fool, even a Warmist fool, can make a prediction. What is the precise prediction you have made? If it is still in the future, it cannot be tested.
When did you make this wonderful prediction? Was it any better than a naive persistence forecast? How did you establish this?
Foolish Warmist. Much faith, little fact, even less understanding.
Good luck with the prediction business. I’m sure you’ll become enormously wealthy and famous, if such is your desire, once the world becomes aware of your predictive abilities.
Are you sure you are talking about the future, or about the Warmist definition of prediction, which relates to adjusting figures to fit the output of a toy computer game?
The world wonders!
Flynn. Go download the prediction.
Foolish Warmist. If you think that I live to do your bidding, you are sadly misguided.
Mindless foolish Warmist blather praising your own magnificence seems to be having less effect these days.
Maybe you could keep trying to change the future by adjusting the past. It may work, eventually – who knows? In the meantime, the universe continues to unfold as it should.
No warming due to CO2,
Once again you have commented without having apparently read the material you are commenting on.
Go read what Lewis wrote above. He is saying that the models don’t match with the data even with those adjustments. His argumentation doesn’t centre on what happens when there isn’t data, in fact he specifically goes on to do comparisons with data that has much less of a problem in this regard.
He did not show any graph displaying what he was saying. If they don’t match where the data is, he needs to show his equivalent of the graphs in the paper, otherwise it is just words.
I’m not sure what to take from what you are saying. Do I understand you to say that because it is in words but not pictures you are unable to understand it?
I love this comment from Jim D: “He did not show any graph displaying what he was saying”
So if you don’t produce graphs, it ain’t science.
Most scientific papers do have graphs. They are important to making a case. It rarely can be done with just words. You need to show data to refute data. That’s how it works.
HAS, yes, he made no quantification of how what he said affects global temperatures. Does the model fit the observations where the REA16 paper said it does or not? Did he even refute that part? For all we know, everything he says is within the error bars already presented by REA16.
As I said, go read Lewis and Richardson so you can comment on what the debate is about, rather than on what you, from an apparently superficial read (given the absence of pictures to guide you), think it might be about.
There is a clue in the 1st para of this post and in the last three sentences of Richardson’s abstract.
HAS, that is why I said at the top Lewis has to counter that argument by arguing about places where data isn’t. That’s his only way, and his argument was even weaker for not having done any math on the assertions he makes.
It’s well known that foolish Warmists love to create brightly coloured and essentially meaningless pictures.
They seem to be of the view that the brighter the colours, the more reliable future predictions become.
One day, these foolish Warmists will realise that the right hand side of the graph (the future) may not bear any relationship to the left hand side. Economists and financial wizards may come to the same conclusion. Who knows?
“The point of the REA16 paper was that if you do like with like and match the models only where the data was and allow for the differences in type of data, they agree with the data.”
As an aside, if this is true then all previous attempts to tune the models have been done wrong – they tuned their orange model to the apple reality, as it were. Yet marvellously, REA16 shows they got it right! How about that, eh? That’s obviously why data should be made to match models, innit? Eh? Eh?
Do I detect a faint whiff of sarcasm? Is it because the bumbling fumblers can’t even model what’s already occurred?
Quickly! Summon a skilled data manipulator (preferably with an immaculate white scientific coat) to torture the data into compliance.
The future will look after itself, as usual.
Some people are surprised because the models can only match the data when they have the forcing changes going on. It is necessary to have all the physics to do this. Omit the forcing and the model can’t represent the data.
Mike Flynn: Faint whiff? Your olfactory sense is decidedly under-reporting!
Jim D: What has the CONTENTS of the model physics got to do with TUNING the models and the COMPARISONS of models to reality?
My point, as Mike noticed made with sarcasm, was this:
The models are NOT ground up – we do not and likely never will have enough computing power to simulate the climate from first principles. Therefore there are parameterisations. These need to be “tuned”. They were, using GMST among others. Now REA16 is saying that the GMST calculation is different between measurements and models. Since this is new research, no-one used (as far as I know) REA16 style comparisons to tune the model parameters – they used the ones that were out by 24% according to REA16. REA16 then says that if we do it correctly (the comparison) reality fits the models very well.
So it’s an amazing piece of luck, wouldn’t you say, that models badly tuned because of faulty comparisons to reality are still surprisingly accurate when you do the comparison the “correct” (REA16) way.
Surely REA16 is telling us that we need to retune all climate models using the correct comparison method and then and only then re-run the models and re-do the comparisons using the “correct” (REA16) approach. You cannot hide behind “this doesn’t matter”, because clearly REA16’s 24% is not in any way “insignificant”.
However, to me, even then this is fiddling at the edges – absolute temps (not trends) are out a goodly way in the models. This matters – ice doesn’t melt when the anomaly changes x degrees, it melts based on absolute temperature. Melting ice affects albedo etc etc. Similar things pertain to precipitation. Get that wrong, your projections are useless – especially where (as per IPCC) feedbacks dominate.
The tuning is related to the global energy budget, not the surface temperature map. Matching surface temperatures is a bonus.
I really do despair. You constantly show you don’t read what you are commenting on. Go read what literature on tuning GCMs exists.
To give you a hand, the online draft of the abstract of Mauritsen et al. “Tuning the Climate of a Global Model”, starts:
“During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution.”
The paper makes it clear that once upon a time radiation balance was the main game, but these days getting historic global temps right is the entry level for models, and this is what gets tuned to.
“Yes and yes”.
As often as this has seemed the case (for the warmists), I don’t think I’ve ever seen it admitted to.
Could someone please explain to this layman how increasing the climate sensitivity values will make models more accurate if the models were already too warm 97% of the time?
harkin1, that’s an easy one. They will tell us that the earth must have warmed more than we think and therefore the corrected observations confirm the models, and vice versa. See how that works?
Well, one of the results from REA is to show that the warming in the models depends on whether it is based on air temperatures only, or on air temperatures over land and SSTs over the ocean. Taking coverage bias into account also has an impact. Therefore, part of the reason that the models show more warming than has been observed might simply be that it’s not a true apples-to-apples comparison.
ATTP, there is an apples to apples comparison that does not show the models in a good light, and which avoids all the many problems with surface temperature records: UHI and poor siting, air/SST gemisch, poor coverage, Karlization, kriging over discontinuous surface types, to name a few.
That is to compare CMIP5 troposphere predictions to satellite and balloon observations. And 3 sat data sets almost perfectly match 4 balloon data sets since 1979 despite being independently derived by two different means. The troposphere is where the GHE operates; if that’s wrong then modeled surface temps cannot be right. Despite having been parameterized to best hindcast from 2006 back 30 years, so that <1/3 of the period (since sat obs began in 1979) is true out of sample model projection, the CMIP5 global troposphere mean is running hot by 2.5x. The tropical (+/- 20 latitude) troposphere mean is running 3x hot. That is a big model fail, which is why Gavin Schmidt tried (and failed, see climate audit) to vociferously object to simple depictions of this easy to comprehend fail. See Christy's Feb. 2016 Congressional testimony for the specifics supporting these statements. Which in turn suggests no faith should be placed in model TCR.
“and which avoids all the many problems with surface temperature records: UHI and poor siting, air/SST gemisch, poor coverage, Karlization, kriging over discontinuous surface types, to name a few.”
1. If you take NCDC and use RAW DATA ONLY, that is, DON’T DO the Karlization, and then compute TCR per Nic Lewis, TCR goes UP!!!
I will keep this easy for RUD
1. Compute TCR and ECS per Nic Lewis, use his base periods
and final periods
2. Look at ALL versions of global temperature. NCDC, GISS, HADCRUT, BerkeleyEarth.
3. Look at a) Raw and b) Adjusted data… even various versions of adjustments.
Your answer will be essentially the same for all versions; the differences are minor (5-15%), none of which changes what we should do from a policy perspective.
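The sensitivity analysis described above can be sketched roughly as follows. The ΔT and ΔF values are placeholders, not the actual Lewis & Curry or dataset numbers; only the energy-budget form TCR = F₂ₓ·ΔT/ΔF is from the method under discussion.

```python
# Minimal sketch: an energy-budget TCR (TCR = F_2x * dT / dF) computed for
# several temperature series, to see how much the choice of series matters.
F_2XCO2 = 3.71  # W/m^2 forcing per CO2 doubling (standard value)
DELTA_F = 2.0   # W/m^2 forcing change, base to final period (assumed)

# Illustrative warming (K) between base and final periods for each series
delta_t = {"HadCRUT4": 0.72, "NCDC": 0.76, "GISS": 0.78, "BerkeleyEarth": 0.80}

tcr = {name: F_2XCO2 * dt / DELTA_F for name, dt in delta_t.items()}
for name, value in sorted(tcr.items()):
    print(f"{name:14s} TCR ~ {value:.2f} K")

# Spread across versions, as a fraction of the lowest estimate
spread = (max(tcr.values()) - min(tcr.values())) / min(tcr.values())
print(f"spread ~ {spread:.0%}")
```

With these placeholder inputs the spread across series comes out around 11%, i.e., inside the 5-15% range the comment mentions; the real exercise would substitute each dataset’s actual base-to-final warming.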
Why would I do that? Isn’t the definition of insanity doing the same thing and expecting a different result?
But you fail to examine the two natural time scales that can be calculated with the available data with the least amount of stepping on the data.
And that still ignores that modern warming is explained from multiple separate regional changes.
Ristvan makes a good point I think. One would hope that after 50 years of development, that GCM’s would do a good job not just at the surface for an integral quantity like global temperature anomaly, but for the bulk atmosphere as well. There has been continuous wrangling about satellite data and balloon data and the hot spot for well nigh 25 years. One would think that if there was a data problem, it would have been found by now. And then there is polar amplification. It seems the models tell a little bit of an inaccurate story for both polar regions. As Nic points out, this difference between SST and surface air temperature seems to not be really very well supported either.
I often wonder if there is something about the planetary boundary layer and evaporation that is still not adequately represented. It is very complex with local topography playing a key role. And the grid is totally inadequate to resolve the profile near the surface.
> I will keep this easy […]
Even easier: ask Sir Rud to provide one and only one apples-to-apples comparison such as advertised.
I don’t always advertise an apples-to-apples comparison, but when I do, I don’t armwave to a shopping list and let otters figure out the combinatorics. Unless we’re into hipster science:
In addition to what I said previously, it is probably true that surface temperature data has undergone a very thorough vetting process, so one would think the problems would be resolved by now, at least where there is good coverage in first-world countries.
The post says the models were not too warm. They matched the data where it was observed, so that means you need to pay more attention to them.
“The post says the models were not too warm. They matched the data where it was observed, so that means you need to pay more attention to them.”
So all the models which came previously and influenced the work of the IPCC were garbage?
Good to know, thanks!
It is a positive thing that the models match the observations. Take it and move on.
JimD, a similar mechanism would also explain GCMs’ different rates of warming of SST and air temperatures. If the boundary layer is under-resolved, the temperature profile would be washed out, resulting in the surface temperature being too high, i.e., too close to the temperature outside the boundary layer. Normally, a turbulent BL requires at least 20 to 30 grid cells to resolve with standard eddy viscosity models such as those used in GCMs.
I think the last time I looked, a couple of years ago, CMIP5 models still had the kludge that allows a supersaturation of water vapor at the SST-air boundary, or IIRC they didn’t get feedback and ran cold.
Your result is spurious.
Go read skeptics on why your method fails.
Or pay me 500 per hour and I will show you.
Now, why look at all versions?
Simple: to show that choices in temperature series don’t change the answer in a policy-relevant manner.
It’s called sensitivity analysis.
Changing your assumptions to show they don’t matter.
Matching models to observations, or vice versa, the
get out card – heat’s hiding in the ocean deeps.
Sink me, how did it gets there and why it don’t show’s
another question when there when short wave light
penetrates only the first hundred metres depth and
long wave back radiation a few cm only. Contra land,
in oceans evaporation, not radiation rules.
+10 for your link Beth!
Not much science but the logic seems spot on.
Greetings Peter, good to hear from you.
Re the late John Daly, resisting climate drama (hysteria?),
investigating sea level rise at the Isle of the Dead historic
mean sea gauge. Measured in 1841, it still shows no discernible rise.
The global warming establishment is the pharaonic construction of Western academia to serve the political interests of the Left. The long hiatus in global warming is already shaking the global warming house of cards; and, nature is dictating what happens next which may be global cooling not warming –e.g., has global warming paused or is the globe at the top of a climate change rollercoaster?
The models also say there should be about a 0.02 C/decade decrease in the diurnal temperature range, and there has been a decrease of 0.07 C/decade because minimums are rising much faster than maximums. Since it is probably the oceans causing most of that change, the proper thing to do would be to use the increase in the maximum temperatures and use the models to determine the increase in the minimum temperatures. It appears we have warmed a lot less than expected.
The right thing to do is to stop using the stupid diurnal range, it doesn’t represent a physical value that has any use in understanding what is happening during the day/night cycle.
You get a portion of the cooling from yesterday’s thermal cycle and part of the cooling from today’s thermal cycle, but you don’t get a complete cooling period, so you can’t directly compare warming to cooling.
I am still astonished that anyone who is supposed to be exploring the interaction between forcing and response would use this for anything.
Nic Lewis — I am just a layman on Climate Science and have a “Big Picture” question:
I’ve read several articles saying that, looking at the “Big Picture”, TCR estimates from scientists like Gavin Schmidt versus Lewis/Curry are not all that different timewise:
For example, that if we assume a CO2 emissions rate in a BAU scenario — there is only about 30 years difference between Dr. Schmidt’s conservative estimate of TCR versus Lewis/Curry.
Could you comment on this? Thank you.
In terms of how much longer CO2 emissions could keep growing at recent rates while still meeting some goal for the maximum rise in global temperature if TCR were ~1.35 K rather than ~2 K (the effective TCR of CMIP5 models), I wouldn’t argue with a figure of 30 years. Indeed, I would expect a rather shorter time.
However, I’m not sure that is the best way to think of it. I’m not a policy expert, but, if I understand correctly, the conventional way of thinking about it is as follows. If TCR is lower then the social cost of carbon will be lower, since a given amount of emissions will produce less warming and hence cause less damage. In turn that would mean that, beyond a certain point, more costly ways of curbing CO2 emissions would no longer be worthwhile undertaking, at least not at first. However, less costly ways of curbing emissions than that would still be worthwhile. So, emission reductions should simply be more gradual, giving more time for technological developments to bring down the cost of reducing those emissions for which it is currently high, rather than BAU continuing for a further period before serious emission reductions are made.
I think formally you can take your distribution for the TCR and calculate some range for the SCC. However, your 5-95% range is something like 0.9 – 2.5C. The IPCC’s likely range (which is 17-83%, I think) is 1 – 2.5C. Hence, your range suggests that it is about 3 times less likely than the IPCC suggests that the TCR will be above 2.5C.
So, sure, you could use your TCR distribution to calculate an SCC that is lower than you would get with the IPCC range. However, there is only one reality and if that reality is one in which the TCR does turn out to be high, we might prefer to have acted sooner, rather than later.
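The “about 3 times less likely” figure can be checked with a quick calculation. The lognormal shape below is an assumption for illustration; only the two quoted ranges come from the comment.

```python
# Fit a lognormal to each quoted TCR range and compare upper-tail probabilities.
from math import log
from statistics import NormalDist

def lognormal_from_quantiles(lo, hi, p_lo, p_hi):
    """Return (mu, sigma) of a lognormal matching two quantiles exactly."""
    z_lo, z_hi = NormalDist().inv_cdf(p_lo), NormalDist().inv_cdf(p_hi)
    sigma = (log(hi) - log(lo)) / (z_hi - z_lo)
    mu = log(lo) - sigma * z_lo
    return mu, sigma

def prob_above(x, mu, sigma):
    """P(X > x) for a lognormal with parameters (mu, sigma)."""
    return 1.0 - NormalDist(mu, sigma).cdf(log(x))

# Lewis: 5-95% range 0.9-2.5 K; IPCC: 17-83% ("likely") range 1.0-2.5 K
mu_l, s_l = lognormal_from_quantiles(0.9, 2.5, 0.05, 0.95)
mu_i, s_i = lognormal_from_quantiles(1.0, 2.5, 0.17, 0.83)

p_lewis = prob_above(2.5, mu_l, s_l)  # 5% by construction
p_ipcc = prob_above(2.5, mu_i, s_i)   # 17% by construction
print(f"P(TCR > 2.5 K): Lewis ~ {p_lewis:.0%}, IPCC ~ {p_ipcc:.0%}, "
      f"ratio ~ {p_ipcc / p_lewis:.1f}x")
```

Since 2.5 K is the 95th percentile of one range and the 83rd of the other, the tail probabilities are 5% and 17% regardless of the assumed shape, giving a ratio of about 3.4, consistent with the comment.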
@ ATTP – You need a lot more information to calculate the SCC than TCR.
Nic Lewis — Thanks for your reply. If we couple two things together, we have a pretty positive AGW message: (1) TCR per Lewis/Curry; (2) Dr. Molina’s and Dr. Curry’s favorable opinion of no-regrets “Fast Mitigation” (i.e., reducing emissions of smog, black carbon, methane, HFCs).
Although I thought it was obvious, I was suggesting that you could do an SCC calculation using the IPCC TCR range plus everything else that you would need to do such a calculation, and then repeating the same calculation with the only change being that you use Nic’s TCR range.
@ ATTP – Unfortunately, there is no well accepted way (yet) on the methodology to do such calculations nor is there agreement on the damage function, mitigation costs, etc. Also, most integrated assessment models don’t take TCR as an input.
Maybe you could take the DICE model and change the ECS parameter slightly and see the results. But the DICE model doesn’t use an expected social welfare maximization approach so doesn’t properly take uncertainty into account.
“However, there is only one reality and if that reality is one in which the TCR does turn out to be high, we might prefer to have acted sooner, rather than later.”
Agreed. It is therefore important to seek a much more precise estimate of TCR than has to date proved possible. In the meantime, if the damage function used is correct and the discount rate is generally agreed (both big ‘ifs’) then AFAIK integrated assessment models should in theory be able to arrive at an optimum SCC given a probability distribution for the relationship between an emission pathway and a warming profile. The primary input into that will be a probability density for TCR, although other probabilistic estimates will be required for such things as carbon cycle parameters and the current radiative imbalance.
Such an approach seems to me more rational than planning on a worst case TCR value regardless of cost or of what damage would result if the worst case was true.
I realise that there is no agreed way. That doesn’t mean that you can’t repeat a calculation using two different TCR estimates. Also, even if we can’t agree on the actual calculation, we can probably agree that a lower TCR will produce a lower SCC.
Only if you can be confident that the more precise estimate that you get is actually also more accurate.
I wasn’t actually suggesting planning on a worst case TCR value. I was really just suggesting that a reduction by a factor of a few in the probability of a high TCR value shouldn’t really be a reason to breathe a sigh of relief. As it is, if we take the IPCC’s values, then a 50% chance of remaining below 2C will require emitting no more than roughly another 500GtC. That’s 50 years at current emissions. If we use your values, we can increase that to say 750GtC, so about 75 years at current emissions (numbers approximate). Either way, it would seem that achieving this will require starting to reduce emissions pretty soon.
Of course, you could change the target, or choose not to have one, but it’s hard to see how whether you consider your TCR values, or the IPCC’s, that we wouldn’t conclude that starting to reduce emissions soon would be a sensible option (and, of course, reducing emissions doesn’t mean not using fossil fuels, it just means reducing the net emission of the CO2 into the atmosphere).
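The budget arithmetic in the comment above, made explicit. The ~10 GtC/yr emissions rate is an assumption, chosen to be consistent with the comment’s “500 GtC, that’s 50 years at current emissions”.

```python
# Years until a cumulative carbon budget is exhausted at a constant rate.
CURRENT_EMISSIONS_GTC_PER_YR = 10.0  # assumed, consistent with "500 GtC ~ 50 years"

def years_remaining(budget_gtc, rate=CURRENT_EMISSIONS_GTC_PER_YR):
    """Years of emissions headroom for a given budget (GtC) at a given rate (GtC/yr)."""
    return budget_gtc / rate

print(years_remaining(500))  # IPCC-range budget -> 50.0 years at current rates
print(years_remaining(750))  # Lewis-range budget -> 75.0 years at current rates
```

The point of the comparison is that the two TCR estimates move the date by decades but do not remove the eventual constraint.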
@ ATTP – “As it is, if we take the IPCC’s values, then a 50% chance of remaining below 2C will require emitting no more than roughly another 500GtC… Either way, it would seem that achieving this will require starting to reduce emissions pretty soon.”
All this requires that the 2C target is actually desirable. Based on the results of integrated assessment models, an optimal emissions path is one where warming peaks at around 3-4 C above pre-industrial levels around 2120.
I wasn’t arguing for the target, as I was hoping you might get from the end of my comment. However, how do you conclude that IAMs suggest that optimal warming would be 3-4C above pre-industrial?
@ ATTP – “However, how do you conclude that IAMs suggest that optimal warming would be 3-4C above pre-industrial?”
Well, take the DICE model results, for example:
Page 31. “Opt” is the optimal path.
I think the problem here is treating this as a static analysis. The information we have changes over time, and we can reasonably make forecasts about how that dynamic is progressing, in the same way as we make forecasts of SCC. Even without such forecasts, and particularly in respect of the extremes, we have real options. We need to consider the CBA of waiting 10 years to see whether the probability of being >2.5C is 5% or 17%. If it is, then the cost of the 10 years not addressing this directly may well be insignificant compared with the cost of going all out today to manage this risk if it is not.
But there isn’t any evidence of a high sensitivity in the measurements.
Question for Jim D: You said “In the modern world you have model experiments that, like all experiments, produce data too.” Could you define the term ‘model experiment[s].’
It’s not surprising to see differing ideas about sensitivity because:
1. equilibrium is a fuzzy term and
2. there are so many fuzzy ways of making up a global average number:
But it’s all probably a meaningless exercise since even if one knew GMST, most of the imaginary goblins of global warming aren’t correlated with GMST. So, neither sensitivity nor even temperature are particularly useful metrics.
TE, I agree that the only meaningful climate impacts would be regional. I also agree climate models do not regionally downscale, so they say nothing about potentially meaningful impacts. I disagree that observational TCR and ECS (effective) are not meaningful and important. The likely values (~1.33 and ~1.5-1.8) teach three things. First, the faux 2C goal is not an issue in this century, if ever. Second, because the rate of change is slow, adaptation, not mitigation, is the only sensible policy for the next few decades, by which time more will be known about important issues like attribution and the length of the pause. Third, the models yet again do not agree with observation, so relying on them is a fool’s errand.
All three sound bites are IMO useful on the sciency side of this political debate.
TE “So, neither sensitivity nor even temperature are particularly useful metrics.”
Precisely. All because of The One Trace Gas.
The now receding El Nino demonstrated that a purely natural event can send the global temperature up and down more than half a degree in a matter of months – something said gas might do over decades. Not only that, it caused sudden, massive changes in precipitation patterns over about half the globe – and that’s what really counts. We might well live with a degree or two warmer over a century, but sudden droughts and floods have wiped out peoples in the past and will do so again in the future.
Were it not for The One Trace Gas this might be studied objectively, but here we have the illustrious climate community, conjecturing about discrepancies of a few tenths of a degree between faulty observations and fanciful computer models – like a cabal of soothsayers trying to divine an incongruity between the alignment of the stars and the entrails of a goat.
since the highest damages come from Sea level rise..
you might want to google Steric..
na.. just keep posting crap.
Scientists actually listen to Nic
you and rud?
well you get no traction.. which is sad because you are not dumb.. just unfocused.
Man figured out the answer to sea level rise a long time ago.
Any Dutch around here?
“since the highest damages come from Sea level rise..”
I was under the impression that the highest damages were from productivity losses and that damages from sea level rise are relatively small (<$1 per metric ton of CO2). Do you have any information to back up this claim?
Any Dutch around here?
Long ago the EPA did a study on ADAPTING to sea level increase..
guess where I am going with this.
There are three reactions to Sea level rise
A) deny the science.. nobody listens to you.
B) accept the science and argue for mitigation
C) accept the science and question mitigation as a panacea
I went back to the beach I frequented 40 years ago.
Unlike many things which have changed since then, the piers, the roads and beach houses are all still there, pretty much as I left them.
A) deny the science.. nobody listens to you.
B) accept the science and argue for mitigation
C) accept the science and question mitigation as a panacea
D) hyperventilate… many will listen, but mostly those predisposed to panic over irrelevancy
since the highest damages come from Sea level rise..
A one meter rise today would be catastrophic.
A one meter rise over 300 years (about the current rate) would be irrelevant.
I went back to the beach I frequented 40 years ago.
Unlike many things which have changed since then, the piers, the roads and beach houses are all still there, pretty much as I left them.
Why would anybody pay attention to you? Sea level rise is not uniform. What happened at your beach is representative of not much at all. Meanwhile, the Dutch are spending money on sea level rise. Yeah, they overreacted; they forgot to send the dutch boy back to his beach.
Why would anybody pay attention to you? Sea level rise is not uniform. What happened at your beach is representative of not much at all.
Who’m I gonna believe? You? or my own eyes?
Sorry, I’ll believe my observations more than the panicked any day.
Meanwhile, the Dutch are spending money on sea level rise.
Always a good move when much of your country is beneath sea level to begin with.
And they’d have to even if sea level were unrealistically static because the Netherlands are sinking:
Parts of the Netherlands are sinking even faster than sea level is rising:
Of course the Dutch will mitigate future sea level rise. A third of their country is below sea level. However, they have been mitigating sea level changes, severe storms and land subsidence for two thousand years. Why would they stop now?
The Left wants society to change and, as Daniel Botkin observed, “The only way to get our society to truly change is to frighten people.” That’s where sea level rise comes in: the Left wants you to be frightened, and drowning in your living room seems like a good idea to them, as spreading fears about the demise of the polar bears, the calving of icebergs and the disappearance of glaciers just ain’t cutting it for them since we learned there has been no global warming for going on two decades.
Do The Netherlands fall under A, B, or C?
Finally, sea level rise will continue for centuries beyond 2100, and sea level rise over the 22nd century is projected to exceed that of the 21st century (Jevrejeva et al. 2012). This long-term aspect should be considered in adaptation plans. …
That’s Professor Curry’s SL scientist.
You left out option D.
D) – assign the problem to engineers.
Some years ago, it was discovered that the volume of the ocean basins was continuously changing. Continental drift is unceasing and three dimensional.
Maybe you are unaware that this occurs. Your insistence that heat from the Sun is somehow transferred throughout the oceans, expanding them while the rest of the Earth remains static, is, quite simply, bizarre.
Foolish Warmists like Al Gore, might believe that a body exposed to a heat source just keeps absorbing heat ad infinitum. This might account for him believing that the Earth had absorbed so much heat from the Sun that the Earth’s interior temperature was millions of degrees. Fool!
Foolish Warmists believe that an object on the Earth’s surface will steadily increase in temperature, year on year, even if the Sun’s output remains constant. Foolish Warmists!
After four and a half billion years of absorbing sunlight, the Earth has actually cooled. Warming due to CO2? Only in the fevered imaginations of foolish Warmists!
Maybe you could try learning some science. A science degree at a reputable university might help. Richard Feynman’s physics lectures are available at no cost now, and are generally regarded as quite good.
I wish you well if you decide to pursue some scientific education.
Surprised no one pointed out to SM that steric SL rise is not the scientific or policy problem. The mass contribution from melting ice caps (which also relates to the 2C target) is the big risk, and the big unknown.
“The paper REA16 primarily cite to support faster warming of tas over open water than SST,[vi] which is also model-based, attributes this effect to the thermal inertia of the ocean causing a lag in ocean warming. This argument appears to be unsound.”
This inertia situation is something that needs a bit more investigation and clarity.
The thermal inertia lag argument makes no sense here. To be fair, Richardson et al don’t themselves promote it, instead proffering the sounder surface energy balance argument advanced by the second paper they cite (Ramanathan 1981). But it is strange that they first cite a paper that makes the thermal lag argument. It is, however, a minor point.
I agree it is a minor point as far as this paper is concerned, but thermal inertia of the oceans should depend on the total heat capacity of the oceans not a slab simplification. Rosenthal et al. 2013 indicates that are long time scales that need to be considered plus there was a recent paper that indicated southern hemisphere wind patterns are causing a lag in SH warming. So when I see mention of thermal inertia, I know it can be read as long term persistence or dismissed as a non-factor. It would be nice to clarify exactly what constitutes thermal inertia in climate model land if the term is going to be tossed out there.
Boy am I enjoying this comment thread, with authors on both sides making point and counter-point. +1000
ya Flynn and Springer and Lang are doing a great job challenging the Lukewarmer Lewis..
Here is a clue for skeptics. If you want to be taken seriously and actually exert influence in the debate….
follow Nic Lewis’ example.
ya do science.
Thanks for your support, however I must correct you.
I am not challenging anybody. Maybe you missed the fact that until somebody provides a falsifiable hypothesis to support the contention that CO2 causes warming, I put it in the same category as phlogiston or caloric. Just mild amusement at misguided assumptions.
Facts are facts. If foolish Warmists claim CO2 has planet heating properties, then bully for them! If the Earth was created molten, the foolish Warmists may face difficulty formulating a falsifiable hypothesis which includes this fact.
You may be a foolish Warmist, or not. Your claims to be a scientist, and your apparent fixation with the reduction of CO2 levels in the atmosphere, certainly support the view that you are somewhat gullible, and easily led.
Your beliefs are your beliefs. Attempting to coerce or bully people into accepting your beliefs as fact is symptomatic of fanatics, cultists, and foolish Warmists.
Mike Flynn wrote: “I am not challenging anybody. Maybe you missed the fact that until somebody provides a falsifiable hypothesis to support the contention that CO2 causes warming, I put it in the same category as phlogiston or caloric. Just mild amusement at misguided assumptions.”
The Earth is not the best place to learn about what CO2 does in our atmosphere. Aside from the inconvenient fact that we have only one planet, the changes are too slow (ca 0.2 degC/decade) to measure accurately, particularly SST – which has been measured by a series of changing techniques. Then we have El Niños and La Niñas – unforced variability – that can change GMST by 0.2 degC/month! And other forms of unforced variability such as the AMO and PDO, which change too slowly to have been properly characterized. Then we come to aerosols and the unknowns of the indirect aerosol effect.
Scientists test their falsifiable hypotheses in the laboratory under the simplest, most carefully controlled conditions possible. Then they move on to more complicated systems.
If you want to know what CO2 does to infrared radiation passing through the atmosphere, study it in the laboratory, where precise, reproducible measurements are possible, at different temperatures and pressures, diluted with different gases, etc. The interactions between GHGs and radiation have been studied in this way for nearly a century.
Throw in conservation of energy and the emission of thermal IR and you have pretty good evidence that CO2 causes warming. The only question is: how much? That is what Nic and others are trying to find out. That does require dealing with all of the above difficulties, which is why the IPCC’s confidence interval for ECS hasn’t shrunk in more than a quarter-century.
Unfortunately, the fact that CO2 can be heated provides precisely no support for the claim that CO2 heats anything by itself. You’ll no doubt notice that the foolish Warmist heating effect of CO2 is only noted where there is a source of heat capable of heating an object to above ambient temperatures.
The heating effect of CO2 does not occur at night, indoors, in enclosures, in the shade, when it is cloudy, raining, snowing, when it is cold, and so on.
It is the effect that has no effect.
The Earth has cooled for four and a half billion years, it would seem. Nothing seems to have prevented this natural consequence of physical principles.
In relation to the opacity of dry, CO2 free air, to certain wavelengths of infrared light, compared with pure CO2, Tyndall found a ratio of some 1:2000 (a little less, from memory, but let’s be generous).
However, 400 ppm is 4 in 10,000. Proportionally, 4 units of CO2 will absorb 8,000 equivalent units of radiation (4 × 2000). The balance of the air (comprising mainly O2 and N2) will absorb 9,996 equivalent units of radiation at the same wavelength, or more than that absorbed by the CO2, even though the air is thought of as largely transparent to infrared light.
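The back-of-envelope arithmetic above can be laid out explicitly (taking the commenter's claimed 1:2000 Tyndall ratio at face value; as a later reply notes, modern spectroscopy supersedes those figures):

```python
# Sketch of the proportional-absorption arithmetic in the comment above.
# The 1:2000 ratio attributed to Tyndall is the comment's assumption,
# not a modern spectroscopic value.
ratio = 2000                    # assumed absorption, pure CO2 vs CO2-free air
co2_parts = 4                   # 400 ppm expressed as parts per 10,000
air_parts = 10_000 - co2_parts  # the remaining, mostly N2/O2, parts
co2_absorption = co2_parts * ratio  # 8000 equivalent units
air_absorption = air_parts * 1      # 9996 equivalent units
print(co2_absorption, air_absorption)  # 8000 9996
```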
Very roughly, some 23% of the insolation reaching the atmosphere from the Sun does not reach the ground. However much foolish Warmists would like to imagine otherwise, not all of this energy is absorbed by GHGs.
Even foolish Warmists with PhDs seem to be in denial of physics.
The Earth has cooled, and a PhD is not necessary to understand why.
If you believe I have erred in fact, I would appreciate correction. I may have misremembered Tyndall’s figures, but the principle remains valid.
Mike Flynn wrote: “If you believe I have erred in fact, I would appreciate correction.”
The full explanation is quite complicated, but I’ll try.
Mike wrote: “Unfortunately, the fact that CO2 can be heated, provides precisely no support for the claim that CO2 heats anything by itself. You’ll no doubt notice that the foolish Warmist heating effect of CO2 is only noted where there is a source of heat capable of heating an object to above ambient temperatures.”
“The heating effect of CO2 does not occur at night, indoors, in enclosures, in the shade, when it is cloudy, raining, snowing, when it is cold, and so on. It is the effect that has no effect.”
Frank replies: Everything above absolute zero emits radiation, though a few things (like N2 and O2) emit relatively little compared with blackbodies. At the temperature relevant to our climate, we call that radiation thermal infrared (or LWR). In thermodynamics (but not everyday conversation), heat is the NET flux of energy between two objects and heat always flows from hot to cold. Therefore, thermal radiation transfers energy – but not heat – from cooler objects to warmer objects here on Earth. To determine why something is getting warmer or cooler (is gaining or losing internal energy), one needs to consider all of the routes by which that object is gaining or losing energy – radiation, thermal collisions, evaporation, etc. With everything on earth radiating thermal IR at everything else on the planet (except nitrogen and oxygen gases), and molecules constantly colliding with each other (including N2 and O2), and with heat being transferred by convection, it doesn’t make any sense to discuss what things heat and are heated specifically by CO2. AOGCMs try to do so, but we don’t need to rely on their accuracy to understand why CO2 changes climate.
To avoid these complications, take advantage of the fact that energy can only enter the planet via radiation (and that the surface is well insulated from the heat released by radioactive decay that keeps the mantle and core hot). We use the Schwarzschild equation* to calculate the change in OLR that would follow any instantaneous change in our atmosphere – a doubling of CO2 for example:
dI = emission – absorption
dI = n*o*B(lambda,T)*ds – n*o*I*ds
The change in radiation (technically spectral intensity, dI) at a particular wavelength (lambda) as it travels an incremental distance (ds) past a particular GHG in the atmosphere depends on the density of GHG molecules (n), their absorption cross-section (o) measured in the laboratory for the GHG at that wavelength, the spectral intensity of radiation entering the ds increment (I), and the Planck function (which depends on temperature and wavelength). Changes in OLR or DLR are calculated by numerically integrating this equation along a path from the surface to space (or space to the surface) and then integrating over all relevant wavelengths. Since numerical integration is performed over ds increments short enough that incoming radiation (I), n, o and T don’t change appreciably, this methodology correctly handles the problem of “saturation” and the problem of two GHGs competing to absorb the same photons. Programs such as MODTRAN, built on the HITRAN spectroscopic database, automate these calculations. They are not AOGCMs.
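A toy single-wavelength integration in this spirit can be sketched as follows. All the absorber numbers here (cross-section, density profile, lapse rate) are illustrative placeholders, not real HITRAN line data, so only the qualitative behaviour matters:

```python
import math

# Toy integration in the spirit of the Schwarzschild equation
# dI = n*o*B(lambda,T)*ds - n*o*I*ds, marched upward through a
# model atmosphere at a single wavelength.

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(lam, T):
    """Planck spectral radiance B(lambda, T), W/(sr m^3)."""
    return (2*H*C**2 / lam**5) / (math.exp(H*C/(lam*KB*T)) - 1.0)

def upwelling_intensity(lam, sigma, n0, scale_height, T_surf, lapse,
                        top=20e3, ds=100.0):
    """March I upward layer by layer using the exact isothermal-layer
    solution I -> I*exp(-dtau) + B*(1 - exp(-dtau))."""
    I = planck(lam, T_surf)                 # surface emits as a blackbody
    z = 0.0
    while z < top:
        T = max(T_surf - lapse*z, 200.0)    # linear lapse rate, crudely capped
        n = n0 * math.exp(-z/scale_height)  # exponential density fall-off
        dtau = n * sigma * ds               # optical depth of this layer
        I = I*math.exp(-dtau) + planck(lam, T)*(1.0 - math.exp(-dtau))
        z += ds
    return I

# Illustrative run at 15 micron with made-up absorber parameters:
olr = upwelling_intensity(15e-6, sigma=1e-24, n0=1e21, scale_height=8e3,
                          T_surf=288.0, lapse=6.5e-3)
surface = planck(15e-6, 288.0)
print(olr < surface)  # True: what escapes was emitted by colder, higher layers
```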
The Schwarzschild eqn tells us that TOA OLR would instantly fall about 4 W/m2 if CO2 were instantly doubled and before anything else changed. When this radiative imbalance is converted to internal energy, conservation of energy demands that our atmosphere instantly BEGIN to warm. That warming will persist until the Earth emits (or reflects) an additional 4 W/m2 to space – until the radiative imbalance no longer exists. This approach avoids the difficulty of calculating all of the details about how, how much, where and when CO2 will warm our climate. Therefore, we can be sure that increasing CO2 will warm our climate, but not how much, how fast, or where. Since most photons escaping to space are emitted by the atmosphere, this approach doesn’t prove that the surface will warm – a problem alarmists like to ignore – but I will come back to it later.
The predictions of the Schwarzschild eqn have been validated by looking at the spectra of OLR reaching satellites in space and DLR reaching the surface of the Earth at locations with differing humidity and temperature. (The spectrum of OLR observed from space has changed over the decades from rising CO2, but 4 W/m2 per doubling is a calculated – not measured – value.)
*If the Schwarzschild eqn looks unfamiliar, consider two limiting situations: In the limit where I is much greater than B(lambda,T) – for example in the laboratory with a 2000 K filament as a light source – the first term can be neglected and integration produces Beer’s Law. In the limit where radiation has passed far enough through a homogeneous gas that absorption has come into equilibrium with emission and dI = 0, I = B(lambda,T) and the radiation has blackbody spectral intensity. Planck’s Law was derived assuming radiation in equilibrium with “quantized oscillators”. Blackbody radiation is emitted by objects whose internal radiation is in equilibrium with their temperature. (Emissivity less than unity arises from reflection at a surface.) The Schwarzschild equation is needed when such an equilibrium hasn’t been reached. This is the case in the atmosphere at some altitudes and wavelengths. Alarmists skip over this problem by postulating “optically thick” slabs of atmosphere that don’t really exist on our planet.
Using the Schwarzschild eqn requires that we know the temperature, density, and composition of the atmosphere at all altitudes. However, because temperature in the troposphere is controlled mostly by convection, we can only use such radiative transfer calculations to tell us how radiation would instantly change – not how temperature will change as a result of this change in radiation. For temperature change, you need an AOGCM (and they have numerous problems).
The relationship between the surface temperature (our climate) and the temperature of the atmosphere (which varies with altitude) is determined by buoyancy-driven convection and the adiabatic lapse rate (which depends on humidity). When the lapse rate is unstable (i.e. the upper atmosphere is too cold compared with the lower atmosphere which is heated by the surface which is heated by SWR), convection moves heat to the upper atmosphere. When increased CO2 causes less heat to be radiated to space from the atmosphere and the atmosphere to warm, less heat will need to be convected upward to maintain a stable lapse rate. By convention, we begin by assuming that the lapse rate won’t change and call the correction for the decrease in lapse rate with rising humidity “lapse-rate feedback”. In principle, the upper atmosphere could warm enough to emit or reflect an additional 4 W/m2 of radiation to space at the same time surface temperature is falling. Even the most skeptical climate scientists accept calculations showing that increasing water vapor has a greater warming influence as a GHG than a SURFACE cooling influence via a decreased lapse rate. How much absolute humidity will change (and where) can only be guessed via AOGCMs, but as long as absolute humidity doesn’t fall, increasing CO2 will cause warming at the surface.
All of the debate about climate sensitivity is equivalent to asking how much the surface of the planet needs to warm for the atmosphere (and to a small extent the surface) to emit or reflect an additional 4 W/m2 of radiation to space. A simple blackbody at 255 K needs to warm 1 K to emit an additional 3.7 W/m2 of thermal infrared, so this is the simplest way to roughly quantify the warming expected for a doubling of CO2.
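That zero-feedback figure is easy to reproduce by differentiating the Stefan-Boltzmann law F = sigma*T^4, which gives dT = dF/(4*sigma*T^3):

```python
# Zero-feedback check of "a 255 K blackbody warms ~1 K to emit an extra
# 3.7 W/m2", from the derivative of the Stefan-Boltzmann law.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 255.0         # effective emission temperature, K
dF = 3.7          # canonical forcing for doubled CO2, W/m2
dT = dF / (4 * sigma * T**3)
print(round(dT, 2))  # 0.98 K, i.e. roughly 1 K
```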
The Earth is not a simple blackbody. A warmer Earth will have more water vapor (a potent GHG) in its atmosphere – and different clouds and a lower lapse rate – and probably less snow and ice on its surface reflecting SWR. However, all of these changes (“feedbacks”) begin with the reduction in OLR to space caused by increased CO2 (“forcing”) and therefore can’t reduce warming to zero. In principle, negative cloud feedbacks could reduce warming for doubled CO2 to 0.1 K or less, but that would mean that the reflection of SWR and emission of LWR by clouds is extraordinarily sensitive to a small change in surface temperature. In general, there are fewer and higher clouds during the summer than the winter, so I suspect cloud feedback is positive. However, Lindzen has shown that net feedback is negative in the tropics when the temperature changes rapidly in response to El Niño. If applicable globally, his negative feedback could reduce the warming from doubled CO2 to 0.5 K. In summary, feedbacks amplify or suppress warming – they can’t eliminate it.
Mike also wrote: “The Earth has cooled for four and a half billion years, it would seem. Nothing seems to have prevented this natural consequence of physical principles.”
This is wrong. Lord Kelvin calculated that the inside of the Earth would have solidified in 10-100 million years. Therefore the surface of the Earth must have cooled in a few million years and has remained in thermal equilibrium with incoming post-albedo solar radiation and OLR for the past 4 billion years.
Mike also wrote: “In relation to the opacity of dry, CO2 free air, to certain wavelengths of infrared light, compared with pure CO2, Tyndall found a ratio of some 1:2000 (a little less, from memory, but let’s be generous). However, 400 ppm is 4 in 10,000. Proportionally, 4 units of CO2 will absorb 8000 units of radiation – 4 x 2000. The balance of the air (comprising mainly O2 and N2) will absorb 9,960 equivalent units of radiation, or more than that absorbed by the CO2 of the same wavelength, even though the air is thought of as largely transparent to infrared light.”
Tyndall’s crude measurements weren’t very accurate and are certainly obsolete today. Think about the difficulties he faced in performing his experiments in 1859. Modern spectrophotometers are far more accurate. Tyndall’s nitrogen and oxygen were probably contaminated with traces of potent GHGs. It is absurd to rely on measurements made nearly two centuries ago. To help aeronautical engineers in the 1960s make best use of the published data about GHGs, the HITRAN database and programs such as MODTRAN were established to automate calculations using the Schwarzschild eqn, and they are now online. Unfortunately, I can’t quickly figure out how to use them to calculate the relative transmittance of thermal IR through nitrogen, oxygen and carbon dioxide.
Mike also wrote: “Very roughly, some 23% of the insolation reaching the atmosphere from the Sun does not reach the ground. As much as foolish Warmists would like to imagine, not all of this energy is absorbed by GHGs.”
Actually, about 30% of incoming SWR is reflected back to space, mostly by clouds, some at the surface and a little by aerosols. Of the remaining 70%, about 2/3rds reaches the ground. Total, about 47%. The 1/3 of post albedo SWR absorbed by the atmosphere on its way to the surface is mostly absorbed by clouds and water vapor, not nitrogen and oxygen.
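Frank's percentages are just the product of two fractions, which is easy to verify:

```python
# The comment's arithmetic: ~30% of incoming SWR is reflected, and about
# 2/3 of the remaining 70% reaches the ground.
absorbed_fraction = 1.0 - 0.30           # 70% survives the albedo
surface_fraction = absorbed_fraction * (2/3)
print(round(surface_fraction * 100))     # 47, i.e. about 47% reaches the ground
```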
It’s heartbreaking to see franktoo has wasted a valuable amount of time on an internet gimmick called Mike Flynn.
Good read… nicely done explanations.
JCH: I thought Michael’s question was thought provoking and deserved a reliable answer, but didn’t know it would take that long. Several years ago I was sick and tired of hearing about CO2 trapping heat in the atmosphere. I was smart enough to know that doubling CO2 doubled the rate of radiative cooling as well as doubling the opportunity for absorption. How did anyone know which phenomenon dominates? I wasn’t fooled by models of optically thick layers of atmosphere with the same temperature on the top and bottom (and no temperature gradient to drive heat transfer through the layer via the 2LoT). The increase in DLR at the surface is less than 1 W/m2/doubling. Why can’t that extra heat be removed by an increase in convection to where the atmosphere is more transparent? Others were kind enough to help me learn. When I finally saw the Schwarzschild eqn (which is rarely discussed), everything quickly became clear. If Mike prefers to agitate rather than learn, perhaps someone else will benefit. I find my anger years ago somewhat embarrassing today even though I’m mostly a lukewarmer who agrees with Judy that uncertainty is high.
I do my own analysis.
I average the day’s rising temperature for all of the included stations in the analysis area, but only for years with at least 360 daily records, and divide it by the average daily solar forcing calculated for each station from the latest TSI data and the station’s latitude.
Curiously, it identifies the area that caused the 97-98 temperature step after the El Niño.
BTW, here is an example of a control knob.
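For what it's worth, the station-averaging procedure described above might be sketched like this. The function name, data layout and numbers are all hypothetical; the original analysis's actual details are not given in the comment:

```python
# Hedged sketch of the described procedure: average each station's daily
# temperature rise for years with at least 360 daily records, then divide
# by that station's computed solar forcing. Data layout is hypothetical.
def station_index(daily_rises_by_year, solar_forcing):
    """daily_rises_by_year: {year: [daily rise in K, ...]};
    solar_forcing: average daily solar forcing for the station, W/m2."""
    yearly_means = [sum(days) / len(days)
                    for days in daily_rises_by_year.values()
                    if len(days) >= 360]          # drop incomplete years
    if not yearly_means:
        return None
    return (sum(yearly_means) / len(yearly_means)) / solar_forcing

# Toy usage with made-up numbers (the 100-record year is excluded):
example = {2000: [10.0]*365, 2001: [11.0]*360, 2002: [9.0]*100}
print(station_index(example, solar_forcing=240.0))
```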
I like talking about the ‘sensitivity’ of chaotic operators.
It’s obvious to me that the slope of this line is 0.1.
I beg to disagree. It’s obviously 13.7 – no, wait, – .000165 – no,wait, 1.65 – . . .
How does a chaotic attractor develop in time?
I hope you don’t mind if I stick my bib in.
nickels’ graphic depicts the Lotenz attractor.
A one word answer is “chaotically”.
From Wolfram Mathworld –
“Trajectories within a strange attractor appear to skip around randomly.”
I’m impressed by chaos. It seems to underlie the universe.
Whoops. Lorenz, of course. Sorry.
That’s the point. Chaotic attractors are “clouds” of trajectories of a chaotic system. As such, they are independent of time; only individual trajectories depend on time. Dame Slingo would have to run her model over thousands of years to get an idea of a shape of an attractor (of her model, which may have nothing to do with a real climate). A projection of it onto a single dimension of temperature would look totally unremarkable. To get a nice picture you need two dimensions; what other dimension would she select as the best marketable one?
This picture is the evolution of the Boussinesq equation in its first three Fourier modes. Its path is a trajectory through time. It covers the attractor, just as any other initial condition would.
Projections into any dimension are chaotic in the sense that they are massively nonlinear.
Any function that isn’t dead will have this property.
So slope? It has no meaning.
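The system under discussion is easy to reproduce. A minimal integration of the Lorenz equations (simple Euler steps with the classic parameters, a sketch rather than a production integrator) shows a bounded trajectory that never settles down and keeps switching lobes, which is why a fitted "slope" carries no meaning:

```python
# Minimal integration of the Lorenz system, the three-mode Fourier
# truncation of the Boussinesq equations mentioned in the thread.
def lorenz_trajectory(x=1.0, y=1.0, z=1.0, sigma=10.0, rho=28.0, beta=8/3,
                      dt=0.001, steps=50_000):
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx*dt, y + dy*dt, z + dz*dt
        points.append((x, y, z))
    return points

traj = lorenz_trajectory()
xs = [p[0] for p in traj]
# The trajectory stays bounded (on the attractor) but keeps switching
# between the two lobes, so x repeatedly changes sign:
print(min(xs) < 0 < max(xs))  # True
```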
Young scientists today remain as unbiased as ever according to the mail…
they still carry on their fear of old, fat, white ex-smokers who make all the stupid decisions and we just picked up the baton from the Greatest Generation Ever… what do you expect from us anyway?
fMRI strikes yet again???
CO2 is a greenhouse gas, so as it increases its concentration in the atmosphere, it increases the atmospheric temperature.
So say thousands of writers.
Think of a parabolic mirror, a dish that focuses sunlight. You can use it to create high temperatures in the atmosphere, hot enough to make molten salt. If I focus a mirror, I can heat part of the atmosphere. If I use a huge number of mirrors, I can heat a lot of the atmosphere, like the arrays of collectors used to make solar energy. But do they heat the whole atmosphere? The answer is no, they do not. For every mirror that focuses sunlight, there is a shadow cast under it, wherein it is colder than it would have been. Zero sum game.
Some says greenhouse gases are like blankets that delay the departure of heat and so keep the body warmer. But, as they do so, they prevent some circulation that was outside the blanket, making regions colder than they would have been without the blanket. Zero sum game.
What, then, is the difference between some CO2 molecules getting into excited states by capturing sunlight and a mirror raising some of the air molecules to excited states by capturing some sunlight?
If someone like Steven Mosher can explain the physics of the difference, I might start to listen. Even better, an explanation from an engineer who measures heat in gases, or a spectroscopist who deals with excited states of molecules ….
The interest is not in the capture of energy by GHG, it is in the release of energy, finally to space, after that capture. There are many, many competing descriptions of the exit, but only one can be correct.
I do not doubt that you can use light and CO2 to raise temperatures. I worked for a couple of years with high powered CO2 lasers and have seen them melt one inch thick steel. But, whether the whole atmosphere would show an averaged, increased temperature if we operated many such devices, no, the heat escapes to space.
Back to the start. CO2 is a greenhouse gas, so as it increases its concentration in the atmosphere, it increases the atmospheric temperature locally, but overall, the net heating of the air is zero. Is this the case?
Geoff: Your analysis of the effect of increasing GHG is approximately correct. Doubling the number of GHGs doubles the number of photons emitted towards space, but cuts the distance the average photon travels before being absorbed approximately in half. To a first approximation, these cancel. (This is a modest over-simplification.) However, emission (but not absorption) depends on temperature. When GHGs increase, the average photon that reaches space has been emitted from a higher altitude – and generally a colder altitude. If our atmosphere didn’t have a temperature gradient, increasing GHGs wouldn’t cause any reduction in LWR reaching space. (See Lindzen’s article “Taking Greenhouse Warming Seriously”.)
In the troposphere, GHGs actually emit more LWR photons than they absorb. Roughly 100 W/m2 of heat (mostly latent heat) needs to be carried aloft by convection to keep the atmosphere from cooling.
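The "emitted from a higher, colder altitude" argument above can be put in rough numbers. With a 6.5 K/km lapse rate and a 255 K emission level (both standard textbook values used here as assumptions), one can ask how far the effective emission level must rise for OLR to drop by ~4 W/m2:

```python
# Rough scale of the emission-height shift implied by a 4 W/m2 forcing,
# using dF = 4*sigma*T^3*dT and dT = lapse * dz. Illustrative values only.
sigma = 5.67e-8                 # Stefan-Boltzmann constant
T_e = 255.0                     # effective emission temperature, K
lapse = 6.5                     # lapse rate, K per km
dF = 4.0                        # drop in OLR for doubled CO2, W/m2
dT = dF / (4 * sigma * T_e**3)  # cooling of the emission level needed, K
dz_km = dT / lapse              # corresponding rise in emission height
print(round(dz_km * 1000))      # 164, i.e. roughly 160 m
```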
It seems that the Earth has managed to cool from a molten blob, to its present mostly molten state, GHGs notwithstanding.
Fourier said that during the night, the Earth loses all the heat it receives during the day, plus a little more of its own.
He used the little bit extra (measured as best he could) to work backwards and calculate the age of the Earth. Lord Kelvin did likewise. Both were wrong, neither knowing that radioactive mass conversion generates lots of heat from small amounts of matter. E=mc^2, and all that.
Foolish Warmists seem to lack basic understanding in many areas. Crustal movements, the structure of the Earth, chaos, statistics, the causes and mechanics of ocean currents, the relationships between heat, energy, temperature, basic quantum electrodynamics, all seem to be beyond the ken of many foolish Warmists.
No falsifiable hypothesis apparently exists to support the contention that CO2 causes warming. Just more febrile attempts to deny, divert, and confuse when asked to scientifically justify their preposterous claims.
Foolish Warmists! Pfui!
Twiddling the Arrhenius controls to make the planet cool down some? Easy when you’ve just got the one knob on your console!
Donkey Kong was more complex than this climate science.
Fat, small, white, vapor driven… even I am getting a bit confused about just which leg of science I am pulling anymore.
This paper illustrates the difference between science and lock-step political propaganda disguised as “consensus science.” http://www.chroniclelive.co.uk/business/business-news/mini-ice-age-could-freeze-11607587
“On decadal timescales the climate sensitivity [to solar irradiance variation] can be expected to be larger due to several positive feedbacks. Potential positive feedbacks include a decrease in the ice and snow cover, an increase in plant absorptivity as it becomes greener in a warmer world, an increase in absorption by water surfaces as wind velocities decrease (based upon changes in the length of the day), and changes in plant orientation and albedo as wind velocities change. The last three potential feedback loops are not included in present day climate models.” ~ Hoyt & Schatten 1993
In comments on two threads over the past week or so you have asserted we don’t need to know the damage function in order to justify GHG mitigation policies. This suggests a lack of understanding on your part of what is required for rational policy analysis.
Mosher said @ https://judithcurry.com/2016/07/06/is-much-of-current-climate-research-useless/#comment-795921 :
True, GHG emissions may cause damages. And they may not. We don’t know. Until we have a damage function we won’t know if GHG emissions are damaging or how damaging. We won’t be able to say if mitigation policies will do more harm than good.
Mosher said @ https://judithcurry.com/2016/07/12/are-energy-budget-climate-sensitivity-values-biased-low/#comment-796347
The video is difficult to hear. However, it is by scientists, not people who understand policy analysis. Furthermore, their argument is fundamentally flawed. They’ve built their argument on an assumption that greater than 2C warming is dangerous and should be avoided. However, they present no evidence to support that fundamental assumption upon which the rest of their argument is built.
Mosher @ https://judithcurry.com/2016/07/12/are-energy-budget-climate-sensitivity-values-biased-low/#comment-796437
Of course we have to justify policies on the basis that the economic benefits exceed the economic costs. Therefore, we have to be able to estimate both the economic costs and the economic benefits of the proposed policy.
And yes, the climate industry is costing around $1.5 trillion per year (according to the Climate Industry itself and also to the insurance industry). If the benefits do not exceed the costs, those resources could be better employed on other policies to improve human wellbeing (read up on the broken window fallacy). The Australian Treasury estimated in 2011 that the Australian ETS (which ran for two years before being repealed) would cost $1.345 trillion from 2012 to 2050 (in 2012 A$). They did not estimate benefits (because that is not possible without a valid damage function). Nor could ExternE estimate the external cost of climate change caused by GHG emissions from fossil fuels, because there is no damage function.
Steven, I suggest to you, instead of telling me to watch and learn, you should do so yourself or stay out of advocacy.
Have a glance at “Operationalizing climate targets under learning: An application of cost-risk analysis” 2014 Delf Neubersch · Hermann Held · Alexander Otto DOI 10.1007/s10584-014-1223-z to get a feel for the argument that leads to using techniques without a damage function (the argument is damage is too difficult to do in the time we have, so need to use the target as effectively a surrogate). Also interesting discussion of where all this leads.
Thank you. The problem is that there is no justification for the 2C target. It’s a purely political target.
I have another post below awaiting moderation. Have a look at this when it’s released and tell me what you think.
There is a justification for the 2 C target, it’s just a really bad justification.
Warming more than 2 C would mean that the planet is warmer than it has ever been in all of human history, and somehow that is bad. Or something along those lines.
On the other hand, humans have higher average life expectancy than ever in human history, so I guess we need to start killing old people to curb life expectancy.
Peter: Although I agree with you in general about needing to understand the cost of damages due to global warming, some aspects of damage are obvious:
1) A change in sea level in either direction is costly. Based on the LGM, equilibrium sea level rise is about 120 m / 6 K or 20 m/K. As ice caps retreat poleward, this figure will decrease, but 1/2 or 1/3 this amount would still be troubling. The ice cap in southern Greenland apparently survived several millennia of the Holocene Climate Optimum – which was clearly warmer in the Arctic than pre-industrial based on the northern tree line – but I wouldn’t be shocked if persistence of the 1 K of warming we experienced in the 20th century eventually melted the southern half of Greenland and a similar volume in Antarctica. Of course, there is currently no evidence for the acceleration needed to produce 1 m of SLR by 2100 (an acceleration of 1 inch/decade/decade). If your damage function goes to zero with time (some advocate a plateau), this might be too distant to be important.
2) People, plants and animals are generally well adapted to the climate where they live – so any change is costly. On paper, the Iowa corn belt can migrate into southern Canada. However, there will be costs. People in rich societies move to warmer locations – but the cost of electricity for air-conditioning and of importing water is high.
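As a rough check on the acceleration figure in point 1, here is a minimal sketch using constant-acceleration kinematics. The present rate of rise (~3 mm/yr) and the 84 years remaining to 2100 are my assumptions, not stated in the comment:

```python
# Rough consistency check of the claim in point 1:
# ~1 m of sea-level rise by 2100 requires an acceleration of ~1 inch/decade/decade.
# Assumed (not stated in the comment): present rate ~3 mm/yr, 84 years to 2100.
v0 = 30.0                             # mm/decade, assumed current rate (~3 mm/yr)
a = 25.4                              # mm/decade^2, i.e. 1 inch/decade/decade
t = 8.4                               # decades remaining to 2100
rise_mm = v0 * t + 0.5 * a * t ** 2   # distance under constant acceleration
print(round(rise_mm))                 # ~1148 mm, roughly the 1 m figure
```

Under these assumptions the stated acceleration does indeed yield on the order of a metre by 2100, so the arithmetic in the comment is internally consistent.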
Thank you for your comment on the damage function issue. I believe it is the most important and yet the most uncertain of all the inputs needed for rational climate analysis.
I welcome constructive, objective, rational, fact-based discussion on the issue. However, your comment is just a restatement of well-worn assertions based on beliefs, not facts. We need numbers and costs, not statements of belief.
If these points were obvious, they’d be easily demonstrated with facts, and that would have been done long ago. The fact that it hasn’t been done demonstrates it is not obvious. It is extremely difficult to demonstrate. In fact, I doubt it is even true. But I am happy to be proven wrong with facts – i.e. the net-cost or net-benefit of an increase in CO2 concentration in the atmosphere.
1) I suspect the real cost of sea level rising due to human caused GHG emissions is negligible. Humans will easily adapt at the rate sea level rises. The cost is trivial compared with cumulative global GDP over 100 years http://link.springer.com/article/10.1007%2Fs11027-010-9220-7 . Furthermore, mitigation policies will have negligible impact, so they cannot be justified unless they are No Regrets policies.
2) Likewise, we’ll adapt and plants will adapt (in part with our help) faster than the climate changes. The benefits of increasing CO2 concentration in terms of extra food may greatly exceed the costs.
I’d welcome a discussion based on facts, not unsupported beliefs and assertions. Please don’t make more unsupported assertions.
Despite 30 years of climate research we still do not have a damage function with acceptably low uncertainty, let alone a widely accepted damage function. ExternE said they could not estimate the externalities of CO2 emissions, nor could The Australian Treasury when estimating the economic cost of the Australian ETS.
Mosher @ https://judithcurry.com/2016/07/06/is-much-of-current-climate-research-useless/#comment-794926
Even if true (but I am not persuaded it is true), so what? Is that good or bad? What is the net-cost or net-benefit of GHG emissions? Without a damage function we have no idea. Therefore there is no rational justification for the climate industry and its $1.5 trillion expenditure.
I don’t accept your statement. It seems to be based on three unstated assumptions, none of which I accept:
1) there is no natural variability,
2) ECS is 3C or greater and
3) global average temperature changes immediately in response to a doubling of CO2 concentration.
Furthermore, these three charts give me no cause for concern about a 3 C rise in global average temperature:
The caption is here: https://www.academia.edu/12114306/Phanerozoic_Global_Temperature_Curve
Source for charts and explanation: https://www.academia.edu/12082909/Some_thoughts_on_Global_Climate_Change_The_Transition_from_Icehouse_to_Hothouse
My interpretation of these three charts is as follows:
The 2nd chart – ‘Tropic to poles temperature gradient – Icehouse to Hothouse’ shows that if the global average temperature increases by 3C from the current ~15C to 18C, the temperature at the poles would increase from -36C to -7C, and the temperature gradient from tropics to poles would decrease from 0.82C to 0.44C per degree latitude. That’s likely to be a massive net-benefit for the mid and higher latitudes.
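Taking the chart values quoted above at face value, the implied polar amplification can be checked directly. This sketch uses only the numbers given in this comment, which I have not independently verified against the source charts:

```python
# Chart values as quoted in the comment (not independently verified):
global_now, global_hot = 15.0, 18.0   # C, global average temperature
pole_now, pole_hot = -36.0, -7.0      # C, polar temperature

global_warming = global_hot - global_now   # 3.0 C
polar_warming = pole_hot - pole_now        # 29.0 C
print(polar_warming / global_warming)      # ~9.7: poles warm nearly 10x the global mean
```

So, on these numbers, a 3 C rise in the global mean corresponds to almost ten times that warming at the poles, which is the polar amplification the comment relies on.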
The 1st chart shows that if the global average temperature increased by 3C, the temperatures would be similar to what they were about 35 million years ago. The 3rd chart shows that the temperature in the tropics 35 million years ago was about 1C higher than now.
This suggests a 3 C rise in global average temperature would mean a small (~1 C) change in the average temperature of the tropics and a large, beneficial warming of the mid and higher latitudes.
Given this, I am not persuaded there is valid justification for the Alarmists’ scaremongering.
In the above comment, I wanted to make these points:
1. even if the global average temperature increases by 3C, the tropics would warm by only about 1C.
2. The high and mid latitudes would warm much more.
3. Warming of the mid and high latitudes would be beneficial for life on Earth.
4. We don’t know whether 1 C warming of the tropics would be net-beneficial or net-damaging, but I am not aware of persuasive evidence it would be a serious threat – because (as the third chart shows) the tropics have been much warmer in the past and life thrived in such periods.
Discussion welcome. In particular, is the chart of tropical temperatures consistent with accepted wisdom? If not, please provide a link to a more authoritative and widely accepted chart of the average tropical temperature.
=>> 3. Warming of the mid and high latitudes would be beneficial for life on Earth. ==>
Ya gotta love respect for uncertainty.