Abnormal climate response of the DICE IAM – a trillion dollar error?

by Nic Lewis

Last week, a U.S. federal court upheld the approach that the government uses to calculate the social cost of carbon when it issues regulations [link].  The models appear to have seriously overestimated the social cost of carbon.

Introduction and Summary

Integrated assessment models (IAMs) combine simple models of the carbon cycle and of the response of the climate system to changes in atmospheric carbon dioxide (CO2) concentration with models of economic growth that incorporate the effects of imposing a carbon tax. They calculate the resulting social utility, after allowing for estimated damages from climate change and for the costs of the measures adopted – in response to the carbon tax – to abate CO2 emissions. The optimum time-varying carbon tax computed by IAMs, being that which maximises the discounted value of utility out to a specified end date, is equal throughout the period to the social cost of carbon (SCC). Results from IAMs are used by governments when deciding what carbon taxes to impose and/or what levels of emission reductions to target.[i]

This article primarily concerns DICE, one of three IAMs used by the US Government in their assessment of the SCC,[ii] which was developed by William Nordhaus. He has written a book chapter that provides a good introduction to IAMs in general and to DICE in particular.[iii] The DICE model spans 2010 to 2309 in 5-year time steps, with the carbon tax being varied from 2015 on. In this article I am just concerned with the workings of the DICE climate module and I do not question the rest of the model.

Although I consider the values of equilibrium climate sensitivity (ECS) used in the DICE model and of the resulting transient climate response (TCR) produced by its climate module to be higher than justified by best estimates based on observations of the climate system, I do not challenge those values here. However, I will show the DICE climate module to be mis-specified, in the sense that the time profile of its temperature response to forcing is inconsistent with understanding of the behaviour of the actual climate system, as reflected in and simulated by current generation (CMIP5) atmosphere-ocean general circulation models (AOGCMs).

It has been shown that the evolution of global-mean temperature in AOGCMs may be well represented by a simple physically-based 2-box model, as used in DICE, with suitable choices of ocean layer depths for each box. However, I show here that the climate module parameter values used in DICE correspond to physically unrealistic ocean characteristics. In the DICE 2-box model, the ocean surface layer that is taken to be continuously in equilibrium with the atmosphere is 550 m deep, compared to estimates in the range 50–150 m based on observations and on fitting 2-box models to AOGCM responses. The DICE 2-box model’s deep ocean layer is less than 200 m deep, a fraction of the value in any CMIP5 AOGCM, and is much more weakly coupled to the surface layer. Unsurprisingly, such parameter choices produce a temperature response time profile that differs substantially from those in AOGCMs and in 2-box models with typical parameter values. As a result, DICE significantly overestimates temperatures from the mid-21st century on, and hence overestimates the SCC and optimum carbon tax, compared with 2-box models having the same ECS and TCR but parameter values that produce an AOGCM-like temperature evolution.

My analysis shows that if the parameters of the DICE climate module are altered from their standard settings so as to be consistent with AOGCM behaviour and actual ocean characteristics, but leaving its ECS and TCR values unchanged, the SCC and optimum carbon tax follow a substantially lower trajectory (a quarter to a third lower, up to 2110), with greater CO2 emissions but a lower peak level of warming. The present value of utility is improved by up to $19 trillion, depending on which alternative parameter set is used. When using these parameter sets, a loss of utility of the order of $1 trillion results from imposition of the higher carbon tax that is optimum when using the DICE climate module standard settings, instead of their own, lower, optimum carbon tax. The climate response profiles in FUND[iv] and in PAGE,[v] the other two IAMs used by the US government to assess the SCC, appear to be similarly inappropriate, suggesting that they also overestimate the SCC.

The climate module in the current version of DICE

I use the latest version of DICE, DICE-2013R (April 2014), the geophysical modules of which are stated to be largely consistent with the IPCC Fifth Assessment Working Group I report (AR5). It is stated to be in final form.[vi] However, the Excel version available for download[vii] does not incorporate a number of parameter revisions that were made in the primary (GAMS language) October 2013 version of the model, DICE-2013Rv2, according to its manual.[viii] Of particular relevance to its climate response are a reduction of the default ECS used, from 3.2°C to 2.9°C, and revisions in other climate module parameters. I have incorporated these changes, and parameter revisions in other parts of the DICE model that affect emissions and the SCC, in an updated Excel version that I refer to as DICE-2013Ra.[ix] The manual states (page 18) that a change in a climate module parameter[x] was made to set the transient temperature sensitivity, as estimated by a regression analysis, at 1.70°C. It is confirmed elsewhere[xi] that this sensitivity is intended to be the same measure as the TCR in AOGCMs, being the increase in global mean surface temperature (GMST) at the end of a seventy-year period, starting from equilibrium, during which CO2 concentration rises at 1% p.a. compound, thereby doubling.

The DICE climate module represents a standard global 2-box climate model,[xii] [xiii] in which the atmosphere is in continuous equilibrium with a surface mixed layer of the ocean that exchanges heat with the deep ocean at a rate proportional to the difference in temperature between them. This simple model is fully defined by specifying the ECS, the heat capacities CSO and CDO of the ocean surface and deep layers respectively, and the coefficient of proportionality FSD between the rate of interlayer exchange of heat and the interlayer temperature differential.[xiv] Alternatively, the value of FSD is uniquely determined if the TCR of the model is specified as well as its ECS.
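
To make the structure concrete, here is a minimal sketch of such a 2-box model in Python. This is illustrative only, not the DICE implementation: the default layer depths and FSD value are placeholders, and the depth-to-heat-capacity conversion follows note [xxiii].

```python
import numpy as np

SECONDS_PER_YEAR = 3.156e7

def heat_capacity(depth_m, ocean_frac=0.71, c_vol=4.1e6):
    """Heat capacity per m2 of Earth's surface, in W yr m-2 K-1 (conversion per note [xxiii])."""
    return depth_m * ocean_frac * c_vol / SECONDS_PER_YEAR

def two_box_gmst(forcing_wm2, ecs=2.9, f2x=3.8, depth_surf=100.0,
                 depth_deep=1500.0, f_sd=0.7, dt=0.1):
    """Surface-box temperature response T1(t) of a standard 2-box model.

    The layer depths and f_sd defaults here are purely illustrative.
    forcing_wm2: forcing at each dt-length time step (W m-2).
    """
    lam = f2x / ecs                    # climate feedback parameter, W m-2 K-1
    c_s = heat_capacity(depth_surf)    # surface (mixed) layer heat capacity
    c_d = heat_capacity(depth_deep)    # deep-ocean layer heat capacity
    t1 = t2 = 0.0
    temps = np.empty(len(forcing_wm2))
    for i, f in enumerate(forcing_wm2):
        # surface box: forcing, radiative restoring, exchange with deep box
        t1 += dt * (f - lam * t1 - f_sd * (t1 - t2)) / c_s
        # deep box: warms in proportion to the inter-layer temperature difference
        t2 += dt * f_sd * (t1 - t2) / c_d
        temps[i] = t1
    return temps

# response to an abrupt doubling of CO2, held for 250 years
gmst = two_box_gmst(np.full(2500, 3.8))
```

Passing a constant forcing array gives the step response discussed below; at equilibrium the surface temperature tends to ECS × (forcing/F2x).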

Characteristics of 2-box climate models

The time-varying change T(t) in GMST in the model in response to a step change in radiative forcing, such as that caused by a jump in atmospheric CO2 concentration, is given by the weighted sum of two exponential functions with time constants τ1 and τ2:

T(t) = k1 (1 – exp[–t/τ1]) + k2 (1 – exp[–t/τ2])

The sensitivity factors k1 and k2 determine how the final, equilibrium GMST response (to which they sum) is divided between two component responses with differing time constants. A derivation of the mathematical relationships between the 2-box model physical parameters and the resulting exponential function time constants and sensitivity factors is given by Berntsen and Fuglestvedt (2008).[xv] Generally, for given values of ECS and FSD, the heat capacity of the ocean surface mixed layer, CSO, which is proportional to its depth, strongly positively influences the value of τ1, whilst that of the deep ocean, CDO, strongly positively influences τ2. The relative values of k1 and k2 are strongly influenced by FSD. When formulating 2-box models, the surface layer that is in equilibrium with the atmosphere is typically taken to have a depth of 70–120 m, averaged over the ocean area, corresponding broadly to estimates of the mixed-layer depth.[xvi] Whilst the mean depth of the ocean is nearly 3,700 m, the effective depth for the purposes of a 2-box model may be smaller because less heat exchange occurs below the bottom of the thermocline (the effective depth of which is of the order of 1,000 m). Consistent with these characteristic ocean depths, in 2-box models one expects τ1 to be a fairly short period – under ten years – and τ2 a long one – over a hundred years.
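
For readers who prefer a numerical route to those relationships, the time constants and sensitivity factors can also be extracted by eigen-decomposition of the 2-box system matrix, avoiding the algebra. A sketch (heat capacities in W yr m-2 K-1, FSD in Wm-2K-1):

```python
import numpy as np

def two_box_modes(ecs, f2x, c_so, c_do, f_sd):
    """Time constants tau1 < tau2 (years) and sensitivity factors k1, k2 (K)
    of the 2-exponential step response of a 2-box model."""
    lam = f2x / ecs                    # feedback parameter, W m-2 K-1
    # homogeneous dynamics: d/dt [T1, T2] = A @ [T1, T2] (+ forcing on box 1)
    A = np.array([[-(lam + f_sd) / c_so, f_sd / c_so],
                  [f_sd / c_do, -f_sd / c_do]])
    eigvals, eigvecs = np.linalg.eig(A)
    taus = -1.0 / eigvals              # both eigenvalues are real and negative
    # decompose the equilibrium state (ECS, ECS) over the eigenvectors; the
    # surface-box component of each mode is its sensitivity factor, and
    # k1 + k2 = ECS by construction
    weights = np.linalg.solve(eigvecs, np.array([ecs, ecs]))
    k = weights * eigvecs[0, :]
    order = np.argsort(taus)           # fast mode first
    return taus[order], k[order]
```

Since k1 + k2 sums to the ECS, the fraction k2/(k1 + k2) is the share of equilibrium warming carried by the slow mode – a quantity that becomes important below.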

Although the 2-box climate model is very simple, the 2-exponential response it generates has been found able – with suitable parameter choices – to represent the GMST behaviour of AOGCMs remarkably well.[xvii] Figure 1 below, which reproduces Fig.2 of Caldeira and Myhrvold (2013),[xviii] shows how good 2-exponential fits (blue lines) are for CMIP5 models. Their investigation was based on the results of simulations in which atmospheric CO2 concentration was abruptly quadrupled, which provide a very clear picture of the characteristic time profile of model response to imposed forcing. The 2-exponential fits obtained from those simulations also provided excellent matches to the corresponding models’ GMST behaviour in 140 year ramp-forcing simulations where CO2 concentration was increased by 1% per annum.


Figure 1: GMST change in various CMIP5 models after an abrupt quadrupling of CO2 concentration at time zero (black dots), and as emulated by various simple models with best-fit parameter values. The blue lines represent fits to a 2-exponential response, as given by a 2-box model. In most cases they overlap with the 3-exponential fit brown lines. The olive green lines are for 1-exponential fits; the red lines for fits to a 1D diffusion model.

Consistent with physically-based expectations, Caldeira and Myhrvold found that the values of τ1 and τ2 giving the best fit varied in the ranges 2.4 – 7.4 and 109 – 463 years respectively, according to the CMIP5 model involved. The implied values of CSO and CDO correspond to ocean surface and deep layer depths ranging between respectively 64–150 m and 768–3313 m.[xix] Another study involving CMIP5 AOGCMs, which contains more detail on 2-box models, reached similar conclusions; it also gave values for FSD, which ranged between 0.50 and 1.16 Wm-2K-1.[xx] Likewise, Boucher and Reddy (2008) fitted a two-exponential model to the CMIP3 HadCM3 model and obtained time constants of 8.4 years for τ1 and 409.5 years for τ2, with sensitivity parameters equating to an ECS of 3.90°C and a TCR of 2.17°C.[xxi] Their fitted model was used in AR5.[xxii] Its implied values of CSO and CDO correspond to ocean surface and deep layer depths of respectively 142 m and 1600 m.[xxiii] Berntsen and Fuglestvedt used somewhat different values for CSO and CDO, corresponding to a surface layer depth of 70 m and a deep layer depth of 3,000 m. The Boucher & Reddy and the Berntsen & Fuglestvedt ocean layer depth combinations span much of what is physically plausible (having regard to the depths of the ocean mixed layer, of the ocean below it and of the thermocline) and required to match CMIP5 model behaviour. Accordingly, I use the CSO and CDO value combinations from those two studies to define reference 2-box model variants, and compare their GMST responses with that of DICE-2013Ra. Whilst for each of these 2-box model variants the exponential time constants vary with the specified values of ECS and TCR, the ocean layer heat capacities and related depths remain unchanged.

Comparing the response of the DICE 2-box model with that of the two reference variants

To objectively compare the GMST responses of the different 2-box model parameterisations it is appropriate to specify the same ECS and TCR values for all of them. The differences in response will then depend only on the ocean surface and deep layer depths employed.

All parameters that affect TCR have the same values in DICE-2013Ra as in DICE-2013Rv2. Although Nordhaus states that the estimated TCR of DICE-2013Rv2 is 1.70°C, the value that I calculate for DICE-2013Ra, by applying a linear forcing ramp for 70 years from a preindustrial equilibrium position, is only 1.57°C.[xxiv] The reason for the inaccuracy in the TCR setting in DICE-2013R is unknown; possibly it arises from the regression model used to estimate the model’s TCR being unsuitable.
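
A sketch of that ramp calculation, using the same two-box update as the earlier snippet (the heat capacities and FSD must be set to those of the model variant being tested; a fine time step is used here, whereas note [xxiv] indicates that DICE's 5-year step and start-of-period forcing account for part of the discrepancy):

```python
def tcr_from_ramp(ecs, f2x, c_so, c_do, f_sd, years=70, dt=0.1):
    """TCR: the warming at year 70 of a linear forcing ramp reaching F2x,
    equivalent to CO2 rising at 1% per year compound until it doubles."""
    lam = f2x / ecs                    # feedback parameter, W m-2 K-1
    t1 = t2 = 0.0
    n = int(years / dt)
    for i in range(1, n + 1):
        f = f2x * i * dt / years       # forcing ramps linearly up to F2x
        t1 += dt * (f - lam * t1 - f_sd * (t1 - t2)) / c_so
        t2 += dt * f_sd * (t1 - t2) / c_do
    return t1
```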

I accordingly set the values of FSD in the two reference 2-box model variants – those with the Boucher & Reddy and the Berntsen & Fuglestvedt ocean surface and deep layer depths – so as to produce a TCR of 1.57°C when used in DICE-2013Ra, matching the TCR with the model’s original settings, whilst retaining the same ECS value of 2.9°C. I then calculated, over 250 years, the GMST response of each 2-box model variant to an abrupt doubling of CO2 concentration; the results are shown in Figure 2. The shape of the response of DICE-2013Ra is notably different from that of the other two model variants. The shapes of the lines for the Boucher & Reddy and Berntsen & Fuglestvedt variants are similar to the blue 2-exponential curves, and to the actual AOGCM responses, in Figure 1, whilst the DICE-2013Ra line has a quite different shape – one closer to the green 1-exponential curves in Figure 1.[xxv] After 10 years, the CMIP5 AOGCM responses in Figure 1 have reached between 38% and 61% of their estimated ultimate, equilibrium, warming. After 100 years, they have reached between 60% and 86% of their equilibrium warming. The 10 year percentages for the Boucher & Reddy and Berntsen & Fuglestvedt models are respectively 44% and 51%; at 100 years they are 67% and 61%. These values lie within the ranges for the Figure 1 AOGCMs. However, the 10 and 100 year percentages for DICE-2013Ra, at 22% and 89% respectively, lie outside the AOGCM ranges.


Figure 2: GMST response of each 2-box model variant to an abrupt doubling of CO2 concentration

The reason for the abnormally shaped GMST response in DICE-2013Ra is that its 2-box climate model embodies physically unrealistic ocean layer depths. Its implicit CSO value equates to a mean depth for the ocean surface layer of 553 m – four to eight times that in the two reference model variants, and far greater than the depth of the layer that is in near equilibrium with the atmosphere. The DICE-2013Ra CDO value equates to a mean depth for the deep ocean layer of 191 m – only between about a sixteenth and an eighth of that in the other two model variants. DICE-2013Ra also has a much lower coefficient of heat exchange between the surface and deep ocean layers: its FSD value of 0.088 Wm-2K-1 is under 10% of that in the other two model variants (which have almost the same FSD values as each other).

As a result of the very weak surface to deep ocean coupling, the DICE-2013Ra second exponential term’s time constant is reasonable (216 years) despite the shallowness of the deep ocean layer. However, its first exponential time constant, τ1, is extraordinarily high at 36 years – far outside the 2.4 – 7.4 year CMIP5 range found by Caldeira and Myhrvold. For the reference model variants, τ1 is 5 years for Boucher & Reddy and 2.5 years for Berntsen & Fuglestvedt. In DICE-2013Ra, the GMST response is dominated by that of the first exponential term; due to the weak inter-box coupling the longer time constant second term contributes only 9% of the total equilibrium warming.[xxvi] This is far below the 33–61% range reported by Caldeira and Myhrvold for CMIP5 models, and compares with values of approximately 50% for both the Boucher & Reddy and Berntsen & Fuglestvedt based 2-box model variants.
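
As a cross-check, feeding these quoted DICE-2013Ra values into an eigen-decomposition of the kind sketched earlier (CSO ≈ 51 W yr m-2 K-1 for a 553 m layer, CDO ≈ 17.6 W yr m-2 K-1 for a 191 m layer, FSD = 0.088 Wm-2K-1, and a feedback parameter of 3.8/2.9 ≈ 1.31 Wm-2K-1) reproduces the figures given here and earlier: time constants of approximately 36 and 216 years, a 9% share for the slow mode, and 10- and 100-year responses of 22% and 89% of equilibrium warming.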

Relevant details of all three 2-box model variants are given in Table 1.

Table 1: Properties of each 2-box model variant, all with ECS at 2.9°C and TCR at 1.57°C, the values corresponding to the default DICE-2013Ra parameters.[xxvii]

The effect on the SCC and optimum carbon tax of the different 2-box climate models

The peculiar GMST response of the DICE-2013Ra 2-box climate model is initially much slower than that of the two far more physically plausible alternative 2-box model variants, and than that of CMIP5 models, but its slope subsequently declines much more gradually. By 35 years after a forcing is applied, the DICE-2013Ra temperature response has overtaken that of the alternative 2-box model variants with identical ECS and TCR values, and by year 100 it is well above their responses.

The key question is what effect the varying shapes of these three climate response functions, all with identical values for both ECS and TCR, have on the SCC, on the CO2 emission pathways that follow from imposing an optimum carbon tax – one equal to the SCC – and on the present value of total utility. In order to be able to optimise the carbon tax using the standard Excel solver, which is unreliable when optimising more than a few parameters, I specify the carbon tax by a four-parameter model.[xxviii] I leave the savings rate values in the downloaded Excel model unchanged; the optimum savings rate is little affected by the variations in the optimum carbon tax.
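
As an illustration of this set-up (a sketch only, not the spreadsheet machinery: tax_path is one plausible reading of the four-parameter specification in note [xxviii], and pv_utility is a dummy stand-in for a function that would run the full model and return the present value of utility):

```python
import numpy as np
from scipy.optimize import minimize

def tax_path(params, n_periods=60):
    """Carbon tax per period from four parameters (one plausible reading of
    note [xxviii]): initial tax, initial growth rate, initial change in the
    growth rate, and a constant increment to that change each period."""
    tax0, g0, dg0, ddg = params
    taxes, tax, g, dg = [], tax0, g0, dg0
    for _ in range(n_periods):
        taxes.append(tax)
        tax *= 1.0 + g      # grow the tax
        g += dg             # the growth rate itself drifts...
        dg += ddg           # ...at a rate that changes by a fixed step
    return np.array(taxes)

def pv_utility(taxes):
    """Dummy stand-in: in practice this would run the IAM with the given tax
    path and return the present value of utility (trillions of 2000 $)."""
    return -np.sum((taxes - 30.0) ** 2)   # toy concave objective, illustration only

result = minimize(lambda p: -pv_utility(tax_path(p)),
                  x0=np.array([1.0, 0.10, 0.0, 0.0]), method="Nelder-Mead")
optimal_taxes = tax_path(result.x)
```

The point of the low-dimensional parameterisation is simply that a generic optimiser handles four parameters far more reliably than sixty period-by-period tax values.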

Figure 3 shows the resulting optimised carbon tax values with each 2-box climate model variant. Optimisation is applied from 2015 on; the initial 2010 carbon price is $1 in all cases. The peaks occur when the increasing SCC reaches the backstop price, an initially very high price that declines over time, at which any required amount of emission reduction is assumed to be possible.[xxix] Beyond 2150, in all cases the carbon price continues to decline with the backstop price.[xxx] Until 2110, the optimum carbon tax for the Berntsen & Fuglestvedt 2-box model variant is two-thirds, and that for the Boucher & Reddy variant is three-quarters, of that for the DICE-2013Ra one.[xxxi] These are substantial reductions.


Figure 3: DICE-2013Ra optimum carbon tax profile for each 2-box model variant

The effects of the different carbon tax levels on CO2 emissions, CO2 concentration and GMST

A lower carbon tax implies that optimum emissions are higher. Figure 4 shows the profile of CO2 emissions that results in each case from imposing the corresponding optimum carbon tax, and, for comparison, in the base case where the carbon tax increases only slowly from its 2010 level.[xxxii] The cumulative 2010–2150 emissions in the three cases are 621 GTCO2 (DICE-2013Ra), 790 GTCO2 (Boucher & Reddy) and 858 GTCO2 (Berntsen & Fuglestvedt) – all far lower than the 2,222 GTCO2 in the base case. However, the emissions for the Berntsen & Fuglestvedt based 2-box model variant are 38% higher, and those for the Boucher & Reddy case 27% higher, than when based on the DICE-2013Ra 2-box model.


Figure 4: DICE-2013Ra CO2 emissions with carbon tax optimised for each 2-box model variant

The corresponding atmospheric CO2 concentrations are shown in Figure 5, through to 2295. They peak about 60 ppm higher in the Berntsen & Fuglestvedt case than with the DICE-2013Ra 2-box climate model, reflecting the substantially higher cumulative CO2 emissions, but at only 51% of the baseline case peak level.


Figure 5: DICE-2013Ra CO2 concentration with carbon tax optimised for each 2-box model variant

It might be thought that, with the CO2 concentration being continuously higher with the Berntsen & Fuglestvedt and Boucher & Reddy based 2-box model variants than with DICE-2013Ra’s own 2-box climate model, the resulting increase in GMST would be greater in their cases. In fact, that is not so up to 2300. Indeed, the peak GMST rise is noticeably higher with the native DICE-2013Ra climate model, at 3.0°C in 2120, at which time the rise is 2.5°C in the Boucher & Reddy case and 2.3°C in the Berntsen & Fuglestvedt case. Although in those two cases GMST ultimately rises further, in both cases it remains below 2.8°C for at least a thousand years. By comparison, in the baseline case the peak GMST increase is over 6°C.


Figure 6: DICE-2013Ra GMST change with the carbon tax optimised for each 2-box model variant

Discussion and conclusions

I have shown that the 2-box climate temperature response function used in DICE-2013R is unreasonable: it is based on an ocean surface layer, taken to be in equilibrium with the atmosphere, that is over 550 m deep and very weakly coupled to a deep ocean layer only 190 m thick, and only 9% of the equilibrium response is attributable to the exponential term with a centennial time constant. These values are neither physically realistic, having regard to actual ocean characteristics, nor compatible with the range of values implied by 2-box models that well match the responses of CMIP5 AOGCMs. The abnormal characteristics of the DICE-2013Ra 2-box model cause its estimated SCC and optimum carbon tax to be one third to one half too high over the first hundred projection years, depending on which of the two more physically plausible alternative 2-box model variants considered – both having the same ECS and TCR values as DICE-2013Ra – is used. Thereafter the carbon tax rapidly converges on the same value, declining with time, in all cases.

Qualitatively similar comparative results are obtained if the DICE-2013Ra climate module parameter specified in the manual (c1) is adjusted to achieve a TCR of 1.70°C, the value that was apparently intended, with the parameters of the other two 2-box climate model variants also adjusted to achieve that TCR. The optimum carbon tax profile is higher, and the resulting CO2 emissions profile lower, in all three cases. The depth of the ocean surface layer in DICE-2013Ra is slightly smaller, at 476 m, than with the default parameter values – still several times greater than is physically realistic or within the range of values required to match the behaviour of CMIP5 models. The proportion of the equilibrium response attributable to the exponential term with a centennial time constant in DICE-2013Ra is almost unaffected by the change in TCR; it remains unrealistically low at 9%.

It may be asked why the optimum carbon tax is higher, and hence the level of emissions lower, when the vast bulk of the GMST response comes from the first (box 1) exponential term, with a time constant of several decades, rather than being more evenly split between two exponential terms with time constants of, respectively, well under a decade and over a century. The primary reason is that in the first case the increased forcing produced by emissions, which peaks around 2090 with the DICE-2013Ra carbon tax profile, causes a much greater increase in GMST in the second and third centuries than it does in the second case. Additionally, almost all of the substantial warming in the pipeline as at 2010, of 0.83°C,[xxxiii] emerges over the projection period in the first case, whereas in the second case a substantial fraction of it remains unrealised after 300 years.[xxxiv]

For a rational economist or policymaker, the relevant target might be expected to be the present value (PV) of total utility, discounted over the 300 year time horizon – exactly the target that the optimum carbon tax maximises. In the base case, DICE2013Ra calculates the PV of utility as 2,678.6 (all utilities are in trillions of year 2000 $). With the optimum carbon tax and its own climate model, the PV of utility increases to 2,707.0. If the Boucher & Reddy based 2-box model variant is used instead, the PV of utility rises further, to 2,722.8. If the Berntsen & Fuglestvedt based variant is used, the PV of utility becomes 2,726.0, or $19.0 trillion above the level using the DICE-2013Ra 2-box model. Of greater practical concern is the loss of utility if the higher carbon tax that is optimum when using the DICE climate module standard settings is imposed, when the actual climate response is in line with a more realistic 2-box model. For the Berntsen & Fuglestvedt based variant, this utility loss is $1.1 trillion; it is $0.7 trillion for the Boucher & Reddy based variant.[xxxv] A loss of $1 trillion in the present value of utility equates to about a thirtieth of the total utility attributable to world consumption for the single year 2010.

The climate response profiles in FUND and in PAGE, the other two IAMs used by the US government to assess the SCC, also appear to be inappropriate. They both use one exponential term with the single time constant being of the order of 40 years, so no part of the temperature response has a centennial timescale (versus 9% in DICE-2013Ra). It therefore seems likely that they likewise significantly overestimate the optimum carbon tax at any given level of ECS and TCR, although detailed investigation would be required to confirm this.

It seems rather surprising that all three of the main IAMs have climate response functions with inappropriate, physically unrealistic, time profiles. In any event, it is worrying that governments and their scientific and economic advisers have used these IAMs and, despite considering what ECS and/or TCR values or probability distributions thereof to use, have apparently not checked whether the time profiles of the resulting climate responses were reasonable.

[i] See, e.g., https://www3.epa.gov/climatechange/EPAactivities/economics/scc.html.

[ii] Technical Support Document: Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866. Interagency Working Group on Social Cost of Carbon, United States Government.

[iii] Nordhaus, W. Integrated Economic and Climate Modelling. In Dixon, P.B., Jorgenson, D.W. (Eds.) Handbook of Computable General Equilibrium Modeling. Elsevier (2013) aida.wss.yale.edu/~nordhaus/homepage/documents/Nordhaus_IntegratedAssessmentModels_Handbook_2013.pdf

[iv] Anthoff D and RSJ Tol. The Climate Framework for Uncertainty, Negotiation and Distribution (FUND), Technical Description, Version 3.9 (2014) www.fund-model.org/Fund-3-9-Scientific-Documentation.pdf

[v] Hope, C. The PAGE09 integrated assessment model: A technical description. Cambridge Judge Business School Working Papers 4/2011, University of Cambridge (2011) https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/workingpapers/wp1104.pdf

[vi] http://aida.wss.yale.edu/~nordhaus/homepage/

[vii] DICE-2013 model, beta version of April 22, 2013, developed by William Nordhaus, Yale University. aida.wss.yale.edu/~nordhaus/homepage/documents/DICE_2013R_112513m.xlsm

[viii] The new parameter values are given on pages 96–97 of the second edition (31 October 2013) of the DICE-2013R manual, available at: aida.wss.yale.edu/~nordhaus/homepage/documents/DICE_Manual_103113r2.pdf. The reduction in the default ECS is discussed on page 18 of the manual.

[ix] Climate module parameter c1 has been changed from 0.104 to 0.098 (at the default ECS of 2.9°C), and c3 from 0.155 to 0.088. I have also incorporated the following other changes per the DICE-2013Rv2 GAMS code in the manual: in Initial atmospheric temperature in 2010, from 0.83°C to 0.80°C; in Initial carbon emissions from land use, from 1.53972 to 3.3 GTCO2/yr; in Initial atmospheric concentration of CO2 (used for 2010), from 384.50 to 389.86; in Initial output, from 66.95 to 63.69 2005 $ trillion; in Industrial CO2 emissions for 2010, from 8930 to 9168 MTC; in forcings of non-CO2 GHGs, from −0.06 in 2000 and −0.62 in 2100 to 0.25 in 2010 and 0.75 in 2100 (Wm-2); and in the Damage coefficient on temperature squared, from 0.002131 to 0.00267. Other minor differences in the abatement module have been left untouched, as has population growth, which differs only marginally between the Excel and GAMS versions.

[x] Parameter c1, also termed ξ1 in the manual, where it is wrongly referred to as the diffusion parameter and overstated by a factor of ten.

[xi] Nordhaus, W. Integrated Economic and Climate Modelling, op. cit., p.1088.

[xii] Gregory JM. Vertical heat transports in the ocean and their effect on time-dependent climate change. Clim Dyn (2000) 16:501–515.

[xiii] Held, I. M. et al, Probing the fast and slow components of global warming by returning abruptly to preindustrial forcing. J. Climate, 23 (2010), 2418–2427.

[xiv] The heat capacity of the atmosphere and land is small relative to that of the ocean mixed layer, and is subsumed within the value used for it.

[xv] Berntsen, T and J Fuglestvedt. Global temperature responses to current emissions from the transport sectors. PNAS 105 (2008), 19154–19159. See the Supplementary Information.

[xvi] The annual maximum depth of the ocean surface mixed layer is relevant here.

[xvii] It might be thought that a 1D model in which heat is transferred diffusively between a surface mixed layer and thermocline/ deep ocean, with slow upwelling linked to sinking cold water in polar regions often also being included, would be more appropriate. However, as much of the heat is believed to be transported between ocean layers by advection of water along sloping isopycnal surfaces (Gregory 2000, op. cit.) the simpler 2-box model treatment appears to be satisfactory.

[xviii] Caldeira K and N P Myhrvold. Projections of the pace of warming following an abrupt increase in atmospheric carbon dioxide concentration. Environ. Res. Lett. 8 (2013) 034039 (10pp)

[xix] Ricke K and Caldeira K. Maximum warming occurs about one decade after a carbon dioxide emission. Environ Res Lett 9 (2014) 124002 doi:10.1088/1748-9326/9/12/124002

[xx] Geoffroy et al (2013) carried out a similar exercise with a different, overlapping set of CMIP5 models and found good fits, with the values of τ1 and τ2 giving the best fit varying in the ranges 1.6 – 5.5 and 126 – 698 years respectively. Geoffroy O et al. Transient Climate Response in a Two-Layer Energy-Balance Model. Part I: Analytical Solution and Parameter Calibration Using CMIP5 AOGCM Experiments. J. Clim. 26 (2013) 1841–57

[xxi] Based on the HadCM3 forcing for a doubling of CO2 concentration of 3.68 Wm-2.

[xxii] The Global Temperature change Potentials given in Table 8.A.1 of AR5.

[xxiii] For the ocean only, based on the ocean covering 71% of the Earth’s surface and a seawater volumetric heat capacity of 4.1 MJ K-1 m-3.
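
For example, a 142 m surface layer equates to 142 m × 0.71 × 4.1 MJ K-1 m-3 ≈ 4.1 × 10^8 J K-1 per square metre of the Earth’s surface, or about 13 W yr m-2 K-1.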

[xxiv] Moreover, the theoretical TCR I calculate using the two-box model parameters is only 1.51°C. The difference appears to be due to use of a 5-year time step and to applying forcing increments at the start of each period. When a 1-year time step is used, the TCR calculated from applying a 70-year forcing ramp is only 0.01°C above the theoretical value.

[xxv] If the surface ocean depth in the Boucher & Reddy based variant is swapped with that in the Berntsen & Fuglestvedt based one and the FSD values altered to maintain their TCRs, the shapes of the resulting response curves are almost bounded between those for the two versions with unswapped surface ocean depths over the entire 250-year period.

[xxvi] The warming in the second term of the two-exponential representation of a 2-box model is not the same as the warming of the deep ocean layer in the underlying physical model, which ultimately warms as much as the surface layer.

[xxvii] In DICE-2013R, the forcing from a doubling of atmospheric CO2 concentration is 3.8 Wm-2, so the sum of the sensitivity factors for each model variant is 2.9/3.8 = 0.763

[xxviii] The carbon tax model specifies an initial value, an initial growth rate, and a rate of change of the growth rate that increases by the same specified amount each year. Fixed reductions are applied in the final three periods, matching those in the downloaded Excel model (the reasons for which are unknown); doing so has a negligible effect on the present value of utility. When all parameter values are as in the downloaded model, optimising the four-parameter fit produces an almost identical carbon tax profile to that in the downloaded model, which was derived by optimising each period’s carbon tax independently, and the present value of utility is unchanged to seven significant figures.

[xxix] Per the DICE manual, the backstop technology can replace all fossil fuels; it could be one that removes carbon from the atmosphere or an all-purpose environmentally benign zero-carbon energy technology. The downward trend in its price is based on assumed technological progress.

[xxx] Up to 2290; from then on, the fixed reductions referred to in note [xxviii] are applied.

[xxxi] More precisely, 68% for Berntsen & Fuglestvedt and 73–76% for Boucher & Reddy.

[xxxii] Land use change emissions, which in DICE-2013R are assumed to decline on a fixed basis, are not included. In the base case the $1 initial carbon tax increases by 10.4% in each pentad until 2015, from when the backstop CO2 price is applied.

[xxxiii] The currently unrealised, “in-the-pipeline”, warming at the end of 2010 equals the ECS of 2.9°C x Forcing in 2010 of 2.14 Wm-2 / Forcing for a doubling of CO2 of 3.8 Wm-2, minus the Atmospheric temperature rise in 2010 above preindustrial of 0.8°C.
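
That is, 2.9°C × 2.14/3.8 − 0.80°C ≈ 1.63°C − 0.80°C = 0.83°C.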

[xxxiv] 0.80°C of the 0.83°C unrealised warming as at 2010 emerges by the end of the projections when the DICE-2013Ra climate module parameters are used, but only 0.45°C or 0.60°C when parameters based respectively on the Berntsen & Fuglestvedt or Boucher & Reddy models are used.

[xxxv] In the contrary cases, where the actual climate response is as per the DICE-2013Ra 2-box model but a lower carbon tax that is optimum for one of the two variants is imposed, the utility losses are a little (10-20%) larger. However, it is most unlikely that the DICE-2013Ra response could be realistic.

JC note:  As with all guest posts, please keep your comments civil and relevant. A PDF version of this article is available [here].

156 responses to “Abnormal climate response of the DICE IAM – a trillion dollar error?”


  3. Nic Lewis, thank you for this. And Judith for posting this important contribution (and many others recently).

    • I agree. It takes courage to question the integrity of government science and the once constitutionally independent judicial system.

      • Curious George

        The role of an eyewitness, an expert witness, and a judge:
        – the eyewitness was there, saw it, but does not understand it.
        – the expert witness was not there, but does understand it.
        – the judge was not there, did not see it, does not understand it, but decides it.
        Clearly the choice of expert witnesses is critical. The executive branch is in a very strong position. I wonder: who exactly is the defense in these cases? Maybe it is not a matter for the courts in the first place.

    • Nic Lewis,

      I am not competent to comment on your analysis. I hope William Nordhaus or one of his co-workers will comment.

      I feel there is a more significant cause for concern with the outputs from the IAMs, such as the SCC estimates. It is the damage function. WG3 Chapter 3 has 18 mentions of damage function; http://www.ipcc.ch/pdf/assessment-report/ar5/wg3/ipcc_wg3_ar5_chapter3.pdf ; (for other readers, these are some of them):

      “Damage functions in existing Integrated Assessment Models (IAMs) are of low reliability (high confidence).” [3.9, 3.12]”

      “Our general conclusion is that the reliability of damage functions in current IAMs is low.” [p247]

      “To develop better estimates of the social cost of carbon and to better evaluate mitigation options, it would be helpful to have more realistic estimates of the components of the damage function, more closely connected to WGII assessments of physical impacts.”

      “As discussed in Section 3.9, the aggregate damage functions used in many IAMs are generated from a remarkable paucity of data and are thus of low reliability.”

      Would you be willing to run DICE-2013R (Excel) with the default inputs – for the ‘Copenhagen’ scenario – and then vary these inputs to determine the sensitivity of the key outputs (including net-benefit and SCC) to changes of each of them:

      • Participation rate (1/2 ‘Copenhagen’ participation rate and compare with my chart – see below)
      • damage function
      • ECS (default in the Excel version I used was 3.2C)
      • RCP (default in the Excel version I used was ~RCP8.5)
      • Climate module parameters you suggest
      • Finally, a run with all the above inputs set to what you believe is the most likely value.

      Post charts showing the sensitivity to each input with your best estimate for each input parameter.

      Could you include a chart of net-benefit per 5 years (US$ trillion), e.g. something like this:
      http://anglejournal.com/site/assets/files/1727/lang_1.png

      Source: http://anglejournal.com/article/2015-11-why-carbon-pricing-will-not-succeed/

  4. The government is not interested in the “true” cost of carbon if there even is such a thing. What they are interested in is establishing the largest number possible to effect wealth transfer away from the US.

    • Why would government officials seek to transfer wealth away from the US?

      Do you have any evidence that this is what they are doing?

      • The evidence is just about everything they have done.
        They are killing and driving out industry and jobs with the stuff they do in the name of stopping mother nature from changing climate well inside the bounds of the most recent ten thousand years. They think they can regulate CO2 and stop natural cycles and cause temperature to stay at the same value forever and stop changing in cycles, like it has always done. Whoever believes this junk science, please make some binding bets with me.

    • My goodness, DA. You have never heard of the Green Climate Fund, per COP21 to be funded $100 billion per year, every year, from 2020? Why Would Obama agree to such a wealth transfer. Your question. You answer.

    • Jim Lowrie,

      I don’t know about that.

      When it comes to the ‘Land of Stupid’ award, I think that goes to Germany and Energiewende, hands down.

      New York and California were running a close second, but now Australia has a couple of entrants that are closing in fast:

      Australia’s Gas Paradox: Supply Crunch Looms Despite Rich Reserves
      http://www.rigzone.com/news/article.asp?hpf=1&a_id=146164&utm_source=DailyNewsletter&utm_medium=email&utm_term=2016-08-15&utm_content=&utm_campaign=feature_4

      • Australia, with its vast reserves of natural gas, faces a looming shortage at home as states restrict new drilling onshore and cash-strapped oil and gas companies cut spending….

      • Under pressure from green voters and farmers, Victoria has banned onshore gas developments, including fracking, and New South Wales has restricted developments, limiting new supply.

      • In a glimpse of the future, gas prices spiked to A$45 a gigajoule [US$42.80 per MCF — the current price in the United States is $2.59 per MCF] in Victoria in July, about six times the price of Asian LNG , as a cold snap and a power shortage in neighbouring South Australia led to a surge in demand, forcing gas to be piped from the country’s north, incurring high charges.

      • Over the next five years, Australia’s energy market operator projects average wholesale gas prices will rise from A$5.46 per gigajoule [US$5.19 per MCF] to A$9.28 [US$8.83 per MCF].

      • State and federal energy ministers are due to meet on Friday to decide whether to lift a ban on onshore gas developments.

      • Resources Minister Josh Frydenberg said reforms would aim at a “more affordable, accessible and reliable energy supply.”

      • Australia’s energy minister is urgently calling for action to spur new supply.

  5. Yes, I’m sure this is just the post that will convince a judge he/she was wrong. (Eye roll.)

    • No, Nic’s point isn’t that by revealing errors in the government’s models the court will reverse itself. Rather, he is showing that scientists and agencies of the government are deliberately choosing unrealistic assumptions that aren’t supported by physics to promote inappropriate policies and to support an agenda and a myth – one of redemption of the planet through enormous personal and community sacrifices. Sacrifices, by the way, that won’t succeed in achieving its climate goals but which will result in massive transfers (and diminution) of wealth.

      • jimeichstedt

        My article does not show that “scientists and agencies of the government are deliberately choosing unrealistic assumptions that aren’t supported by physics”.

        David Appell

        “A blog post simply isn’t science. I’m sorry, but it’s not.”

        Neither a blog post nor a peer reviewed paper is in itself science. Such a paper reports the details and results of a scientific investigation. A blog post may do likewise. In either case, adequate information to enable replication of the investigation may or may not be given.

      • jimeichstedt

        The moral philosophy of Obama’s EPA is the same as that of the medieval Catholic Church. As one Roman theologian of the time put it:

        The Catholic Church holds it better that the entire population of the world should die of starvation in extreme agony…than that one soul, I will not say should be lost, but should commit one single venial sin.

      • stevefitzpatrick

        Nic,
        “adequate information to enable replication of the investigation may or may not be given”

        Might I gently suggest that be modified slightly?
        “adequate information to enable critical evaluation and replication of the investigation may or may not be given”

        But in any case, I doubt Appell can really tell the difference between “science” and “not science”, no matter where it is published.

      • jimeichstedt

        My article does not show that “scientists and agencies of the government are deliberately choosing unrealistic assumptions that aren’t supported by physics”.

        Sorry, Nic – you’re right, of course. We don’t know that they understand that their assumptions are unrealistic nor do we know that they deliberately chose to exaggerate the anticipated effects of man-induced climate change. Forgive me for putting words in your mouth.

      • stevefitzpatrick | August 16, 2016 at 5:48 pm |

        ‘Might I gently suggest that be modified slightly?
        “adequate information to enable critical evaluation and replication of the investigation may or may not be given” ‘

        I’m happy with that.

    • Weird comment. Much more important in the long run is if it would convince Nordhaus that he was wrong. Honest scientists want their work to be right. He’s an expert on the economics, not on the physics.

  6. The Courts defer to the federal agencies and the feds have circled the wagons. Perhaps I am being pessimistic, but at this point in time, I don’t see any changes coming to the federal SCC determination except those made by insiders, and the changes, when they come, in net, will result in higher SCC values. No way, no how do the feds reduce the SCC (despite valiant efforts like Nic’s or Ross et al.’s or ours). I’d love to be wrong!

  7. So a federal judge from the Department of Injustice rules on the validity of highly complex technical models, relying on the highly prejudicial EPA. What next, prosecute those who dare question the government’s climate Jihad? Oh wait, that is exactly what the Department of Injustice is doing.

  8. When are the benefits of increased CO2 considered?

    • “To any unprejudiced person reading this account,” Dyson wrote, “the facts should be obvious: that the non-climatic effects of carbon dioxide as a sustainer of wildlife and crop plants are enormously beneficial, that the possibly harmful climatic effects of carbon dioxide have been greatly exaggerated, and that the benefits clearly outweigh the possible damage.”

      • “Unlike the claims of future global warming disasters these benefits are firmly established and are being felt now. Yet despite this the media overlook the good news and the public remain in the dark. My report should begin to restore a little balance.” ~Dr. Indur Goklany

    • In a powerful foreword to the report, the world-renowned physicist Professor Freeman Dyson FRS endorses Goklany’s conclusions and provides a devastating analysis of why “a whole generation of scientific experts is blind to obvious facts”, arguing that “the thinking of politicians and scientists about controversial issues today is still tribal”.

      http://www.thegwpf.org/climate-doomsayers-ignore-benefits-of-carbon-dioxide-emissions/

    • Appell, so CO2 is unhelpful to plants?
      Ditto for warmth?
      Your thesis relies on CO2 causing the planet to become devastated by increased CO2. There is insufficient data to support such a conclusion. Take a deep breath and acknowledge we are not in a position to make a call, one way or the other. However, the information we do have does not warrant hysteria. Tranquillo.

    • So Eddie throws up charts, but doesn’t source them and doesn’t explain them.

      That makes them useless, Eddie.

      • So Eddie throws up charts, but doesn’t source them and doesn’t explain them.

        That makes them useless, Eddie.

        Deniers do tend to deny data – that’s what makes them deniers.

        One of the valid criticisms of both advocates and opponents of climate action is that there is only gloom and doom – every impact is not just negative but horrific, be it severe weather or economic depression from government meddling in the economy.

        In the real world, there are cost-benefit analyses and risk-reward ratios. But the irrational adherence to a tribe ( advocate or opponent ) tends to manifest as exaggeration of harm and denial of benefit. Biology denial is an example.

        Now, exercises such as theorizing the social cost of carbon seem pointless to me because I know the vast assumptions and crude estimates open the door for speculative arguing, not science.

        You can try being honest with yourself ( not with me, I don’t really care ), but it will take some emotional concessions.

    • Dr. Indur M. Goklany (ibid.):

      The approach used in impacts assessments… suffers from three fundamental flaws. Firstly, they rely on climate models that have failed the reality test. Secondly, they do not fully account for the benefits of carbon dioxide. Thirdly, they implicitly assume that the world of 2100 will not be much different from that of the present – except that we will be emitting more greenhouse gases and the climate will be much warmer. In effect, they assume that for the most part our adaptive capacity will not be any greater than today. But the world of 2015 is already quite different from that of 1990, and the notion that the world of 2100 will be like that of the baseline year verges on the ludicrous. Moreover, this assumption directly contradicts:

      (a) the basic assumption of positive economic growth built into each of the underlying IPCC scenarios

      (b) the experience over the past quarter millennium, of relatively rapid technological change and increasing adaptive capacity.

      It is also refuted by any review of the changes that have taken place in the human condition and the ordinary person’s life from generation to generation, at least as far back as the start of the Industrial Revolution.

      • Even if all integrated assessment model inputs were correct, SCC estimation would still be one-sided and misleading. There is no Intergovernmental Working Group on the social costs of carbon mitigation policies. To put this another way, SCC analysis ignores the social benefits of carbon energy, which are substantial.

        For example, as Goklany explains, capabilities supported by carbon energy—including mechanized agriculture, fertilizers, refrigeration, plastic packaging, and motorized transport of food from farms to population centers and from surplus to deficit regions—are among the chief reasons deaths and death rates from drought have declined by 99.8 percent and 99.9 percent, respectively, since the 1920s. A meal that sustains a human life has a social value far exceeding its cost to the farmer or the market price of the food.

        Not that I’m advocating quantification of carbon’s social benefits–another fool’s errand. The point, rather, is that a carbon tax, whether accurately or inaccurately calibrated to the as yet unknown (and likely unknowable) social cost of carbon, is a tax on food. Is a food tax an “efficient” and humane policy in a world where an estimated 795 million people suffer from chronic hunger?

    • @dave

      There is no reason to believe crop yields will decline with a warmer climate. All those models they run on crop yields cannot take into account the ingenuity of the farmers. Crop yields have consistently increased even with current temperature increases.

      I think this article says it all about these so called crop models

      http://phys.org/news/2016-07-gauging-impact-climate-agriculture.html

  9. Nic Lewis,

    These values lie within the ranges for the Figure 1 AOGCMs. However, the 10 and 100 year percentages for DICE-2013Ra, at 22% and 89% respectively, lie outside the AOGCM ranges.

    Your Figure 2 purports to compare results to your Figure 1, which is taken from Caldeira and Myhrvold (2013), yet your Figure 2 only shows results from Boucher & Reddy and Berntsen & Fuglestvedt. This is curious.

    In the supplemental for Caldeira and Myhrvold (2013), Table S2 gives the mean temperature change 100 years after a quadrupling of CO2 as 4.86 K. Divide by two we get 2.43 K for a doubling of CO2, which is much closer to the DICE-2013Ra value of ~2.6 K. It is, in fact, well within the 0.87 K standard deviation of the model means reported in the Caldeira and Myhrvold supplemental. It’s well inside that standard deviation divided by two.

    Apples to apples comparisons with some uncertainty bars in the plots would be helpful when making claims that one model result lies outside the range of a model ensemble result.

    • “In the supplemental for Caldeira and Myhrvold (2013), Table S2 gives the mean temperature change 100 years after a quadrupling of CO2 as 4.86 K. Divide by two we get 2.43 K for a doubling of CO2”

      It doesn’t work like that.

      • Fair enough, David. However, Nic is arguing that the shape of the plots in his Figure 1 are comparable to the curves in his Figure 2, implying some kind of proportionality. My point stands that he doesn’t show the uncertainty range of the model ensemble used in Caldeira and Myhrvold in his Figure 2. My sense is that if he did, the DICE-2013Ra curve would be in bounds, not out of bounds albeit on the high side.

        In short, I think he’s over-egged the pudding.

    • brandonrgates

      “Your Figure 2 purports to compare results to your Figure 1”
      No it doesn’t. It compares, at the same ECS and TCR values, DICE results with those based on using ocean layer depths from 2-box models that were used in two published studies, Boucher & Reddy and Berntsen & Fuglestvedt. The text comments on how the resulting shapes of these curves compare with the results in Figure 1. A visual comparison between the two figures is easy.

      “Divide by two we get 2.43 K for a doubling of CO2, which is much closer to the DICE-2013Ra value of ~2.6 K.”
      “Apples to apples comparisons with some uncertainty bars in the plots would be helpful when making claims that one model result lies outside the range of a model ensemble result.”

      It is your comparison that is not apples-to-apples. The CMIP5 models have a mean ECS above, and a TCR well above, that of the DICE climate model, so the relative shapes of their temperature responses cannot fairly be compared with that of DICE by looking at their mean GMST rise after 100 years. Also, you have divided by two to convert 4x CO2 GMST rises given in Caldeira & Myhrvold to 2x CO2 results that I give, but if you study the CO2 forcing formula that they used (SI, S1) you will find that the correct ratio is 2.13.

      I didn’t attach a statistical significance to the statement of fact (not a mere claim) that the DICE results referred to lay outside the ranges for the Figure 1 AOGCMs. To do so would require that results from a model ensemble could be viewed as independent random draws from a probability distribution, which is questionable.

      • Nic,

        The CMIP5 models have a mean ECS above, and a TCR well above, that of the DICE climate model, so the relative shapes of their temperature responses cannot fairly be compared with that of DICE by looking at their mean GMST rise after 100 years.

        RTFM, I find that ECS for DICE is 2.9. I don’t know what it is off the top of my head for the CMIP5 ensemble mean.

        Also, you have divided by two to convert 4x CO2 GMST rises given in Caldeira & Myhrvold to 2x CO2 results that I give, but if you study the CO2 forcing formula that they used (SI, S1) you will find that the correct ratio is 2.13.

        Thanks. That gives 2.28 K at 100 years, against ~2.6 K for DICE, which is still closer to DICE than it is to Boucher and Reddy.

        I didn’t attach a statistical significance to the statement of fact (not a mere claim) that the DICE results referred to lay outside the ranges for the Figure 1 AOGCMs.

        It’s not clear to me what you think the AOGCM ranges should be. I reiterate, it would be nice to see some indication of that in your Figure 2.

        What’s abundantly clear to me is that DICE runs hotter than than Boucher & Reddy and Berntsen & Fuglestvedt, but why I should take those two models as the better estimates of reality isn’t readily apparent.

        To do so would require that results from a model ensemble could be viewed as independent random draws from a probability distribution, which is questionable.

        Indeed, however we’re already talking about a model ensemble range and mean, so discussing a standard deviation doesn’t seem wholly inappropriate.

      • brandonrgates

        “It’s not clear to me what you think the AOGCM ranges should be.”

        The range of a set of numbers is the interval from their minimum to their maximum. For the Figure 1 AOGCMs that I was referring to, I stated the percentage of equilibrium warming reached after 10 and 100 years and compared with that the same percentages for DICE and the two alternative 2-box model variants I use. Those are comparable figures. It would not be possible to show an apples-to-apples comparison range in Figure 2, since its y-axis is the absolute rise in GMST, which depends on model ECS values as well as the percentage of equilibrium warming they reach at each point in time.

      • Nic Lewis,

        If it’s possible to compare the shapes in percentage terms, it seems to me that it ought be possible to scale the absolute values accordingly. But first things first, here’s a percentage plot:

        https://4.bp.blogspot.com/-lt3YKCFm6a0/V7P0Aa30ZzI/AAAAAAAABDM/UsAa1euNXF4b0-wi9cYbRxpaByRjmfCeQCLcB/s1600/Abrupt%2BCO2%2BDoubling%2BFractional%2BTemperature%2BChange.png

        Light blue shading is the Caldeira & Myhrvold ensemble range using the 2-exp fits, and for the heck of it the dark blue shading is one standard deviation of the ensemble members from the mean. As advertised, DICE is outside the ensemble range on the high side, but so also is Boucher & Reddy on the low side.

        Here’s the absolute temperature plot obtained by dividing the 4xCO2 equilibrium temperature for each model in Caldeira & Myhrvold by 2.13, and multiplying that result by the fractional change values from the above plot:

        https://1.bp.blogspot.com/-zYTrpz1TQo8/V7P0AcpUDrI/AAAAAAAABDI/HIFNXgSNPbAhea5SAGop1tm49lz081UEgCLcB/s1600/Abrupt%2BCO2%2BDoubling%2BAbsolute%2BTemperature%2BChange.png

        Color scheme same as previous. As I suspected, DICE is well within the Caldeira & Myhrvold ensemble range, and for what it’s worth, solidly inside the 1-sigma ensemble envelope to boot.

        Neither Boucher & Reddy nor Berntsen & Fuglestvedt is out of bounds, but B & R is decidedly low. If I were a rational policy maker, I’d be wondering why you’re talking about a trillion-dollar mistake in DICE when that figure comes from a model lying on the low side of the Caldeira & Myhrvold ensemble mean.

        I’d also want an estimate of damages that would accumulate for each year we continue to dink around arguing about whether proposed SCC estimates are the “actual optimal” values or not.

  10. Nic Lewis:

    “In the DICE 2-box model, the ocean surface layer that is taken to be continuously in equilibrium with the atmosphere is 550 m deep, compared to estimates in the range 50–150 m based on observations and on fitting 2-box models to AOGCM responses.”

    I am confused by the above. A 550 m deep layer would seem to be on average cooler and have more thermal mass than a 50-150 m deep layer. Are you saying it is assumed by DICE that the surface controls the 550 m deep layer or the 550 m deep layer controls the surface?

    • Ragnaar

      “Are you saying it is assumed by DICE that the surface controls the 550 m deep layer or the 550 m deep layer controls the surface?”

      In a 2-box model, the air surface temperature globally is effectively controlled by that of the ocean surface layer (however deep). It assumes that the temperature throughout the surface layer is the same. This is a fair approximation for a mixed layer 50 to 150 m deep, but is obviously unrealistic for a 550 m deep layer, which in reality usually has a significant negative temperature gradient below the bottom of the mixed layer. A 550 m deep layer does indeed have more thermal mass than a 50-150 m layer, which accounts for the slow initial GMST response of the DICE model.
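
      To make the depth effect concrete, here is a minimal 2-box model sketch in Python. It is a sketch only: the feedback, coupling and depth values are illustrative round numbers, not the actual DICE or AOGCM-fitted parameters.

      ```python
      import numpy as np

      def two_box_step(d_surf, d_deep, ecs=2.9, gamma=0.7, f2x=3.7,
                       years=300, dt_yr=0.05):
          """Surface-layer warming after an abrupt 2xCO2 forcing, as a
          fraction of its equilibrium value (illustrative parameters)."""
          rho_cp = 4.1e6                             # J/m3/K, seawater
          c1, c2 = rho_cp * d_surf, rho_cp * d_deep  # layer heat capacities, J/m2/K
          lam = f2x / ecs                            # feedback parameter, W/m2/K
          dt = dt_yr * 3.15e7                        # seconds per step
          n = int(years / dt_yr)
          t1 = t2 = 0.0
          times, frac = np.empty(n), np.empty(n)
          for i in range(n):
              flux = gamma * (t1 - t2)               # heat flow into the deep box
              t1 += dt * (f2x - lam * t1 - flux) / c1
              t2 += dt * flux / c2
              times[i], frac[i] = (i + 1) * dt_yr, t1 / ecs
          return times, frac

      for label, d1, d2 in [("100 m surface / 1200 m deep", 100, 1200),
                            ("550 m surface / 190 m deep (DICE-like)", 550, 190)]:
          t, f = two_box_step(d1, d2)
          print(label, "- fraction of equilibrium at 10/100 yr:",
                round(float(np.interp(10, t, f)), 2),
                round(float(np.interp(100, t, f)), 2))
      ```

      With the shallow surface layer, a large fraction of the equilibrium warming is reached within a decade or two; with the 550 m surface layer the early response is far more sluggish, which is the behaviour at issue in the post.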

      • Ignore the ocean deep at your peril.

        ‘Yea, foolish mortals, Noah’s flood is not yet subsided;
        two-thirds of the fair world it yet covers.’
        Melville. ‘Moby Dick.’

  11. Yet another key question is whether atmospheric CO2 is responsive to fossil fuel emissions.

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2642639

  12. Can we dial in the desired atmospheric CO2 by changing the rate of fossil fuel emissions?
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2642639

  13. Nic, very nice and precise essay. Congrats. Per above comments, you have already thrown trolls here into illogical panics. Well played.

  14. Kevin Dayaratna presented a nice analysis of the 2014 update of the Social Cost of Carbon to the House of Representatives Committee on Natural Resources on July 23, 2015. http://www.heritage.org/research/testimony/2015/an-analysis-of-the-obama-administrations-social-cost-of-carbon

    Conclusions:
    The SCC is based on statistical models that are extremely sensitive to various assumptions incorporated within the models. The following tables summarize this sensitivity for the year 2020.

    Moreover, the damage functions that the estimates are based on are essentially arbitrary with limited empirical justification. Even if one were to take their results seriously, their use would result in significant economic damages with little benefit to reducing global temperatures. As a result, these models, although they may be interesting academic exercises, are far too unreliable for use in energy policy rulemaking.

    • Danley Wolf,

      Moreover, the damage functions that the estimates are based on are essentially arbitrary with limited empirical justification. Even if one were to take their results seriously, their use would result in significant economic damages with little benefit to reducing global temperatures. As a result, these models, although they may be interesting academic exercises, are far too unreliable for use in energy policy rulemaking.

      Thank you. That is the really important point that seems to be largely unrecognised or ignored. If the damage functions were based on objective evidence I wouldn’t be too surprised to see them show the SCC to be around $0 (plus or minus!)

    • Yep. From Dayaratna’s testimony:

      It seems to be a fundamental goal of the Obama Administration to expand regulations across the energy sector of the economy. One of the primary metrics that the Administration has used to justify these regulations is the social cost of carbon (SCC), which is defined as the economic damages associated with a metric ton of carbon dioxide (CO2) emissions summed across a particular time horizon.[1]….

      Moreover, the damage functions…are essentially arbitrary with limited empirical justification. Even if one were to take their results seriously, their use would result in significant economic damages with little benefit to reducing global temperatures. As a result, these models, although they may be interesting academic exercises, are far too unreliable for use in energy policy rulemaking.

      What’s the difference between that and Bush/Blair’s policy making? The program consists of first deciding upon a predetermined political agenda. Junk science is then used, as the infamous Downing Street memo revealed, “to fix the facts and intelligence around the policy.”

      That is the hallmark of practically everything that Bush, Blair and Obama have done.

      • Simply put, the SCC is nothing more than a political tool to achieve a desired agenda.

      • What is the political function of the social cost of carbon? The answer should be obvious but consider how SCC modelers describe the policy implications of their work.

        In a 2013 journal article, Chris Hope, creator of the PAGE model, and two co-authors from the Natural Resources Defense Council argue that SCC analysis should use discount rates of 1%, 1.5%, and 2% (http://www.globalwarming.org/wp-content/uploads/2013/10/Johnson-J-Environ-Stud-Sci-2013.pdf). That increases the Obama administration’s year 2010 SCC estimates of $52, $33, and $11 per ton to $266, $122, and $62 per ton.

        Armed with those numbers, Hope et al. conclude renewables are always more “efficient” than new coal generation and usually more efficient than new gas generation. Moreover, if the SCC is $266/ton or even $122/ton, switching from coal to solar is more efficient than maintaining an existing coal power plant.

        Permit me to translate. The political function of SCC analysis is to make fossil fuels look unaffordable no matter how cheap, and renewables look like a bargain at any price.

        However diverting as a blackboard exercise, when used to “inform” public policy, social cost of carbon analysis is computer-aided sophistry.

      • Marlo Lewis,

        should use discount rates of 1%, 1.5%, and 2% (http://www.globalwarming.org/wp-content/uploads/2013/10/Johnson-J-Environ-Stud-Sci-2013.pdf)

        This is a very important point. Why are such low discount rates advocated when they are much lower than what is used for making decisions as to which infrastructure projects to fund? For most of the past 30 years or more, aid agencies have used discount rates of around 10% (and more) to decide which aid projects to invest in. Recently the World Bank and IMF have agreed to reduce this to a consistent 5%. They explain this is for consistency across all projects, but it is not the correct discount rate for estimating NPV and whether or not projects are viable and justifiable. However, the IMF and World Bank have both become progressively more CAGW-alarmist over time. http://www.adaptation-undp.org/resources/relevant-reports-and-publications/handbook-economic-analysis-investment-operations

      • Marlo Lewis,

        I posted the wrong link above. See Slide 5 in Choice of discount rates in World Bank Transport Projects, 2004.

        What discount rates are used in the World Bank:
        – Transport Sector: 12% (14% in Peru, 15% in Philippines)
        – Energy Sector: 10% or 12%
        – Education Sector: 10%, plus cost-effectiveness assessment
        – Environmental Projects: none found with economic evaluation
        – Health Sector: 10% (mostly cost-effectiveness assessment; few economic evaluations)
        – PREM: 4% (Argentina Documentation System)

  15. I wouldn’t worry too much about this “error”, should it prove to be so; IAMs are pretty much worthless regardless.

    “A plethora of integrated assessment models (IAMs) have been constructed… …These models have crucial flaws that make them close to useless as tools for policy analysis:”

    RS Pindyck

    http://www.nber.org/papers/w19244

  16. The simple carbon cycle/climate models included in DICE, FUND and PAGE are calibrated to mimic global warming over the century as projected by a middle-of-the-road general circulation model following a standard emissions scenario.

    That implies, apparently, that these models do not do a good job representing the effect of a transient impulse of carbon dioxide — and that is, of course, what matters when estimating the social cost of carbon.

    • Richard Tol,

      Thank you for contributing. I was hoping William Nordhaus and you would comment.

      Could you clarify what you mean by this:

      That implies, apparently, that these models do not do a good job representing the effect of a transient impulse of carbon dioxide — and that is, of course, what matters when estimating the social cost of carbon.

      Is your point that the transient impulse is not a realistic scenario and that, therefore, the DICE, PAGE and FUND IAMs were not attempting to model that scenario? Consequently, these three models are suitable for their intended purpose, at least as far as modelling the climate response to increasing CO2 concentrations over the century, and they agree reasonably well with the GCMs on the climate projections. Am I correctly interpreting what you meant?

      On a separate matter, could you comment on the accuracy (verification?) and uncertainty of the damage functions used in the IAMs. AR5 WG3 Chapter 3 says the evidence to support the damage functions used in the IAMs is sparse and a lot more research is needed to improve them and reduce the uncertainties. (I realise you have been involved in this research from the beginning and done as much as just about anyone to quantify the damages from GW).

      Some quotes from AR5 WG3 Chapter 3:

      IPCC AR5 WG3 Chapter 3 mentions ‘Damage Function’ 18 times http://www.ipcc.ch/pdf/assessment-report/ar5/wg3/ipcc_wg3_ar5_chapter3.pdf . Some examples:

      “Damage functions in existing Integrated Assessment Models (IAMs) are of low reliability (high confidence).” [3.9, 3.12]

      “Our general conclusion is that the reliability of damage functions in current IAMs is low.” [p247]

      “To develop better estimates of the social cost of carbon and to better evaluate mitigation options, it would be helpful to have more realistic estimates of the components of the damage function, more closely connected to WGII assessments of physical impacts.”

      “As discussed in Section 3.9, the aggregate damage functions used in many IAMs are generated from a remarkable paucity of data and are thus of low reliability.”

    • Peter:
      Put differently, IAMs are calibrated on the level of climate change. What matters for the social cost of carbon is the first partial derivative. Calibrating on the level may imply that the derivative is wrong.

      Indeed, there is too little evidence to have much confidence in these impact functions.
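
      A toy calculation illustrates the point. The two pulse-response curves below reach the same eventual warming (the same “level”) but adjust at different speeds; every number (pulse response, damage coefficient, baseline warming path, discount rate) is an arbitrary placeholder, so only the qualitative contrast matters:

      ```python
      import numpy as np

      years = np.arange(101, dtype=float)
      # Warming responses to a 1 GtC emissions pulse: identical asymptote,
      # different adjustment speeds (10 yr vs 40 yr time constants, assumed).
      pulse = 0.0017                           # K per GtC at equilibrium, assumed
      fast = pulse * (1 - np.exp(-years / 10))
      slow = pulse * (1 - np.exp(-years / 40))

      a = 0.27e12                              # $ per K^2, placeholder damage coefficient
      t_base = 1.0 + 2.5 * years / 100         # assumed baseline warming path, K
      disc = 1.03 ** -years                    # 3% discount factors

      # SCC = discounted sum of marginal damages (2*a*T_base per K of extra
      # warming) times the pulse warming, per tonne of the 1 GtC pulse.
      for name, resp in (("fast", fast), ("slow", slow)):
          scc = np.sum(disc * 2 * a * t_base * resp) / 1e9
          print(f"SCC with {name}-adjusting response: ${scc:.0f}/tC")
      ```

      Because discounting weights the early decades most heavily, the slow-adjusting response gives a noticeably lower SCC even though both responses are calibrated to the same eventual level of warming.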

    • Richard Tol. Thank you. As you are one of the world’s top authorities on damage functions and IAM’s I have high confidence in what you say.

    • Richard Tol,

      Could you please explain (for a non-specialist) how this apparently large discrepancy could have been missed in, I understand, all three IAMs given the effort that has been given to checking, calibrating and trying to find fault in them?

  17. Reblogged this on TheFlippinTruth.

  18. Geoff Sherrington

    Nic,
    Yours is a most interesting and plausible critique of the present approach, which has serious consequences for society if the SCC is in error as you state. Trillions of dollars are too hard to create in real life, but so easy to create with computer code errors.
    In like vein, we have our Australian Government receiving proposals to spend around $18 billion – not trillions yet – to ‘save’ the Great Barrier Reef. There is far too little data to describe what needs to be fixed and no discussion on how $18 billion could be better spent elsewhere.
    Both topics, the Social Cost of Carbon and Save the Barrier Reef, are largely theoretical constructs lacking evidence that allows valid computation.
    Thank you, Geoff.

    • $18 Billion buys an awful lot of administrators, managers, lobbyists and hack scientists. Saving the Barrier Reef is incidental.

    • In their book, “Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future”, Orrin & Linda Pilkey provide many detailed examples of environmental computer models that, although they have failed in their predictions, are still being used because “it is the only thing we have”. Whether it is coral reefs, the atmosphere, beach erosion, or fish populations, the problem is that these systems are too complex and too poorly understood to be reduced to a set of equations whose “solutions” are derived through computer approximation algorithms. More simply, they are wrong, because the assumptions are wrong and the equations are wrong.

      • Geoff Sherrington

        tonyfromct
        Nic Lewis is correcting errors by others, sometimes using the same input data. Too few people are doing that.

  19. Thanks for this blog post; I really enjoyed it and am definitely recommending this blog to my friends and family. I’m a 16 year old with a blog on finance and economics at shreysfinanceblog.com, and would really appreciate it if you could read and comment on some of my articles, and perhaps follow, reblog and share some of my posts on social media. Thanks again for this fantastic post.

  20. “JC note:  …A PDF version of this article is available [here].”
    The link does not work

  21. Pingback: Flawed Climate Model May Cost U.S. Taxpayers A Trillion Dollar | The Global Warming Policy Forum (GWPF)

  22. Refs 7 and 8 return a ‘page not found’ message when clicking – but copying and pasting the URL works fine.

  23. Another solid critique of the recent White House guidance on SCC was submitted prior to its unfortunate issuance. The Coalition letter from CEI and colleagues is here: https://cei.org/sites/default/files/Coalition%20Letter%20-%20CEI%20and%20Free%20Market%20Allies%20Comment%20on%20NEPA%20GHG%20Guidance%20Document%20-%20March%2025%202015.pdf

    Some relevant excerpts:

    In addition to the generic flaws of SCC analysis, specific defects also render the administration’s 2010 and 2013 Technical Support Documents (TSDs)[91] unfit for use in agency cost-benefit analyses:

    1. DICE (Dynamic Integrated Climate Economy) and PAGE (Policy Analysis of the Greenhouse Effect) – two of the three integrated assessment models (IAMs) underpinning the TSDs – contain no CO2 fertilization benefit. As noted above, one recent study estimates a CO2 fertilization benefit of $3.5 trillion during 1960-2011 and projects an additional $11.6 trillion benefit during 2011-2050. It is one thing to dispute those estimates, another to pretend the CO2 fertilization effect does not exist. The DICE and PAGE models are biased by design. As such, they flout federal information quality standards.[92] Those models have no proper place in either regulatory analysis or NEPA review.

    2. The Interagency Working Group chose not to use a 7% discount rate to calculate the present value of future CO2 emission reductions, and not to report separate SCC values for the U.S. domestic economy. Both choices inflate[93] the hypothetical value of CO2 emission reductions and conflict with OMB Circular A-4.[94]

    3. The 2013 TSD does not reassess the 2010 TSD’s climate sensitivity assumptions, borrowed from IPCC AR4. It does not question the DICE model’s revised (lower) estimate of ocean CO2 uptake. Nor does it question the PAGE model’s revised (higher) probability estimate of catastrophic impacts. Recent science indicates that climate sensitivity is lower[95] and catastrophic scenarios less plausible than earlier assessments assumed,[96] and that ocean CO2 uptake is not decreasing.[97]

    4. The 2013 TSD does not question the PAGE model’s implausible assumption that adaptation cannot limit climate change damages once warming exceeds 2°C. A little common sense here would go a long way. As climate economist Richard Tol wrote after withdrawing his name from the AR5 climate change impacts report: “Humans are a tough and adaptable species. People live on the equator and in the Arctic, in the desert and in the rainforest. We survived ice ages with primitive technologies. The idea that climate change poses an existential threat to humankind is laughable.”[98]

    • David Wojick

      All well said but what is truly preposterous is the very idea that we should base big decisions on 300 year projections. This is especially true for technological progress (but also for climate change per se and economic development).

      300 years ago, George Washington was not even born yet. The fact that we have computers, which the folks in 1716 did not have, does not make us any better at predicting the future for the next 300 years than they were then. In fact, given that change is faster now, we are in even less of a position to make such predictions than they were.

      When the concept is absurd the details are irrelevant.

      • David:
        It is truly preposterous to base big decisions with long-term consequences on ignoring said consequences.

        Climate change is a long-term problem.

      • David Wojick

        Richard, it has yet to be determined that human-caused climate change is a long-term problem, and it is extremely unlikely given what we know. It may not even exist. So I reject your basic claim that climate change is a long-term problem.

        But even if it were, the 300 year consequences would still be unknowable. There is a huge difference between ignoring something which is knowable and recognizing what is not knowable. To recognize that the future 300 years hence is unknowable is not to ignore anything. The IAMs are simply making false claims of knowability. Preposterously false, in my view.

  24. I greatly appreciate yet another sharpened-pencil analysis of a pillar of the climate consensus by Nic Lewis. Yet whenever I see mention of climate change’s economic or societal impacts centuries hence, I am reminded of 999-year English leaseholds at the rate of one peppercorn per year. Our capacity for very long-term economic predictions is unimpressive.

    All evidence suggests that a person alive today would not recognize the technologies that will be in use 300 years from now as anything less than magic. To claim that a 300 year projection of the SCC has any economic validity is absurd. Nevertheless, I am impressed with Nic’s willingness to engage the claimants on their own (absurd) turf. Well done, sir.

    • Exactly– for example, a serious study would include factors like the fall of Western civilization, the implosion of the EU and that the dominant languages will be those spoken in places like China, India, Brazil, Russia…

  25. Nic Lewis, thank you for the essay.

  26. What about the LOSSES (or the missed opportunities) associated with NOT EMITTING the carbon dioxide that is directly linked with 87% of the energy used in the world?
    According to this diagram it is 150 US$ (at constant 2005 value) per ton of carbon (or 0.1496 trillion $ per Pg C)
    http://climate.mr-int.ch/images/graphs/gdp_vs_carbon.png

  27. Nic Lewis,

    Thank you for this enlightening post. Well written.

  28. A Court Ruling That Could Save the Planet
    The Planet is doing just fine. It will do just fine with any Ruling.

    This Court Ruling can and will continue to destroy energy production and the Economy.

    All the more reason “I WILL VOTE FOR TRUMP”.
    Democrats and Republicans are against Trump because he does and will oppose their get-rich-at-the-taxpayers’-expense schemes and he is not obligated to them. Way too many people are getting richer with windmills, solar and ethanol while the taxpayers pay and the public debt climbs out of reasonable sight. This has gone on in all administrations for a long time. We need a change. We need something that is opposed by Democrats and Republicans. I say we try Trump this next term. If that does not work, a different outsider the next time. We must stop repeating what has not worked.

  29. David Wojick

    It is extremely suspicious that the range of US SCC discounted dollar damages per ton of CO2 emissions is pretty much the same as the range of proposed carbon taxes. Given the sensitivity (or flexibility) of the IAMs it strongly suggests that they have been tuned to give these supposedly plausible, and certainly politically attractive, numbers.

  30. David Wojick

    The number I have never seen is the cumulative (undiscounted) dollar damages, summed over the next 300 years, for a ton of CO2 emissions today. I suspect this number is absurdly high, because that is the only way the discounted figures can come out meaningfully different from zero.

    It is a simple number and obviously important — total damages per ton. What is it? Has anyone published it?

    • Not to my knowledge and I have done a fairly exhaustive lit search on it and its main subcomponents like crop yields and SLR. Closest ‘bad’ estimate (as in silly wrong) is UK’s Stern report. Way too low a discount rate, way too high a ‘damage function’.

    • I can’t imagine that any economist would agree to drop the discount rate. There would seem to be more than enough shaky assumptions built into SCC calculations already without eliminating discount rates entirely.

      For others on the thread who may not be familiar with these concepts, the discount rate essentially compresses the damage estimates into near future years, at the expense of damages sustained 100, 200 years in the future. In other words, the further into the future you go, the more you “discount” the loss (or gain) relative to today’s value. And the effect becomes more pronounced the larger the discount rate applied.

      For example,

      …at higher discount rates we expect that a greater proportion of the SCC value is due to damages in years with lower temperature increases. For example, when the discount rate is 2.5 percent, about 45 percent of the 2010 SCC value in DICE is due to damages that occur in years when the temperature is less than or equal to 3 °C. This increases to approximately 55 percent and 80 percent at discount rates of 3 and 5 percent, respectively.

      https://www3.epa.gov/otaq/climate/regulations/scc-tsd.pdf
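
      For a numeric sense of that compression (using an assumed flat $100 damage, purely for illustration):

      ```python
      # Present value of $100 of damage occurring n years from now, at several
      # discount rates: higher rates shrink far-future damages towards zero.
      for r in (0.025, 0.03, 0.05, 0.07):
          pv = [round(100 / (1 + r) ** n, 2) for n in (10, 50, 100, 200)]
          print(f"r = {r:.1%}: PV at 10/50/100/200 yr = {pv}")
      ```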

  31. The conclusion of the study (Twitter link) on overcoming buffering volatility imposed by the energy revolution envisaged by Germany’s greenies relates here so well…

    This will make it difficult for Germany to pursue its energy revolution beyond merely replacing nuclear fuel towards a territory where it can also crowd out fossil fuel.

    –i.e., we’ve wasted valuable resources chasing a pipe dream and need a way to spin this so it’s not obvious that an ounce of scientific skepticism is worth a ton of expensive Leftist propaganda.

  32. Pingback: Dialog On Nature | …and Then There's Physics

  33. stevefitzpatrick

    Nic Lewis,
    Very nice post. I am puzzled by the strange depths of the Nordhaus “mixed” and “deep” layers. Could it be that he simply (accidentally) switched the two? The two depths, if switched, still would have problems (not close to the physical system), but are a lot closer to plausible than a very deep “surface layer” and a shallow “deep ocean”.

    • stevefitzpatrick,
      Thanks. Nordhaus’s parameterisation of the 2-box model is unusual, and doesn’t directly specify the ocean layer depths or heat capacities. I’m not at all sure he would have realised what ocean surface and deep layer depths the parameter values adopted in DICE2013Rv2 corresponded to.

  34. stevefitzpatrick

    Nic,
    If you swap K1 for K2 (or tau1 for tau2) in the specification, does that not imply more realistic ocean layer depths?

    • stevefitzpatrick

      Nic,
      I withdraw that last comment; swapping the constants derived from the Nordhaus response doesn’t help. It’s a mess.

  35. Question to Nic Lewis (or any others who can answer) — Nic, as a layman most of your discussion is way over my head. This month, regulators in New York provided a subsidy to existing nuclear units based on a current social cost of carbon of ~$37 per ton (using, I assume, a composite of the 3 models).

    Question: If we used your methodology/assumptions as discussed in today’s posting, the SCC price would change from ~$37 to what? (e.g., a decrease of $10, to ~$27, for existing nuclear units in NY?)

    Thanks.

    • Stephen Segrest

      Yes, the $37 US SCC estimate is based on the three models – a simple average of their estimates, I believe.

      It is difficult to say how much the $37 SCC would change if all three models used 2-box models with what I consider realistic ocean parameters, since I have not run the PAGE and FUND models and they differ from DICE in a number of ways – including as to their damage functions, which have a major impact on SCC estimates. But if all models showed the same ~25% reduction in the SCC as DICE does when using the Boucher & Reddy ocean parameters, then the $37 would fall to ~$28.

    • SS:

      The New York state subsidy to a few nuke plants (@ $500 million per year) is another example of rising energy costs due to climate change policies. I wonder if you (or someone else) could provide a guest post on this topic, as it is a growing issue that few are able to follow?

      The politics of New York’s situation are entertainingly complex (as long as you are not a resident). They must shut down one nuke plant but subsidize another even as they ban fracking for low-carbon methane. All of these individual decisions may be “correct” but it shows what a tangled web climate change can weave.

      http://www.nytimes.com/2016/08/02/nyregion/new-york-state-aiding-nuclear-plants-with-millions-in-subsidies.html?_r=0

  36. Looking at the shape of these curves, it appears the carbon tax tracks what I believe will be the price imposed by the market, which will force fossil fuel prices up as resources deplete.

    I realize my position is an outlier, and I see a lot of optimism in a broad set of publications which unfortunately don’t seem to grasp the magnitude of the problem we face. The reduced availability of fossil fuels implies that by 2050-2075 we will see the yearly CO2 concentration increase starting to flatten out.

    I’m a bit lost when it comes to how to program a carbon cycle model given all the variables, and I see very little discussion about this topic. The carbon cycle model will be critical once emissions plateau and start a gentle decline. One possibility is that atmospheric CO2 concentration will start dropping once emissions reach ~ 40 % of the peak level. This implies there will be no need to reduce emissions to zero, or have “negative emissions” (which I believe are impossible).
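
    To see why such a plateau is at least arithmetically possible, here is a minimal one-box carbon-cycle sketch. The sink strength and emissions path are assumptions chosen purely for illustration; real carbon-cycle models use several time constants:

    ```python
    # One-box carbon cycle: sink uptake proportional to the CO2 excess over
    # pre-industrial. All parameter values are illustrative assumptions.
    k, c, c_pre = 0.01, 400.0, 280.0   # sink strength (1/yr), CO2 (ppm), pre-industrial CO2
    emis_peak = 5.0                    # peak emissions, ppm/yr (assumed)
    for yr in range(2020, 2201):
        frac = 1.0 if yr <= 2050 else max(0.0, 1 - (yr - 2050) / 100)
        e = emis_peak * frac           # emissions ramp down linearly after 2050
        dc = e - k * (c - c_pre)       # net concentration change this year
        c += dc
        if yr % 30 == 0:
            print(f"{yr}: CO2 {c:.0f} ppm, emissions {e:.1f} ppm/yr, dCO2/dt {dc:+.2f} ppm/yr")
    ```

    With these placeholder values the concentration stops rising once emissions fall to roughly half their peak, in the same ballpark as the ~40% figure suggested above.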

    I hope this gets posted, and somebody will be encouraged to discuss the carbon cycle models in the future. They seem to be a critical component of the overall system model.

    • Carbon pricing will not succeed. This explains why: http://anglejournal.com/article/2015-11-why-carbon-pricing-will-not-succeed/

      We need to get over the ‘command and control’ type approach. It’s been failing for 25 years. Surely that’s enough to recognise it’s the wrong approach.

      • Peter, I believe resource depletion will lead to higher prices. This has the same effect as carbon taxes when applied by all nations. The climate debate always ignores the uncertainty in the resource volumes.

        Unfortunately, the “right wing” tends to be cornucopian because it fits the “drill baby drill” motto.

        The left wing is simply interested in income redistribution, de-development, and increased taxation, so of course they discourage any discussion about resources being limited (they discuss it in other settings when it fits their agenda).

        Thus whenever I point out that market forces will eventually lead to higher prices at the consumer end point, I tend to be ignored. What I state doesn’t fit anybody’s agenda.

        My other point was that future CO2 concentrations will be driven in part by the carbon cycle behavior. I see very little discussion about this topic. Since I do believe the market will lead to lower emissions, I’m interested in understanding what that implies long term. I suspect that resource depletion->market prices->lower CO2 concentration and a slightly lower TCR will keep temperatures from climbing beyond the 2 degree threshold. This also depends on methane emissions.

        It seems to me the global warming phenomenon could end up being manageable by using market forces, encouraging technology development, some enforced energy efficiency rules, and possibly a bit of geoengineering.

        We could just use more white paint, crush rocks, do some ocean fertilization, things like that.

      • A carbon tax used with voluntary international “trade clubs” could work.

      • Thus whenever I point out that market forces will eventually lead to higher prices at the consumer end point, I tend to be ignored. What I state doesn’t fit anybody’s agenda.

        The reason it’s ignored is that it’s your belief, not fact. The time until fossil fuels become a serious constraint is a very long way off. Australia has about 700 years of brown coal and 300 years of black coal with current technology. Much more will become available as technology improves. Nordhaus estimated 6000 Gt C (over 20,000 Gt CO2) of fossil fuel could be extracted eventually (as technologies improve).

        If you want to advocate for an achievable solution, start advocating to remove the impediments on nuclear power that have been imposed as a result of 50 years of irrational scaremongering (largely by the Left, since you mentioned political alignment). Nuclear could be 1/10th of current cost if not for the consequences of the anti-nuclear protest movement https://judithcurry.com/2016/03/13/nuclear-power-learning-rates-policy-implications/ . Over the past 30 years it could have avoided 4.5-9 million deaths and 75-170 Gt CO2 emissions. Those are the consequences of the irrational policies that blocked development and deployment of nuclear power since the mid 1960s.

        If you want to make a difference, start learning and advocating for policies that can succeed. Here’s my suggestion as to the policy approach we should be supporting and advocating for: https://judithcurry.com/2016/08/16/cop21-developing-countries/#comment-804539

    • fernandoleanme:

      I hope this gets posted, and somebody will be encouraged to discuss the carbon cycle models in the future.

      Guido van der Werf is an expert on the carbon cycle. Perhaps he could be encouraged to expand on his recent post in response to Murry Salby.

      Looking at the shape of these curves, it appears the carbon tax tracks what I believe will be the price imposed by the market, which will force fossil fuel prices up as resources deplete.

      It’s unclear which carbon tax curve you refer to — although as Nic Lewis pointed out:

      Results from IAMs are used by governments when deciding what carbon taxes to impose and/or levels of emission reductions to target.

      The concept of a carbon tax is attractive precisely because it is more economically efficient than a command-and-control approach. By adding a cost (tax) to carbon sources, the market will respond by increasing efficiencies, reducing consumption, etc. If we are forced to “do something” the carbon tax concept is superior to most, probably all, others.

      However, the “problem” with waiting for market forces to raise the price of fossil fuels enough to slow/stop emissions growth is illustrated by recent history. The price of oil was near $100/bbl for an extended period and while this encouraged conservation to some extent it also stimulated increased exploration, production and improved extraction technologies. Efforts by governments to constrain production directly ultimately failed because new sources were developed.

      You are correct that, eventually, each hydrocarbon source will be depleted. But waiting for that inevitability strains the patience of climate consensus adherents. They want the money, and they want it now.

    • fernandoleanme

      “One possibility is that atmospheric CO2 concentration will start dropping once emissions reach ~ 40 % of the peak level. This implies there will be no need to reduce emissions to zero, or have “negative emissions” (which I believe are impossible). ”

      I’ve done some work on this, using a simple Earth system model, the carbon cycle of which uses observationally-based parameter values. A stable atmospheric CO2 concentration following RCP4.5 or RCP6.0 scenario emissions this century, without requiring a zero or negative level of emissions thereafter, seems likely to be possible if ECS and TCR are fairly low.

  37. Nic: Using instantaneous 2X or 4X runs from climate models to characterize the response time for various ocean compartments in IAMs could be a serious mistake. In a typical 4X run (7.8 W/m2 initial imbalance), the initial rate of warming would be about 1.5 degK/yr, if all the heat remained in a 50 m mixed layer. Within a few years, these scenarios produce a planet with a thin layer of ocean that is roughly 5 degK warmer than before and less dense than it used to be. Therefore the ocean will be more stably stratified and vertical convection (Eddy diffusion?) will need to overcome an unreasonably strong density gradient. A gradual increase in forcing (currently 0.5%/year) will produce less stratification, which should facilitate vertical diffusion, thereby producing even less stratification, and so on. AOGCMs – presumably – reduce the rate of vertical flux against a density gradient.

    Do the time constants derived from 4X runs for a 2-box model do a reasonable job of replicating the temperature vs time output from a 1%/yr forcing run? If not, they shouldn’t be used.

    (The real world is currently burning enough fossil fuel to increase CO2 by 1%/yr, but experiencing only a 0.5%/yr increase in CO2 in the atmosphere due to sinks. There are adjustable time constants in the Bern model that control how fast various carbon sinks saturate. They have been chosen to fit the past, but the past tells us little about saturation.)

  38. franktoo

    Your point is a good one in theory, but it is contradicted by the behaviour of virtually all AOGCMs. They behave almost exactly as linear time-invariant systems, so far as GMST response to forcing is concerned.

    “Do the time constants derived from 4X runs for a 2-box model do a reasonable job of replicating the temperature vs time output from a 1%/yr forcing run?”

    Yes, as stated in the paragraph before Figure 1, they do an excellent job, provided that CO2 forcing is treated as increasing very slightly faster than with the log of concentration – as is supported by recent research and as per the IPCC 1990 formula used by Caldeira and Myhrvold. See their Figure 5 for fits to 1pctCO2 simulations. Ignore the GFDL-ESM2G/M deviations after year 70 – they failed to keep increasing CO2 at 1% pa as specified – and the FGOALS-g2 plot – it looks highly dubious (a number of that model’s runs were withdrawn from CMIP5).
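
    For anyone who wants to experiment, here is a minimal sketch of a 2-box model driven by a 1%/yr CO2 ramp. It treats the forcing as purely logarithmic (so it omits the slight super-logarithmic correction just mentioned), and the layer depths, ECS and coupling coefficient are illustrative assumptions rather than fitted values:

    ```python
    import numpy as np

    def two_box_ramp(d_surf=100, d_deep=1200, ecs=2.9, gamma=0.7, f2x=3.7,
                     years=140, dt_yr=0.05):
        """Surface warming under CO2 rising 1%/yr (illustrative parameters)."""
        rho_cp = 4.1e6                             # J/m3/K, seawater
        c1, c2 = rho_cp * d_surf, rho_cp * d_deep  # layer heat capacities, J/m2/K
        lam = f2x / ecs                            # feedback parameter, W/m2/K
        dt = dt_yr * 3.15e7                        # seconds per step
        n = int(years / dt_yr)
        t1 = t2 = 0.0
        times, temps = np.empty(n), np.empty(n)
        for i in range(n):
            yr = (i + 1) * dt_yr
            forcing = f2x * np.log2(1.01 ** yr)    # logarithmic forcing, 1%/yr CO2
            flux = gamma * (t1 - t2)               # heat flow into the deep box
            t1 += dt * (forcing - lam * t1 - flux) / c1
            t2 += dt * flux / c2
            times[i], temps[i] = yr, t1
        return times, temps

    t, temp = two_box_ramp()
    # CO2 doubles at year 70 of a 1%/yr ramp, so this is a TCR-like number.
    print("warming at year 70:", round(float(np.interp(70, t, temp)), 2), "K")
    ```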

    • Nic: Thanks for the replies. If the rate of vertical transport of heat doesn’t slow down in response to the increasingly stable stratification associated with 5 degC of warming localized at the top of the ocean, then we desperately need models with more sophisticated physics describing heat flux in the ocean!

      The same flux that carries heat from the surface to the deeper ocean also transfers CO2 to the deep ocean sink. That process is also modeled by similar time constants that have been empirically adjusted to fit the past, but may not fit the future.

      This figure from Caldeira & Myhrvold was interesting: “Figure 4. Results for climate feedback parameter (λ, horizontal axis), effective vertical thermal diffusivity (κv, vertical axis), and fraction of climate change (contours) realized (a) 10 years and (b) 100 years after a step-function quadrupling of atmospheric CO2 content.”

      Large differences in thermal diffusivity aren’t correlated with climate sensitivity. In the real world, thermal diffusivity must be related to the dQ used to calculate ECS with energy balance models.

      ECS = F_2X * dT / (dF - dQ)
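
      For scale, with purely illustrative numbers (not values taken from this thread): F_2X = 3.7 W/m2, dT = 0.8 K, dF = 2.3 W/m2 and dQ = 0.6 W/m2 give ECS = 3.7 * 0.8 / (2.3 - 0.6) ≈ 1.7 K.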

      I wonder if the data from Argo could be used to invalidate some of these models.

      • franktoo

        “we desperately need models with more sophisticated physics describing heat flux in the ocean”

        Yes. I believe that one of the problems is that a much finer resolution is required to represent ocean dynamics, compared to representing atmospheric dynamics to the same level of detail.

        “The same flux that carries heat from the surface to the deeper ocean also transfers CO2 to the deep ocean sink. That process is also modeled by similar time constants that have been empirically adjusted to fit the past, but may not fit the future.”

        Yes; indeed, carbon isotope data have been used to calibrate the ocean models employed to estimate ocean heat uptake.

        “Large difference in thermal diffusivity aren’t correlated with climate sensitivity.”

        No, but using a 2-box model and all 33 CMIP5 models for which the necessary data appears available, there is a strong (~ -0.5) negative correlation between ECS and the advective flux F that replaces thermal diffusivity. Its correlation with TCR is even higher (~ -0.65), as one might expect.

        “I wonder if the data from Argo could be used to invalidate some of these models”

        I imagine so. I think it is known that CMIP5 ocean models have significant weaknesses, if not more serious problems.

      • Nic: “No, but using a 2-box model and all 33 CMIP5 models for which the necessary data appears available, there is a strong (~ -0.5) negative correlation between ECS and the advective flux F that replaces thermal diffusivity. Its correlation with TCR is even higher (~ -0.65), as one might expect.”

        Are you referring to the difference between thermal diffusivity (a, where dT/dt = a*d2T/dz2) in one dimension and a heat transfer coefficient (k, where q = k/L)?

        As best I can tell, nothing that happens inside the ocean should change ECS – changing heat flux within the ocean simply changes the rate at which equilibrium warming is approached. TCR, of course, is different. To calculate ECS from current warming, you need to know how much heat is currently disappearing into the ocean. Therefore, if your model predicts too much warming over the instrumental period, you can bring it into agreement with observations by increasing thermal diffusivity. A nice supplement to tuning with aerosol forcing.

      • franktoo:

        “Are you referring to the difference between thermal diffusivity (a, where dT/dt = a*d2T/dz2) in one dimension and a heat transfer coefficient (k, where q = k/L)?”

        I refer to the difference between thermal diffusivity in 1D (more usually designated kappa_v) and the advective mass flux of water from the surface ocean layer to the deep ocean in a 2-box model, designated F in Berntsen & Fuglestvedt 2008.

        “As best I can tell, nothing that happens inside the ocean should change ECS”

        That isn’t quite true in a 3-D AOGCM, since ocean dynamics can redistribute heat and surface temperature changes between regions that have different local climate feedback strengths.

        ” if your model predicts too much warming over the instrumental period, you can bring it into agreement with observations by increasing thermal diffusivity.”

        Yes indeed, although I haven’t seen any evidence that modellers have tuned their ocean models for this purpose.

      • Frank wrote: “if your model predicts too much warming over the instrumental period, you can bring it into agreement with observations by increasing thermal diffusivity.”

        Nic: “Yes indeed, although I haven’t seen any evidence that modellers have tuned their ocean models for this purpose.”

        Frank added: Obvious signs of tuning with aerosols allegedly disappeared between CMIP3 and CMIP5. Above you just told me that the advective flux parameter has a significant (negative) correlation with ECS. Is there a simple way to ask whether ECS can be explained by a combination of these two factors, neither of which should have a large direct effect on ECS (with the exception of your caveat about horizontal redistribution)?

  39. Pingback: The Advancement Of Science | Transterrestrial Musings

  40. Of greater practical concern is the loss of utility if the higher carbon tax that is optimum when using the DICE climate module standard settings is imposed, when the actual climate response is in line with a more realistic 2-box model. For the Berntsen & Fuglestvedt based variant, this utility loss is $1.1 trillion.

    Nic provided the following utility present values:
    DICE_std base case: $2678.6 trillion
    DICE_std with carbon tax: $2707.0 trillion
    DICE_B&F with carbon tax: $2726.0 trillion
    He didn’t provide a value of the DICE_B&F base case.
    DICE_B&F means the DICE model with the Berntsen & Fuglestvedt 2-box ocean layers.

    The DICE_std optimum carbon tax results in a utility increase of $28.4 trillion, which is the difference between the with-carbon-tax value and the base case value. If this causes a $1.1 trillion loss of utility compared to the B&F variant, I assume the difference between the DICE_B&F with-carbon-tax and base case values is 28.4 + 1.1 = $29.5 trillion. Therefore the DICE_B&F base case is 2726.0 – 29.5 = $2696.5 trillion. Please confirm that this is correct, or did I misunderstand?

    • Ken Gregory
      The DICE_B&F base case utility is $2711.7 trillion. This is higher than in the DICE_std base case, since the rise in GMST is lower and thus climate damages are less. The difference between the two base case utilities is greater than the difference between their optimum carbon tax utilities: estimated climate damages (which rise sharply when the GMST increase exceeds 2–3 C) are much larger in the base case than in the optimum carbon tax case, so the lower GMST response in the B&F variant has a greater impact on utility in the base case.

  41. Oops, the blockquote was supposed to be the first paragraph only!

  42. Nic,
    OK, your reply makes sense. The difference between the base case and optimum tax case is $28.4 trillion using the standard DICE settings and is $14.3 trillion using the B&F setting. So, how did you calculate the $1.1 trillion loss?

    • Ken Gregory,

      Could you run some sensitivity analyses starting with the DICE-2013R default settings for the ‘Copenhagen’ policy and varying these parameters:

      1. Damage function
      2. ECS (and TCR?)
      3. RCP
      4. Climate module parameters as per Nic Lewis
      5. Finally, a run with all the above inputs set to what you believe is the most likely value for each.

      (see my chart with 1/2 Copenhagen participation rate upthread: https://judithcurry.com/2016/08/15/abnormal-climate-response-of-the-dice-iam-a-trillion-dollar-error/#comment-804148 )

    • Ken,
      $1.1 trillion is the difference in utility between (a) imposing the standard DICE carbon tax and (b) imposing the [lower] carbon tax that is optimum if the B&F climate response function is used, in both cases taking the true climate response to match the B&F climate response function. That gives an apples-to-apples comparison and isolates the effect of imposing a suboptimal carbon tax.

      The loss is similar (slightly higher) if the true climate response matches the DICE climate module response, in which case the higher DICE carbon tax gives the higher utility. Utility is lower in that case, whichever carbon tax is imposed, than if the true climate response matches the B&F climate response function, since the rise in GMST and resulting estimated climate damages are higher.
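
      To keep the bookkeeping straight, the comparison can be laid out using the utility values quoted in this exchange; the one value not stated directly (the B&F climate response with the DICE-standard tax imposed) is inferred here from the $1.1 trillion figure:

      ```python
      # Utility present values quoted in this thread, $ trillion.
      u = {("std climate", "no tax"):  2678.6,
           ("std climate", "std tax"): 2707.0,
           ("B&F climate", "no tax"):  2711.7,
           ("B&F climate", "B&F tax"): 2726.0}
      u[("B&F climate", "std tax")] = u[("B&F climate", "B&F tax")] - 1.1  # inferred

      print("gain from optimal tax, std climate:",
            round(u[("std climate", "std tax")] - u[("std climate", "no tax")], 1))   # 28.4
      print("gain from optimal tax, B&F climate:",
            round(u[("B&F climate", "B&F tax")] - u[("B&F climate", "no tax")], 1))   # 14.3
      print("loss from imposing the std tax on the B&F climate:",
            round(u[("B&F climate", "B&F tax")] - u[("B&F climate", "std tax")], 1))  # 1.1
      ```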

      • Nic,

        It would be great if you would post a follow up with charts showing sensitivity to the key variable (e.g. the ones I listed above, and also participation rate which I forgot to mention). This could clarify how significant each parameter is.

  43. O/T: It seems to me the damage function (called the impact function by Richard Tol in a comment upthread) is likely to be responsible for the greatest uncertainties in the IAM outputs. I’ve just read this:
    Roson and Sartori, 2016. Estimation of climate change damage functions for 140 regions in the GTAP9 database
    http://www.unibocconi.eu/wps/wcm/connect/5909305e-e73d-48d1-87d4-92f93c64ece4/WP+86.pdf?MOD=AJPERES
    It finds high impacts for 3C and 5C global warming. However, I am concerned that it may be overestimating the negative impacts. For example:

    1. It doesn’t seem to take into account adaptation.

    – During last century, CSIRO (Australia) greatly increased the yields for wheat and made them more resistant to high temperatures and droughts. The grain growing areas expanded to much drier land. It is just one of many examples of what adaptation will do for crops and animals in the future.

    – The world has an effectively unlimited supply of what is inherently cheap, clean energy (nuclear). With cheap energy we can have fresh water for irrigation. It will also supply clean water and sanitation to over a billion more people. This will reduce diarrhea and other diseases. Energy costs for heating and cooling will become much cheaper.

    – We’ll develop better vaccines and altogether better health services.

    2. All health outcomes are negative. It disregards the positives and sets them to 0.

    3. It doesn’t seem to take into account the increased area of land that becomes productive for agriculture in mid and high latitudes as GMST rises.

    4. Changes in tourism seem to be the largest negative impact – however, it seems to me the money not spent on tourism in one country would be spent elsewhere in the global economy (but I am not an economist so I do not fully understand the analysis).

    Without properly allowing for adaptation, this analysis (although very comprehensive) seems to be another rather alarmist assessment.

    I’d welcome comments from Richard Tol and others who have an in depth knowledge of the empirical evidence on which the damage/impact functions are based.

  44. Pingback: Weekly Climate and Energy News Roundup #237 | Watts Up With That?