by Judith Curry
I’ve been invited to write an article on climate uncertainty and risk.
It’s been about 5 years since I’ve written a new article on this topic; this one presents my current perspectives.
This article is in draft form; I will submit it in a few weeks. I would appreciate your suggestions and constructive comments.
CLIMATE UNCERTAINTY AND RISK
Research scientists focus on the knowledge frontier, where doubt and uncertainty are inherent. Formal uncertainty quantification of computer models is less relevant to science than an assessment of whether the model helps us learn about how the system works.
However, in the context of the science-policy interface, uncertainty matters. There is a growing need for more constructive approaches to accounting for the different dimensions of uncertainty in climate change as they relate to policy making: what may happen in the future, and what actions might be appropriate now.
Risk is the probability that some undesirable event will occur; the term often also denotes the combination of that probability and the consequence of the event. Economists have a specific definition of risk and uncertainty that harkens back to Knight (1921). Knightian risk denotes the calculable and thus controllable part of what is unknowable, implying that robust probability information is available about future outcomes. Knightian uncertainty addresses what is incalculable and uncontrollable.
This essay on climate uncertainty and risk integrates perspectives from climate modeling, philosophy of science and decision making under uncertainty, extending previous analyses by the author (Curry and Webster, 2011; Curry, 2011). The objective is to explore the kinds of evidence and reasoning that can help inform decision makers as to whether and how they should use climate models for different applications.
There are numerous categorizations and hierarchies of risk and uncertainty, which are further complicated by different disciplines using terms in different ways (for a summary, see Spiegelhalter and Riesch, 2011). The categorization presented here discriminates among three dimensions of uncertainty in the context of model-based decision support (Walker et al., 2003): nature, location, and level of uncertainty.
The nature of uncertainty relates to whether the uncertainty is in principle reducible, versus uncertainty that is intrinsic and hence irreducible.
- Epistemic uncertainty is associated with imperfections of knowledge, which may be reduced by further research and empirical investigation.
- Aleatory uncertainty is associated with inherent variability or randomness, and is by definition irreducible. Natural internal variability of the climate system contributes to aleatory uncertainty.
The location of uncertainty refers to where the uncertainty manifests itself within the model complex:
- Framing and context identifies the boundaries of the modeled system. Portions of the real world that lie outside the modeled system are a source of uncertainties that remain invisible to the model.
- Model structure uncertainty is uncertainty about the conceptual modeling of the physical system, including the selection of subsystems to include, often introduced as a pragmatic compromise given limited computational resources.
- Model technical uncertainty arises from the implementation of the model solution on a computer, including solution approximation and numerical errors.
- Input uncertainty relates to uncertainty in model inputs that describe the system and the external forces that drive system changes.
- Parameter uncertainty includes uncertain constants and other parameters, which are largely contained in subgrid-scale parameterizations.
- Model outcome uncertainty, also referred to as prediction error, arises from the propagation of the aforementioned uncertainties through the model simulation.
- Uncertainty quantification error arises due to Monte Carlo sampling used in the error quantification procedure itself (for both epistemic and aleatory uncertainties).
The level of uncertainty relates to where the model outcome uncertainty ranks in the spectrum between complete certainty and total ignorance:
- Complete certainty: deterministic knowledge; no uncertainty
- Statistical uncertainty (Knightian risk): outcomes are not known precisely, but precise, decision-relevant probability statements can be provided.
- Scenario uncertainty (Knightian uncertainty or ambiguity): A range of plausible outcomes (scenarios) are enumerated but with a weak basis for ranking them in terms of likelihood.
- Deep uncertainty (recognized ignorance): the scientific basis for developing outcomes (scenarios) is weak; future outcomes lie outside the realm of regular or quantifiable expectations.
- Total ignorance: the deepest level of uncertainty, to the extent that we do not even know that we do not know.
If the policy making challenge is defined in the context of the response of climate to future greenhouse gas emissions, the uncertainty level is characterized as ‘scenario uncertainty.’ In this context, scenario uncertainty arises not only from uncertainty in future emissions but also from uncertainty in the equilibrium climate sensitivity to CO2 (ECS). According to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (IPCC 2013), “there is high confidence that ECS is extremely unlikely less than 1°C and medium confidence that the ECS is likely between 1.5°C and 4.5°C and very unlikely greater than 6°C.” Thus, we know a range of values within which the climate sensitivity is very likely to fall, with values better constrained on the low end than on the high end. The AR5 further states: “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” Thus, although we know quite a bit about the value of ECS, we do not have grounds for associating a specific probability distribution with it.
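To see why, note that the AR5 statements constrain a distribution without determining one: quite different distributions satisfy them simultaneously. A minimal sketch in Python, using two hypothetical lognormal fits (the parameter values are illustrative assumptions, not taken from any assessed study):

```python
import math

def lognormal_cdf(x, mu, sigma):
    """CDF of a lognormal distribution, evaluated at x."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# Two hypothetical fits, both roughly consistent with the AR5 statements
# P(1.5 < ECS < 4.5) >= 0.66 ('likely') and P(ECS > 6) <= 0.10 ('very unlikely'):
candidates = {"narrow": (math.log(2.6), 0.45), "wide": (math.log(2.9), 0.52)}

for name, (mu, sigma) in candidates.items():
    p_likely = lognormal_cdf(4.5, mu, sigma) - lognormal_cdf(1.5, mu, sigma)
    p_tail = 1.0 - lognormal_cdf(6.0, mu, sigma)
    print(f"{name}: P(1.5<ECS<4.5) = {p_likely:.2f}, P(ECS>6) = {p_tail:.3f}")
```

Both fits honor the assessed ranges, yet the probability mass above 6°C differs by more than a factor of two; this is the sense in which the IPCC statements do not license any one distribution.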
If the policy making challenge is defined in the context of the actual evolution of the 21st century climate (such as for vulnerability and impact assessments), then the uncertainty level increases to deep uncertainty. Apart from the issue of unknown future greenhouse gas emissions, we have very little basis for developing future scenarios of solar variation, volcanic eruptions and long-term internal variability. The likelihood of unanticipated outcomes (surprises) needs to be acknowledged.
Epistemology of climate models
The IPCC Fourth Assessment Report provided the following conclusion about climate models:
“There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above.” (Randall et al. 2007)
Based on expert judgment provided by the IPCC, policy makers have been assuming that climate models are adequate for purposes such as setting emissions reductions targets and developing regional climate adaptation plans.
Is this level of confidence in climate model projections justified?
The most common ways to evaluate a climate model are to assess how well model results fit observation-based data (empirical accuracy) and how well they agree with other models or model versions (robustness) (e.g. Flato et al. 2013). Parker (2011) has argued that robustness does not objectively increase confidence in simulations of future climate change. Baumberger et al. (2017) address the challenge of building confidence in future climate model predictions through a combination of empirical accuracy, robustness and coherence with background knowledge. Baumberger et al. acknowledge that the role of coherence with background knowledge is limited because of empirical parameterizations and the epistemic opacity of complex models (Lenhard and Winsberg, 2010).
With regard to empirical adequacy, the climate modeling community is beginning to apply uncertainty quantification (UQ) concepts to climate models (Qian et al. 2016). These endeavors are focused on exploring parameter uncertainty (towards optimizing model parameter selection) and on evaluating prediction error. Additional efforts are identifying which model variables to focus on in prediction error analyses (Burrows et al., 2018) and evaluating models at shorter weather timescales and at the process level.
A broader perspective on this issue is provided by recent scholarship on the epistemology of simulation, including how simulation models are confirmed. Lloyd (2009) describes how observational data are used in the evaluation of climate models and suggests new ways of viewing the significance of these model-data comparisons. However, attempts to confirm climate models through demonstrating empirical accuracy are fraught with challenges: inadequacy of data, selection of variables to confirm and on which time and space scales, a vast and multi-dimensional parameter space to be explored, and concerns about circularity with regards to data used in both model tuning and confirmation.
Parker (2009) argues that known climate model error is too pervasive to allow climate model confirmation to be of use. Parker proposes a shift in approach from confirming climate models to confirming their ‘adequacy for purpose.’ Adequacy-for-purpose assessments involve estimating what the degrees of accuracy of simulations of a wide variety of observed climatic quantities imply about the correctness of uncertain model assumptions and results. Assessing adequacy-for-purpose hypotheses is a daunting task owing to the epistemic opacity of complex models that results in confirmation holism (Lenhard and Winsberg, 2010).
Assessing the adequacy of climate models for the purpose of predicting future climate is particularly difficult. It is often assumed that if climate models reproduce current and past climates reasonably well, then we can have confidence in future predictions. However, empirical accuracy may to some degree be due to tuning rather than to the model structural form. Further, the model may lack representations of processes and feedbacks that would significantly influence future climate change. Hence, reliably reproducing past and present climate is not a sufficient condition for a model to be adequate for long-term projections, particularly for high-forcing scenarios that are well outside those previously observed in the instrumental record.
Given the above concerns, and the unaddressed concerns about uncertainty in model structural form and framing, Katzav (2014) argues that useful climate model assessment does not aim to confirm the model assumptions or prediction outcomes, but rather should aim to demonstrate that the simulations describe real possibilities. A simulation is taken to be a real possibility if its realization is compatible with our background knowledge and that background knowledge does not exclude the realization of the simulated scenario over the target period.
Developing scenarios of climate futures
The possibilistic view regards the spread of an ensemble as a range of outcomes that cannot be ruled out. However, Stainforth et al. (2007) argue that climate models cannot be used to show that some possibilities are not real. Further, owing to structural limitations, existing climate models do not allow exploration of all the theoretical possibilities that are compatible with our knowledge of how the climate system actually works. Some of these unexplored possibilities may turn out to be real ones.
Smith and Stern (2011) argue that there is value in scientific speculation on policy-relevant aspects of plausible, high-impact, scenarios even though we can neither model them realistically nor provide a precise estimate of their probability. A surprise occurs if a possibility that had not even been articulated becomes true. Efforts to avoid surprises begin with ensuring there has been a fully imaginative consideration of possible future outcomes.
When background knowledge supports doing so, additional scenarios can be generated by modifying model results so as to broaden the range of possibilities they represent. Further, the possibilist view extends to scenarios other than those that are created by global climate models. Simple climate models, process models and data-driven models can also be used as the basis for generating scenarios of future climate. These alternative methods for generating future climate scenarios are particularly relevant for developing regional scenarios (for which global models are known to be inadequate) and impact variables such as sea level rise (that are not directly simulated by global climate models).
The potential problem of generating a plethora of potentially useless future scenarios is avoided if we focus on scenarios that we expect to be significant in a policy context. Smith and Stern (2011) make an argument for estimating whether a scenario outcome has a less than 1-in-200 chance of occurring, a threshold used by financial risk managers.
There is also an important role in policy making for articulating the worst-case scenario that would be genuinely catastrophic. The worst-case scenario is judged to be the most extreme scenario that cannot be falsified as impossible based upon our background knowledge (Betz, 2010).
The scientific community involved in predicting future sea level rise has expended considerable effort in articulating the worst-case scenario (e.g. LeBars 2017). Sea level predictions are only indirectly driven by global climate models, since these models do not predict the mass balance of glaciers and ice sheets, land water storage or isostatic adjustments. Hence estimates of the worst-case scenario integrate climate model simulations, process model simulations, estimates from the literature, and paleoclimatic observations.
Integrated Assessment Models
Integrated Assessment Models (IAMs) are widely used to assess impacts of climate change and various policy responses. In assessing the social cost of carbon, IAMs couple an economic general equilibrium model to an extremely simplified climate model. According to expected utility theory, we should adopt the climate policy that maximizes expected utility — the extent to which an outcome is preferable to the alternatives.
The climate science input to IAMs is the probability density function of equilibrium climate sensitivity (ECS). The dilemma is that, with regard to ECS, we are in a situation of scenario (Knightian) uncertainty: we simply do not have grounds for formulating a precise probability distribution. Without a precise probability distribution, no expected utility calculation is possible.
This problem is addressed by creating a precise probability distribution based upon the parameters provided by the IPCC assessment reports (NAS 2017). In effect, IAMs convert Knightian uncertainty in ECS into precise probabilities. Of particular concern is how the upper end of the ECS distribution is treated, whether by assuming symmetry or by fitting a ‘fat tail.’ The end result is that this most important part of the distribution drives the economic costs of carbon using a statistically-manufactured ‘fat tail.’
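The consequence of manufacturing a tail can be shown with a toy expected-damage calculation. Everything here is a hypothetical stand-in: the cubic damage function and both lognormal ECS fits are illustrative choices, not anything used in an actual IAM; the point is only that two distributions sharing a similar ‘likely’ range give very different expected damages.

```python
import math
import random

random.seed(0)
N = 200_000  # Monte Carlo sample size

def expected_damage(mu, sigma):
    """Monte Carlo estimate of E[D(ECS)] for a convex toy damage function
    D(S) = S**3, with ECS drawn from a lognormal(mu, sigma)."""
    return sum(random.lognormvariate(mu, sigma) ** 3 for _ in range(N)) / N

# A thinner-tailed and a fatter-tailed fit, both centered near the middle of
# the assessed 'likely' range (parameter values are illustrative only):
thin = expected_damage(math.log(2.6), 0.45)
fat = expected_damage(math.log(2.9), 0.60)

print(f"thin tail: {thin:.1f}   fat tail: {fat:.1f}   ratio: {fat / thin:.1f}")
```

Because the damage function is convex, the expected value is dominated by the upper tail, so the choice of tail shape, which the underlying evidence does not constrain, ends up driving the calculated cost.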
Subjective or imprecise probabilities may be the best ones available. However, over-precise numerical expressions of risk are misleading to policy makers. Frisch (2013) argues that such applications of IAMs are dangerous, because while they purport to offer precise numbers to use for policy guidance, that precision is illusory and fraught with assumption and value judgments.
Policies optimized for a ‘likely’ future may fail in the face of surprise. At best, policy makers have a range of possible future scenarios to consider. Alternative decision-analytic frameworks that are consistent with conditions of deep uncertainty can make more scientifically defensible use of scenarios of climate futures.
For situations of deep uncertainty, precautionary and robust approaches are appropriate. Stirling (2007) has emphasized that precaution arises as part of the risk assessment, and is not a decision rule in itself. A precautionary appraisal is initiated when there is uncertainty. A robust policy is defined to be one that yields outcomes that are deemed to be satisfactory across a wide range of plausible future outcomes (Walker et al. 2016). As such, robust policy making interfaces well with possibilistic approaches that generate a range of possible futures. Flexible strategies are adaptive, and can be quickly adjusted to advancing scientific insights and clarification of scenarios of future outcomes.
While climate models continue to be used by climate scientists to increase understanding about how the climate system works, most of the investment in global climate models is motivated by the needs of policy makers.
There is a gap between what climate scientists can provide versus the information desired by policy makers. Spiegelhalter and Riesch (2011) state that it is important for scientists to avoid the attrition of uncertainty in the face of an inappropriate demand for certainty from policy makers. Betz (2010) reminds us that the difficulties of the problem must not serve as an excuse for scientists to simplify the epistemic situation, thereby pre-determining the complex value judgments involved.
The root of the most significant problem at the climate science-policy interface lies not in the climate models themselves but in the way in which they are used to guide policy making. Climate scientists have helped exacerbate this problem. Both climate scientists and policy makers need to accept the limits of probabilistic methods in the conditions of ambiguity and deep uncertainty that characterize climate change. Encouraging overconfidence in the realism of current climate model simulations, or intentionally portraying recognized ignorance as if it were statistical uncertainty (Knightian risk), can lead to undesirable policy outcomes.
Smith and Stern (2011) provide this insight into the climate science-policy interface: when asked intractable questions, the temptation is to change the question, slightly, to a tractable one that can be dealt with in terms of probability, rather than face the ambiguity of the original, policy-relevant, question. Science will be of greater service to sound policy making when it handles ambiguity as well as it now handles statistical uncertainty (Knightian risk).
Does this analysis make climate science and climate modeling less relevant to policy making? Not at all, but it does raise questions as to whether the path we are currently on for developing and evaluating climate models (NRC 2012) is the best use of resources for supporting policy making. Exploring alternative model structures is a rich and important direction for climate research, both for understanding the climate system and for supporting policy making. This analysis also emphasizes new challenges for climate scientists to develop a broader range of future scenarios, including worst-case scenarios and regional scenarios.
How climate science handles uncertainty matters.
Betz, G. (2010) What’s the worst case? The method of possibilistic prediction. Analyse & Kritik, 32(1), 87-106.
Baumberger, C., R. Knutti, G.H. Hadorn (2017) Building confidence in climate model projections: an analysis of inferences from fit. WIREs Clim. Change, 8:e454. doi: 10.1002/wcc.454
Burrows, S.M., A. Dasgupta, S. Reehl, L. Bramer, P.L. Ma, P.J. Rasch, Y. Qian (2018) Characterizing the relative importance assigned to physical variables by climate scientists when assessing atmospheric climate model fidelity. Adv. Atmos. Sci., 35, 1101-1113.
Curry, J.A., P.J. Webster (2011) Climate science and the uncertainty monster. Bull. Amer. Meteorol. Soc., 1667-1682.
Curry, J.A. (2011) Reasoning about climate uncertainty. Clim. Change, 108: 723. https://doi.org/10.1007/s10584-011-0180-z
Flato, G., J. Marotzke, B. Abiodun, P. Braconnot, S.C. Chou, W.J. Collins, P. Cox, et al. (2013) Evaluation of Climate Models. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, 741–866.
Frisch, M, (2013) Modeling Climate Policies: A Critical Look at Integrated Assessment Models. Philosophy & Technology, 26, 117–137.
IPCC (2013) Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Katzav, J. (2014) The Epistemology of Climate Models and Some of Its Implications for Climate Science and the Philosophy of Science. Studies in History and Philosophy of Modern Physics, 46, 228–238.
Knight, F.H. (1921) Risk, Uncertainty and Profit. Boston, MA: Hart, Schaffner & Marx.
LeBars, D. (2017) A high-end sea level rise probabilistic projection including rapid Antarctic ice sheet mass loss. Environ. Res. Lett., 12, 044013.
Lloyd, E. (2009) Varieties of Support and Confirmation of Climate Models. Aristotelian Society Supplementary Volume, 83, 213–232. https://doi.org/10.1111/j.1467-8349.2009.00179.x
NRC (2012) A National Strategy for Advancing Climate Modeling. National Academies Press. https://doi.org/10.17226/13430
NAS (2017) Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide. Washington, DC: The National Academies Press. https://doi.org/10.17226/24651.
Parker, W.S. (2009) Confirmation and adequacy-for-purpose in climate modeling. Aristotelian Society Supplementary Volume, 83, 233-249.
Parker, W.S. (2011) When Climate Models Agree: The Significance of Robust Model Predictions. Philosophy of Science, 78(4), 579–600.
Qian, Y., C. Jackson, F. Giorgi, B. Booth, Q. Duan, C. Forest, D. Higdon, Z.J. Hou, G. Huerta (2016) Uncertainty Quantification in Climate Modeling and Projection. Bull. Amer. Meteorol. Soc., 821-824. DOI: 10.1175/BAMS-D-15-00297.1
Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, R.J. Stouffer, A. Sumi and K.E. Taylor (2007) Climate Models and Their Evaluation. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M.Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Smith, L.A. and N. Stern (2011) Uncertainty in Science and Its Role in Climate Policy. Phil. Trans. R. Soc. A, 369, 4818–4841.
Spiegelhalter, D.J. and H. Riesch (2011) Don’t know, can’t know: embracing scientific uncertainty when analyzing risk. Phil. Trans. Roy. Soc. A, 369, 4730–4750.
Stainforth, D.A., M.R. Allen, E.R. Tredger, L.A. Smith (2007) Confidence, uncertainty, and decision-support relevance in climate prediction. Phil. Trans. Roy. Soc. A, 365, 2145-2161.
Stirling, A. (2007) Risk, precaution and science: toward a more constructive policy debate. EMBO Reports, 8, 309-315.
Walker, W.E., P. Harremoes, J. Rotmans, J.P. van der Sluijs, M.B.A. van Asselt, P. Janssen, M.P. Krayer von Krauss (2003) Defining Uncertainty: A conceptual basis for uncertainty management in model based decision support. Integrated Assessment, 4, 5-17.
Walker, W.E., R.J. Lempert, J.H. Kwakkel (2016) Deep Uncertainty. Encyclopedia of Operations Research and Management Science. Eds SI Gass and MC Fu, Springer
Lenhard, J., and E. Winsberg (2010) Holism and Entrenchment in Climate Model Validation. In: Science in the Context of Application: Methodological Change, Conceptual Transformation, Cultural Reorientation, Carrier, M. and Nordmann, A., eds., Springer.
Good discussion of overconfidence in the field. If one has a model with a plausible 600% variation in the value of a crucial variable, the model is probably not yet ready to use for engineering on anything that matters.
It’s worse than a concern with the level of uncertainty. If we’re talking about inputs, then the common assumption that CO2 acts on other molecules to raise their temperature is a model input that’s wholly unconfirmed by experiment.
Recently someone mentioned to me that it was silly of me to question warming by CO2 just because I can’t see it or haven’t seen it, because we can see this same mechanism happening when microwave energy heats food; therefore, the mechanism in question exists. No doubt the mechanism exists; the question is, is the mechanism applicable to CO2 warming in the manner we believe it is? I pointed out that we can graph the warming caused by microwaves in a microwave oven; however, we see no such graph for warming caused by CO2 in its action on molecules of N2 and O2. We need to see this graph to confirm that what we believe about CO2 warming, and what we’re so convinced is true, actually is true. Otherwise it’s no more than an assumption, although in this case it’s been acting as an established and irrefutable fact. So here we have a fundamental input, an essential input, that could be verified, but curiously never has been. This strikes me as a more serious problem than “uncertainty.”
Don132: No doubt the mechanism exists; the question is, is the mechanism applicable to CO2 warming in the manner we believe it is?
Sounds like an easy experimental test can be conducted, doesn’t it. First, you build a large cylinder, say 100 ft in diameter and 1000 feet tall; the top is translucent, the bottom is shaped like the coast of California, say, with a deep pool of water beside a deep extent of land; the surface area of the pool is about 3 times as great as the surface area of the land. At several places inside the tube you place sensitive thermometers. Think about what the side and bottom of the cylinder are made of, and what the apparatus rests upon. The land and the water are not really the “bottom”; rather, a land-like plug of earth takes up a fourth of the volume near the bottom, and water is put in beside it; there has to be enough water to have much greater mass than the eventual atmosphere.
Second, you put in the N2 and O2. You can run experiments in alternation, some with no water but a plug of land in its place, to find the effects of the water.
Note that, for real verisimilitude you might require the cylinder to be at least 30,000 meters tall so that there is a discernible cloud condensation layer. And you might require the outer surface of the cylinder to be insulated so that the only possible exit route for the energy from the sun is radiation out the top of the cylinder.
Third, you illuminate the contents of the cylinder with broad spectrum light like the sun at the top, alternating day and night conditions, until the night-time and day time temperatures are approximately what they are someplace on earth, say Hawaii.
Fourth, you add CO2 to a concentration of 280 ppm, while keeping the pressure at the land and water surface at 1 atmosphere. At equilibrium, the mean temperature of the gas just above the land and water surface will be close to 300 K.
Fifth, you gradually raise the CO2 concentration to 400 ppm, and record the temperature.
I have skirted some issues, such as the material for the cylinder and the cylinder mass. An actual experimenter would have to address those details. Although I called it a cylinder, it would have to be thicker at the bottom than at the top.
There are some obvious questions:
Would an experiment less elaborate than this be adequate to test the hypothesis about CO2 warming of the atmosphere?
Why doesn’t somebody do this?
According to experiments on CO2 and estimates of its effects to date, what would be the expected results of the experiment?
You have (partially) proposed an experiment to measure the phenomenon of bulk atmospheric warming by CO2. That’s not what I asked for. I asked for a measurement of the temperature rise in a volume of N2 and O2 caused by a set amount of CO2 at a set temperature and pressure: the molecular mechanism behind the experiment you propose. We’re not concerned with water vapor at this point.
For example, in climate models, how do they figure how much 400 ppm CO2 heats the atmosphere at 20C and one atmosphere pressure? Do they guess? Is it all theory-based? I would expect there would be a table of all sorts of measurements (and not merely calculations) of temperature rise caused by CO2 at various starting temperatures and pressures. These types of experiments can easily be done in a modest-sized container in the laboratory, and would outline not how the bulk atmosphere is heated by CO2, but simply how a volume of N2 and O2 is heated, by way of confirming the molecular mechanism behind the theory and producing a series of useful data points. I have no doubt that this mechanism exists, but I have doubts that it amounts to much at all, even though the alarmists assert that this effect is powerful.
Don132: That’s not what I asked for. I asked for a measurement of the temperature rise in a volume of N2 and O2 caused by a set amount of CO2 at a set temperature and pressure: the molecular mechanism behind the experiment you propose. We’re not concerned with water vapor at this point.
“We” are actually quite concerned with water vapor at this point.
But, describe an experiment whose result you would accept if it showed the warming of N2 and O2 by CO2 in the presence of radiation from above and ground effects (advection, convection) from below. Don’t ignore the dimensions of the apparatus, the gross energy flows within, and the likely amount of warming that the CO2 might cause.
You’re not listening so there’s no point in continuing this discussion. And no, we aren’t worried about water vapor: has the predicted feedback from water vapor occurred? In any case in order to determine the effect of CO2 it’s best to leave water vapor out and isolate CO2.
If you can’t measure it, then does it exist?
don132: I asked for a measurement of the temperature rise in a volume of N2 and O2 caused by a set amount of CO2 at a set temperature and pressure: the molecular mechanism behind the experiment you propose.
So you did. And I asked you for an experimental setup for such a set of measurements whose result anyone would believe.
Consider: an increase from 400 ppm CO2 to 800 ppm CO2 in a column in excess of 30,000 meters high above every square meter of earth surface is calculated to increase mean Earth temperature by “around” 3C and increase LWIR at the surface by “about” 4 W/m^2. In physical terms, that is a tiny effect (though potentially non-negligible biologically). So imagine trying to warm a comparable cylinder of O2 and N2 about 1 meter tall, illuminated by broad spectrum light from the top with an “Earth like” surface at the bottom. If the theory is correct, how much heating do you expect will result from the addition of the CO2? Can it be measured in a dynamic “atmosphere”? The most likely result, if the theory is true, would be a change too small to measure; what analytical chemists call “below the limit of detection”, and what statisticians call too small to reject the null hypothesis. Would anybody think the results of such an experiment would be informative about the Earth?
As I understand your posts to date, the only part of the mechanism you doubt is the transfer of energy from the excited electron shells of CO2 to kinetic energy in neighboring N2 and O2 by collision. Even though you grant that similarly excited H2O electron shells can thus transfer energy to neighboring foodstuff molecules.
This problem of detecting a small effect plagues pharmaceutical research. A drug that would have an important effect in a population of 100,000,000 people (aspirin, for example, in men aged 35-45; or vitamin C supplements), may show no reproducible effect in a small clinical trial with perhaps 45 people per group.
It isn’t enough to imagine an effect you would like to see (or not see) from an experiment, you have to imagine the effect that you are likely to see in an experiment that you can actually do.
No feedback CO2 climate sensitivity follows directly from Stefan-Boltzmann law
CO2 Forcing is calculated from infrared properties eg Modtran and Myhre(1998)
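Both steps mentioned here can be sketched numerically. A minimal sketch, assuming the widely quoted Myhre et al. (1998) simplified forcing expression and the standard ~255 K effective emission temperature; the function names are mine, for illustration only:

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # Earth's effective emission temperature, K (standard figure)

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Myhre et al. (1998) simplified CO2 forcing, W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Differentiating F = sigma*T^4 gives the no-feedback (Planck) response:
planck = 4 * SIGMA * T_EFF**3          # ~3.8 W/m^2 per K
sensitivity = co2_forcing(560.0) / planck

print(f"2xCO2 forcing:  {co2_forcing(560.0):.2f} W/m^2")  # ~3.71
print(f"No-feedback dT: {sensitivity:.2f} K")             # ~1.0
```

This reproduces the familiar textbook result of roughly 1 C per doubling before any feedbacks; the disputes in this thread are about what feedbacks do to that number, not about this arithmetic.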
Don132: If you can’t measure it, then does it exist?
Hans Erren | July 9, 2018 at 8:11 am |
“CO2 Forcing is calculated from infrared properties eg Modtran and Myhre(1998)”
I don’t doubt the infrared properties of CO2. What I doubt is that these properties affect O2 and N2 as we assume they do. If so, then we should be able to measure directly the temperature change caused by CO2 colliding with N2 and O2– just that, and nothing more.
So I suppose the question is, shouldn’t we be able to directly measure this? And again, has anyone? If not, why not?
matthewrmarler is reading way too much into what I’m asking for.
Don132: If not, why not?
In short, it isn’t as easy as you think to get a meaningful result.
“What I doubt is that these properties affect O2 and N2 as we assume they do”
Well, that’s where you are mistaken: more CO2 narrows a heat venting channel of the atmosphere, so the surface increases temperature and the radiation increases through alternative channels, at least that is the theory.
So CO2 doesn’t heat O2 N2 directly, the hotter surface does.
Hans Erren | July 9, 2018 at 2:49 pm |
“Well, that’s where you are mistaken, more CO2 narrows a heat venting channel of the atmosphere, so the surface increases temperature and the radiation increases through alternative channels, at least that is the theory.
So CO2 doesn’t heat O2 N2 directly, the hotter surface does.”
See the Happer, Koonin, Lindzen amicus brief (2018), question 3: “So any infrared radiation absorbed by CO2 molecules almost instantaneously heats the surrounding air through “inelastic” molecular collisions.” https://tinyurl.com/ycrn8vth Has this heating been measured?
Later on they say that “heat is radiated to the ground by molecules at various altitudes, where there is usually a range of different temperatures.” So has this mechanism (the actual heating of the ground) been measured?
Are we back to “back radiation” then? Nasif Nahle has demonstrated that this does not heat a container. http://www.biocab.org/Experiment_on_Greenhouses__Effect.pdf
And then we have those who propose that the surface temperature increases because the emissions height is increased.
Confusing. I think some experiments are needed.
Hans Erren: So CO2 doesn’t heat O2 N2 directly, the hotter surface does.
Multiple processes are occurring concurrently, in different amounts at different altitudes and densities, at different rates. At low altitudes, advection/convection, evapotranspiration, and absorption of radiation by CO2 followed by collision all heat the nearby O2 and N2. That is part of what makes it hard to measure the rate of energy transmission via collisions.
matthewrmarler | July 9, 2018 at 4:43 pm |
“Multiple processes are occurring concurrently, in different amounts at different altitudes and densities, at different rates. ”
Then let’s keep it simple.
Infrared cooling models are used in climate models to describe the distortion in the atmosphere’s temperature profile predicted by the theory of CO2 warming, distortions such as would cause a tropospheric “hot spot”, the signature of CO2 warming, which has not yet been found in any definitive manner. Yet the Connollys have demonstrated that no atmospheric distortion by CO2 warming can be found. Their analysis of balloon data was a test of the validity of infrared cooling models. So, if things are happening as we suppose, then let’s get out there and show how the atmosphere’s temperature profile is distorted by CO2, as predicted, and contrary to the Connollys’ findings. The Connollys’ finding was also corroborated by Christy’s findings on tropospheric temperatures.
There. Simple. Is it happening as we suppose it’s happening? Or not?
I think we’ve confused ourselves with way, way too much reliance on modeling rather than direct measurements.
I’ve made my point, I don’t need to keep going back and forth, it gets old real fast.
Don132: Then let’s keep it simple.
The effect of CO2 is small, and measuring it in realistic settings is hard.
Don132, you may find the analogy with a window useful: if I close a window or draw a curtain, that window or curtain does not heat the room; instead it prevents the room from leaking heat. In the same way, the closing CO2 infrared window is preventing heat loss from the Earth’s surface.
Hans Erren | July 10, 2018 at 4:25 am |
“[Y]ou may find the analogy with a window useful, if i close a window or draw a curtain, that window or curtain does not heat the room, instead it prevents the room from leaking heat.”
If I have a room that’s open on all sides, and I put a small curtain (the infrared bandwidth that CO2 absorbs) at the top of the room, that’s hardly going to prevent heat from leaving the room.
If you had a “window” that could close off convection to the upper atmosphere that might be more plausible.
Doesn’t any heating of the surface or the atmosphere immediately convect upwards? Is any heat really “trapped”?
Is what we assume really happening? How do we know?
Do we jump from radiative balances to the assumption of “temperature change in bulk atmosphere”? Is the assumption justified, and is it experimentally confirmed?
Don132, I’m sorry but your last reply is nothing more than qualitative arm waving. You are entitled to your opinion but if you can’t make a quantitative rebuttal then our discussion ends here.
Don132: Yet the Connollys have demonstrated that no atmospheric distortion by CO2 warming can be found.
What’s in their set-up and procedures that you find especially compelling? Especially sensitive thermometers? A realistic atmosphere model (what I called “verisimilitude”)? The null result of the Michelson-Morley experiment was believed because the experimental method used an exceedingly accurate measurement of the speed of light (among other details). Null results for the benefits of aspirin in preventing heart attack and stroke were much less compelling because the studies had low power to detect clinically relevant effects.
Christy and others have shown that model projections don’t match observations; I suppose this would count as a quantitative rebuttal to the theory of CO2 warming. No tropical hot spot: another quantitative rebuttal to the idea that CO2 is actually doing anything to the atmosphere. The Connollys’ balloon analysis corroborates Christy et al.’s findings. Nahle’s experiment has shown no effect from “back radiation.” Now, Nahle may be wrong, but to prove it we need to replicate the experiment instead of declaring that it can’t be right. We have no evidence that the action we suppose CO2 is performing (heating the atmosphere) is actually happening. We have evidence of CO2 absorbing and emitting infrared around 15 micrometers: is that the same thing as heating the atmosphere? If it is, prove it through controlled experiments instead of declarations. I have no doubt that bright minds can devise and execute the requisite experiments that break down the alleged mechanisms and test them.
So I don’t know the answers to any of this for certain, but what I do know for certain is that it’s very easy to trick ourselves into believing things that “must” be true but are not. Maybe I’ll tell you my “black dog” story sometime; perhaps too many people are too set in their ways and their theories? I would argue that climate model uncertainty is based on over-reliance on theory and models and under-reliance on hard experiment to test our assumptions. I think we easily get caught in self-consistent paradigms and take offense when these are challenged, mistaking self-consistency for hard-core experimental evidence.
We could drill a 2 meter diameter, 1 km deep well outside Fairbanks, put casing inside, with four small pipes set on the side to carry gases as well as a heating system and pressure/temperature sensor strings. Shine sunlight using mirrors from the top, and take measurements. These measurements can be used to verify a very fine scale model (say with 1 mm cells).
I think the hardest thing to accept in this discussion is the possibility that an effect potentially of biological importance is small in physical measurement terms, and hard to measure.
A balloon? What is the expected size of the effect of CO2 and what is the probability that it will be disclosed by the measurement apparatus?
I agree that the effect of CO2 is probably very small and hard to measure. If it’s so small, how can it have any real impact and overwhelm things like convective currents, or lapse rate functions?
But, my argument is measure what you can and extrapolate from there. If the question is, does CO2 excitation by infrared affect the translational energy of N2 and O2, we may not be able to detect that in a 3-meter cubed enclosure at 400 ppm, so maybe we use 5000 ppm? What would that tell us? If it’s still too small to measure then I say everyone go home and find some real problems to worry about.
Same rough setup if we want to know how CO2 back-radiation is heating a surface compared to a non-CO2 atmosphere. We should be able to measure it directly, without any assumptions whatsoever! In a 3x3x3 meter box! If it ain’t happening then it ain’t happening. If it is, then let’s see it.
We’re talking about basic mechanisms that should be measurable. If we constantly say that it’s real but we can’t measure it, then we’re well on the road to pseudoscience.
So the certainty problem is a secondary issue, IMHO.
Don132: I agree that the effect of CO2 is probably very small and hard to measure. If it’s so small, how can it have any real impact and overwhelm things like convective currents, or lapse rate functions?
I don’t think it is said to “overwhelm” anything. It may be important if it affects biological organisms. A 6F increase on a base temp of about 100 F is also small on the physical scale, but possibly of biological significance. A lot of time went into developing reliable thermometers with sufficient sensitivity. You wouldn’t use a cooking thermometer to monitor a fever.
Will a 4C increase increase drought intensity and duration in California and the Sahel? I don’t expect doubling of CO2 concentration to effect a 4C increase in global mean temp, but I think your arguments are superficial and ignorable.
Re: “Infrared cooling models are used in climate models to describe the distortion in the atmosphere’s temperature profile according to the theory of CO2 warming, distortions such as would cause a tropospheric “hot spot” that’s the signature of CO2 warming”
Please stop spreading fabrications you saw online.
The hot spot is not the signature of CO2-induced warming. It is a sign of the negative lapse rate feedback, resulting from the release of latent heat by condensing water vapor in the tropical upper troposphere. It occurs with any substantial warming of the tropical near-surface (especially warming above tropical oceans), regardless of the cause of that warming.
“In the tropics, moist thermodynamic processes amplify surface warming […]. Such tropical amplification occurs for any surface warming; it is not a unique signature of greenhouse gas (GHG)-induced warming, as has been incorrectly claimed (Christy 2015)”
(Page 383 of: “Comparing tropospheric warming in climate models and satellite data”)
(Page 707 of: “Climate change 2001: The scientific basis; Chapter 12: Detection of climate change and attribution of causes”)
Re: “and which has not yet been found in any definitive manner”
The hot spot has been found multiple times. So you can drop that myth of yours.
In satellite data:
#1 : “Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends”
#2 : “Temperature trends at the surface and in the troposphere”
#3 : “Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies”, table 4
#4 : “Comparing tropospheric warming in climate models and satellite data”, figure 9B
In radiosonde (weather balloon) data:
#5 : “Internal variability in simulated and observed tropical tropospheric temperature trends”, figures 2c and 4c
#6 : “Atmospheric changes through 2012 as shown by iteratively homogenized radiosonde temperature and wind data (IUKv2)”, figures 1 and 2
#7 : “New estimates of tropical mean temperature trend profiles from zonal mean historical radiosonde and pilot balloon wind shear observations”, figure 9
#8 : “Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations”, figure 3 and table 1
#9 : “Detection and analysis of an amplified warming of the Sahara Desert”, figure 7
#10 : “Westward shift of western North Pacific tropical cyclogenesis”, figure 4b
#11 : “Influence of tropical tropopause layer cooling on Atlantic hurricane activity”, figure 4
#12 : “Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim”, figure 23 and page 351
Since you did not specify a wavelength, Eunice Foote’s work in the 1850s will do
Otoh there are lots of YouTube videos showing mid IR absorption of co2 followed by heating of air.
The process is called V-T (vibrational-to-translational) transfer. There is lots of experimental and theoretical work. Try the search string
V-T transfer collision dynamics
Yes, there is something fundamentally wrong with the consensus understanding/description of the greenhouse effect. It is usually something like this:
The cooling to space by surface radiation is inhibited by gases that absorb radiation and essentially “trap the heat” from directly radiating and escaping to space. However, N2 and O2 do the same essentially – they “absorb” the heat from the surface by convection, which would otherwise be radiated directly to space by the surface. Furthermore, N2 and O2 cannot radiate to space as easily as CO2, water vapor and co.
The heat that enters the Earth near the equator is transported by convection to 23° north and 23° south, where it is radiated out from the deserts.
“then the common assumption is that CO2 acts on other molecules to raise the temperature of those molecules is a model input that’s wholly unconfirmed by experiment.”
What assumption is that? If you are claiming it is “wholly unconfirmed by experiment”, you need to say what it is, and why you think it is unconfirmed.
The largest uncertainty is in emissions: they could be anywhere in the 1000-8000 GtCO2 range between now and 2100. To cancel out this uncertainty, the most useful policy-relevant measure is GtCO2 per degree. For example, I estimate 2000 GtCO2 per degree C; this allows for natural sinks being more effective than 50% for lower emission scenarios, making the relationship almost linear instead of logarithmic. For round numbers this suffices as guidance.
As for ECS uncertainty, from observations alone, with no models, we can get an effective TCR quite accurately that serves as a lower bound of ECS. The effective TCR whether you start in 1750 or 1950 is 2.3 C per CO2 doubling with quite small error bars. Add a little for ECS to effectively ~3 C per doubling which goes into the 2000 GtCO2 per degree number. A range of possible emissions (from a 2% annual reduction rate to a continuation of our 2% growth rate) spans about 3 C in warming. This is useful to quantify how important the policy going forwards is.
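The round-number guidance above amounts to one division. A minimal sketch using only the figures stated in the comment (2000 GtCO2 per degree, 1000-8000 GtCO2 emitted by 2100); these are the commenter’s assumptions, not an endorsed estimate:

```python
# Commenter's linearized rule of thumb: ~2000 GtCO2 of cumulative
# emissions per degree C of warming (an assumption, not a consensus value).
GT_PER_DEGREE = 2000.0

def warming(emissions_gtco2):
    """Additional warming in deg C implied by cumulative emissions."""
    return emissions_gtco2 / GT_PER_DEGREE

# The stated emission range between now and 2100:
low, high = 1000.0, 8000.0
print(f"{warming(low)} C to {warming(high)} C")  # 0.5 C to 4.0 C
```

On these numbers the stated emission range alone spans about 3.5 C of warming, which is the comment’s point that emissions, not sensitivity, dominate the forward-looking uncertainty.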
“The effective TCR whether you start in 1750 or 1950 is 2.3 C per CO2 doubling with quite small error bars.”
That’s not the effective sensitivity, that is the apparent sensitivity; the effective sensitivity is only 1.3.
2.3 C is the effective TCR because the other anthropogenic factors just add in proportion. Unless they cease to add on as much in the future, you can’t just ignore them when doing projections. There is no reason to believe that proportionality will not hold, or even grow larger as aerosols reduce. Ignoring them since 1950 would have severely underestimated the warming we got. You have to account for them, otherwise you just end up with wrong projections.
Which “other anthropogenic factors” are you referring to? The low ECS above includes all anthropogenic factors; it’s the non-anthropogenic factors, in the form of stadium waves, that create a high apparent climate sensitivity, resulting in an overestimate of warming for every scenario until 2100.
Lewis and Curry extract the CO2 from other GHGs and aerosols. When you add those back in, because they have been changing in proportion, you have to add about 50% on to their number. This is why, when you actually plot CO2 with temperature, you derive 2.3 C per doubling, nothing like what they say.
You are repeating yourself, but not explaining; can you please elaborate? Because I think you are mistaken: in resistivity measurements there is also a thing called apparent resistivity, which has nothing to do with effective resistivity.
I believe the reference is to methane. From a policy standpoint methane is a really different gas. Maybe the IPCC will want us to change to a chicken and noodle diet and kill all the world’s termites.
Hans, you need to look at Lewis and Curry and see why they get a low value that does not explain all the warming we have seen since 1950. It falls short by almost half if you use the post-1950 CO2 change with their number. Go figure why that is. I explained already.
I think you should use a better definition of “risk”: the frequency and amplitude of variances from the norm. In simple good-bad terms, it’s not the odds something bad will happen. It’s how often things better OR worse than average *have* happened and how much better or worse.
Further, risk applies not only to good and bad, but to hot and cold, wet and dry, high or low, etc.
I think you’ll find this makes “risk” a more useful concept, in what promises to be a marvelous essay.
Thanks for hanging in there, for real science.
“Formal uncertainty quantification of computer models is less relevant to science than an assessment of whether the model helps us learn about how the system works.”
One has to learn how the system works before one can hope to create a realistic model.
“Aleatory uncertainty is associated with inherent variability or randomness, and is by definition irreducible. Natural internal variability of the climate system contributes to aleatory uncertainty.”
The mantra of ‘internal variability’ stops people even looking for how the Sun could be driving the AO/NAO, the AMO, and ENSO.
Is it bad table manners to mention that our greatest climate risk going by historical conditions is from low solar periods?
Very good points.
“One has to learn how the system works before one can hope to create a realistic model”.
IMO a very good place to start is the temperature anomalies. They are simply accepted: just numbers that appear on a daily-to-monthly basis without question. Only the accuracy of the measurement process is queried, along with potential historical manipulation.
What causes the very big swings in anomalies, in both hemispheres, of up to one degree C and back again over a few days? Neither the sun nor CO2 has that immediacy of influence.
If I were reporting to a board of directors with a sales and business plan for the future, and I could not explain very large spontaneous movements, what value would my model forecast of future prosperity have? I would be told to go away and find the reason, or be replaced.
Understand the driver or key influence over the anomalies and you get closer to understanding the drivers of climate. Understand the daily detail and the mechanisms that control it before anyone can paint the big picture.
What you say is essentially correct. I have been regularly forecasting such anomalies with solar-based forecasts, like the heatwave of March 2012, forecast to start around 6-7 March, or the cold of Jan-Feb 2014, forecast to start around 7 Jan 2014.
But the elephant in the climate room is how these shorter-term atmospheric anomalies then drive the ocean modes over time, and how the ocean phases then drive changes in cloud cover. From the solar frame of reference, ocean-phase and cloud-cover changes are decidedly powerful negative feedbacks.
The standard model has no frame of reference; the ocean phases are regarded as internal and unforced. This specious model of the climate system only leaves room to evaluate changes in non-condensing greenhouse gases, which are then falsely attributed with positive feedback responses in cloud cover, which are in reality negative feedbacks to changes in solar forcings via the ocean phases. It serves as an accounting game rather than useful climate modelling science for the future of humanity.
Judith, excellent article. By coincidence I am working on related material more broadly; the title will be “The Power of Uncertainty in Law, Science, Negotiations, Religion and Life.” My stories show the power of uncertainty as your compass.
“Certainty breeds arrogance and hubris. It smothers nuances and complexities. It blinds us to possibilities.”
https://heleo.com/the-power-of-uncertainty/16579/s . I believe “The discipline of science is uncertainty and doubt unlike religion and politics” – I am sure some famous person said that.
What about adding the proven uncertainty of global temperatures to your piece? See Taken by Storm: The Troubled Science, Policy, and Politics of Global Warming by Christopher Essex and Ross McKitrick (356 pages), writers with most outstanding credentials in statistics and math, unlike most climate scientists. It seems the IPCC and alarmists place far too much emphasis on the fictional single global temperature statistic, which is surely most uncertain globally.
Of course there is no global temperature, just as there is no global weather. It is a made-up statistic. Therefore when experts in statistics and math point out the foibles of climate scientists as statisticians and math gurus, I am impressed.
“Even though global warming is expressed as a single figure – the average temperature rise of the whole planet’s surface – the effect will not be spread evenly. Higher temperatures, fresh water shortages, higher sea levels and extreme weather events will each affect regions differently.
A region’s vulnerability will depend not only on the nature and level of climate change, but on the capacity of local systems and populations to adapt to change.”
Physical, mathematical and observational grounds are employed to show that there is no physically meaningful global temperature for the Earth in the context of the issue of global warming. While it is always possible to construct statistics for any given set of local temperature data, an infinite range of such statistics is mathematically permissible if physical principles provide no explicit basis for choosing among them. Distinct and equally valid statistical rules can and do show opposite trends when applied to the results of computations from physical models and real data in the atmosphere. A given temperature field can be interpreted as both “warming” and “cooling” simultaneously, making the concept of warming in the context of the issue of global warming physically ill-posed. Short title: Global Temperature?
The climate therefore is all about TEMPERATURE FIELDS, not a single temperature number.
Right. And imaginary numbers don’t exist either, yet they are both useful concepts. It is clear that the Earth can exist in colder or warmer states, and that water solidifies at 0°C and forms huge ice-sheets during glacial states. So indeed the planet has an average surface temperature. It is just that it is not possible for us to determine it. But just an approximation is extremely useful, and since we are better at measuring differences than absolute values, we settle for temperature anomaly.
Saying that it is something that doesn’t exist or can’t be determined doesn’t make it less useful. That is why we use it, or the imaginary numbers.
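The point above about measuring differences rather than absolute values can be shown with a toy example; the station values below are invented purely for illustration:

```python
# Two hypothetical stations recording the same regional warming but with
# different absolute baselines (e.g., different altitudes).
station_a = [15.0 + 0.02 * y for y in range(10)]
station_b = [ 8.0 + 0.02 * y for y in range(10)]

def anomalies(series):
    """Departures from the series' own reference-period mean."""
    base = sum(series) / len(series)
    return [round(t - base, 6) for t in series]

# The absolute readings disagree by 7 C, yet the anomaly series coincide,
# which is why differences are easier to estimate than absolute values.
print(anomalies(station_a) == anomalies(station_b))  # True
```

Each station is compared against its own baseline, so systematic offsets between stations drop out; only the shared change survives.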
” imaginary numbers don’t exist…”
You seem to have a misconception about imaginary numbers . It is probably due to their unfortunate name.
Gauss proposed that +1, -1, and the square root of -1 be called direct, inverse and lateral units instead of positive, negative and imaginary.
Imaginary numbers have been proven to exist in that they present solutions to naturally occurring problems. Alternating-current circuits in electrical engineering, for example, cannot be solved without imaginary numbers.
The global temperature of the Earth is an arbitrary number determined in different ways for different times; in that sense it is imaginary, because it is not a directly measurable quantity. Whether it is useful or not remains to be determined, and is it useful for science or for politics?
It must have been warmer if the tree lines were further up the mountains and further North, but how much?
I believe global temperature is a political number used to prove a theory: that there is a direct correlation and causation effect between CO2 levels and global temperature.
The output of this hypothetical equation is the climate sensitivity.
Assuming it could be determined from the thousands of variables in a climate model, let alone the real-world climate, which is the most complicated system we know.
Who is to say it is a constant ?
From your series, I would guess it is variable.
Thanks for your very informative series.
I have long been intrigued by some of the estimates in a variety of papers, in terms of the level of uncertainty and degree of precision of those estimates. At random I just looked at a couple of papers out of hundreds that I’ve bookmarked. The following don’t represent the worst or extreme; they were literally the first ones I came across.
0.19 ± 2.17 mm/yr; 0.60 ± 4.38 mm/yr; 0.0014 ± 0.0006 mm/yr
When the levels of uncertainty are multiples of the initial estimate how valuable is that information? When the estimates are of such infinitesimal amounts how much confidence does that engender?
This is not a criticism of the individual scientists or their attempts at doing science. I’m just questioning if the answers that are being sought are out of reach in spite of the most laudatory attempts. Are we fooling ourselves that we can gain the knowledge we seek?
I guess you were unable to get anything from the article. Did you even read it, or did you just come to comment?
We calculate we have emitted 454 GtC between 1870 and 2014. As we have discussed, our best estimates indicate fossil fuel production should reach its peak over the next few decades. If true, we face somewhere around 1000-1500 GtC total unless we find another way of producing CO2.
But you should know that the effect is logarithmic and decreases in proportion. If 454 GtC produced an increase of 120 ppm, 500-1000 GtC more shouldn’t take atmospheric CO2 beyond 130-250 ppm above present levels. And those 120 ppm have been accompanied by an increase of ~ 0.9°C of which only a part is likely due to the increase in CO2, since temperature has been rising for the past 350 years.
As you can see the uncertainty does not add. It multiplies. We are not able to predict in a meaningful manner the warming that will be produced in the 21st century. Could be as little as zero or as much as 2°C. However we have seen in the past 30 years that no country has been able to transition to a lower carbon economy. Renewable energy growth has only supplied part of our energy growth. There is no reason to be optimistic about decarbonization. We clearly should plan for the scenario where we continue using fossil fuels until we can’t, because that is the most probable one right now.
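The arithmetic in this comment can be sketched as follows, keeping the commenter’s own figures (454 GtC corresponding to ~120 ppm) and treating the 1.5 C-per-doubling sensitivity as an illustrative assumption, not an established value:

```python
import math

# Ratio implied by the comment: 454 GtC emitted (1870-2014) raised CO2 by ~120 ppm.
PPM_PER_GTC = 120.0 / 454.0

def extra_ppm(gtc):
    """ppm rise implied by further emissions, assuming the same airborne ratio."""
    return gtc * PPM_PER_GTC

def extra_warming(gtc, sens=1.5, c_now=400.0):
    """Warming from today's ~400 ppm, with an assumed sensitivity per doubling."""
    return sens * math.log2((c_now + extra_ppm(gtc)) / c_now)

for gtc in (500.0, 1000.0):
    print(f"{gtc:.0f} GtC -> +{extra_ppm(gtc):.0f} ppm, +{extra_warming(gtc):.1f} C")
```

Straight proportionality gives 132-264 ppm for 500-1000 GtC, close to the comment’s 130-250 ppm range, and the logarithm is what flattens the implied warming at the high end.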
Your comment leaves very little room for the huge uncertainty we face.
This was an answer to Jim D that didn’t appear where I expected.
What I got out of the article was that it misses the biggest uncertainty completely, which is the total emissions between now and 2100, uncertain to a factor of five. As a result, we are the source of our main risk. TCR is very easy to estimate from observations and is only uncertain to 20% when computed that way, since the historical records constraining its value are so long. When you multiply something that is less uncertain by something that is more uncertain, the greater uncertainty dominates, and we can’t ignore it. Also, I accounted for the effect being logarithmic, and have additionally estimated that at the lower end only about 20% of emissions stay in the atmosphere by 2100 due to natural sink rates. This brings it back to more linear by reducing the effect at the low end.
Also, before 1900, the main cause of warming was land-use change, partly acting via CO2 growth, and land-use change still accounts for up to 20% of the total CO2 growth.
Under the assumption that the warming is due in its entirety to CO2. Assumption that is most likely wrong.
Another assumption. It was probably the main cause of the CO2 increase, not of the warming.
CO2 forcing easily dominates the net forcing total, which is why the warming follows the CO2 forcing so well over time, even as its rate of change has tripled since 1950. Opponents don’t like this, but can’t come up with their own numbers on time-varying forcing components, which is very telling.
Oh, but it doesn’t. CO2 forcing has grown hugely since the 1950s, while temperature has been growing at a similar rate since the 1900s.
We already discussed this sometime ago. So it is really an assumption that the warming is dominated by CO2. And a likely incorrect one. Coming up with numbers for it doesn’t mean in any way that those numbers are correct.
I think you can see that 75% of the change has occurred since 1950, which is also when most of the CO2 forcing change has occurred. This component is a degree superimposed on a natural variability with only a tenth of a degree amplitude, and the 30-year temperature shows no sign of slowing down now. This only surprises those who don’t know its cause.
Not true. 1900-2000 warming is not significantly different from linear increase. It is actually ± 0.2°C from linear. 1900-2000 CO2 increase however is significantly different from linear with a clear break point at ~ 1960.
Not true. Since Feb. 2016 the 13-month average temperature has lost over 0.2°C. This is such a big amount compared to the 1900-2000 warming that it is certainly affecting the rate of warming. Almost all the temperature increase post 2002 has been erased.
This is climate, a 13-month average means nothing. It is not even longer than a solar cycle that you have to average out at the minimum.
Also, how do you even explain a linear ramp if not with a forcing change? Since 1980 the CO2 forcing change has been 0.3 W/m2 per decade. This is a rate that drowns out natural decadal fluctuations like those from the sun.
There are forcings that we can’t explain. How do you explain that the medieval warm period was as warm as the mid-20th C according to proxies? Clearly it wasn’t CO2.
Proxies were not global, and you can’t rule out solar activity. The mid-20th century had more solar activity than average for sure.
Well, that’s exactly my point. Changes in TSI are just too small to produce a change of climate, yet the paleo record screams that prolonged low solar activity strongly correlates to colder, drier climate. It is clear that the solar effect is underestimated in models at the expense of CO2 being overestimated.
You assume that our knowledge and accounting of climate forcings is adequate. There is no evidence that such an assumption is correct, and there is circumstantial evidence that it is not.
Solar forcing is currently lower than average, and the temperatures are at a high, so we can rule that one out.
You are assuming how that one should work. Solar forcing is currently lower than average, so that means cooling, and cooling we are having.
Not true. Since Feb. 2016 the 13-month average temperature has lost over 0.2°C. This is such a big amount compared to the 1900-2000 warming that it is certainly affecting the rate of warming.
So much for your nose.
to the end of the 15-16 El Niño, 2016.5 – 0.175327°C per decade
to 2017 – 0.173538°C per decade
to 2018 – 0.183412°C per decade
to present – 0.189184°C per decade
The 30-year trend has grown larger during your puny, meaningless temp drop. Why? The higher the tower, the bigger the dive. This was explained to somebody – Walter – on WUWT, but nobody would listen to the guy. It was hilarious. You need a huge temp drop to nullify the warming, caused by the enhanced greenhouse effect, that killed the PAWS. There has not been one.
to the end of the 15-16 El Niño, 2016.5 – 0.19495°C per decade
to 2017 – 0.192631°C per decade
to 2018 – 0.209041°C per decade
to present – 0.211715°C per decade
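The point that a short dip barely moves a 30-year trend can be checked with a toy least-squares calculation. This is a sketch on a hypothetical synthetic series, not the actual datasets quoted above:

```python
import numpy as np

def trend_per_decade(years, temps):
    """Ordinary least-squares slope, expressed in degrees C per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

# Hypothetical 30-year series warming at exactly 0.19 C/decade
years = np.arange(1989, 2019)
temps = 0.019 * (years - years[0])
base = trend_per_decade(years, temps)

# Subtract a 0.2 C dip from the last two years, like the quoted drop
dipped = temps.copy()
dipped[-2:] -= 0.2
after = trend_per_decade(years, dipped)
# The 30-year slope moves by only a few hundredths of a degree per decade
```

Even a 0.2°C drop at the end only shaves about 0.025°C/decade off the 30-year slope, which is why the trend survives the dip.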
The best way to know what temperature is doing is to join the minima with a line.
It works better because about every 20 years we get a huge El Niño, causing too big a deviation in the maxima and also affecting the central measure. Since El Niños don't affect the minima, the minima give a clearer picture.
Your problem is that the CO2-hypothesis does not explain the clear inflection points marked by the red line, and the red and blue lines are clearly divergent.
The evidence is unsupportive of the CO2-hypothesis as currently formulated and coded in the models. It needs a serious re-appraisal, but the climatariat is too invested in it to do it.
That is not a significant change in gradient, because it does not show up in 30-year temperatures. The reason you see these things is solar variation; the last 15 years have been low.
Precisely. If solar is responsible for an important part of the warming, we can cancel out the alarmism about climate change.
Solar forcing cannot be much higher than it was in the second half of the 20th century, but it has a lot of room going down. And fossil fuels are reaching peak production in a short time. We have already seen the worst global warming can do. It is very close to being defeated. Afterwards we will have to deal with global cooling, perhaps in the 22nd or 23rd century.
So, CO2 and GHGs have had a forcing change of >1 W/m2 since 1980, and you want to worry about a solar change of -0.1 W/m2, and you don’t know whether it will go up or down next. It’s already as low as it has been in a century. The more likely direction is therefore up, adding to the GHG effect that increases by 1 W/m2 every few decades. The relative sizes of these trends are important to pay attention to.
The solar forcing only takes into account changes in TSI, which are small, but it does not take into account changes in ozone, particle precipitation, or electric and magnetic changes, whose effects are very poorly known.
While decisions cannot be based on what is unknown, it is unwise to base decisions on a known poor understanding.
There are always unknowns. That way always leads to frozen inaction, no exceptions. There is no evidence that the sun has done anything special at any time.
It shows you haven’t read much paleoclimatology. It is full of evidence on the correlation between low solar activity and climate worsening. A lot of paleoclimatologists are convinced because they see the evidence.
For example at 2700 BP:
Plenty of evidence.
The sun can do tenths of a degree, you show. It can't do multiple degrees, which is what we see now and will soon see from GHGs. Obviously you don't believe adding multiple W/m2 from GHGs can ever be a dominant forcing factor, even though we see warming from tenths of W/m2 in a solar cycle, or short-term cooling from similar values from volcanoes. I think your block is because it is GHGs providing the forcing. This is typical selectivism that we see here all the time. Forcing is forcing, no matter the source.
We don't see degrees from GHGs, just tenths of a degree, as with solar. You just have faith that most of the warming is coming from GHGs and that a lot more will come, but you have zero evidence of that. So it is not me who is showing selectivism. If there was evidence for what you say I would accept it, but there is not. Models do not constitute evidence.
If you read any papers on explaining paleoclimate, sensitivity estimates from observations, or climate projections, it is really degrees we are talking about here for the thousands of GtCO2 we will add. That’s why it is a policy debate.
Those are imaginary degrees as there is no evidence they will ever occur.
The problem with sensitivity estimates based on paleo data is that they assume the CO2 produced the warming and not the other way around. Silly mistake.
They are also based on the observational record, and that is not imaginary (see energy balance models).
Your silly mistake is to assume CO2 changes from geological processes cannot produce warming and cooling.
Models do not constitute evidence, ever.
All the observational record can say is that temperature and CO2 covary. It is reasonable to assume CO2 must cause some warming due to its properties. It is unreasonable to assume, from the available evidence, that CO2 is the main temperature control.
What the geological record says is that CO2 has been decreasing over hundreds of millions of years (negative trend), while temperature has been cyclical (no trend). This is evidence supporting CO2 has a smaller role than assumed.
I think you are saying you have ruled out that CO2 can have much effect, despite the fact that GHGs contribute 33°C of warming in total, much of it from CO2. You also don't care much for the observation-based energy balance models that form the basis of papers like Lewis and Curry's.
Well, the glacial cycle shows that CO2 can’t have much effect. Otherwise we would be toast already.
And no, I don’t care about climate sensitivity studies, because they all assume that unassigned warming belongs to CO2, and therefore they all constitute an upper limit. I understand it has to be done, but in my opinion it means very little. Climate has a low sensitivity to CO2. This is obvious since planetary CO2 has gone from 15,000 ppm to 180 ppm. You CAGW guys are so wrong that it is almost pathetic. But by the time it is all sorted out it won’t matter as fossil fuels will be a residual of what they are now.
But there is exceptional behavior now, just as the CO2 increased, and much as expected from physics.
The physics only says that there should be warming, not how much, because that depends on the climate.
And no. There is no exceptional behavior. It was accepted by the authors of that graph that the last part had low confidence due to proxy drop-out. Other reconstructions, like Moberg et al. 2005, show the NH being as warm in the late 20th century as in the Medieval Climate Anomaly. The biology says there is no exceptional behavior. What happens is that we have changed the way we measure. Despite the huge increase in CO2, the mountain treeline is still well below Holocene Climatic Optimum levels. And trees measure climate the way they always have.
The last part is observations from HadCRUT4, which have more certainty than proxies. It is an exceptional warming rate and magnitude that goes along with the exceptional CO2 rise rate. The warming lags due to the oceans, and the trees lag even more.
How much did the oceans warm in the last 2 decades? What lag?
The average ocean trend in recent decades has been 0.15 C per decade. The land is warming twice as fast, so the ocean lags the global average rate.
It is an apples to oranges comparison. Good only to make people like you happy.
Trees don't have a lag. Every year tree seedlings attempt to grow at higher altitude and fail. It is too cold for them to reach the same levels they reached a few thousand years ago, despite the help from a lot more CO2.
The idea that the warming is exceptional is bonkers and unsupported.
OK, how much and how quickly do you expect the tree line to change for a 1 degree warming, and how sure are you that it hasn’t done it as quickly as you expected?
Regarding actual data, we see sharp rises in CO2 and temperature in a period when we emitted nearly 2000 GtCO2. You are calling this a 3-way coincidence because you still have no idea at all how they could even have happened together.
Europe is being reforested due to farm abandonment, higher temperature and very high CO2 levels. I've seen with my own eyes an abandoned orchard being reclaimed by trees in 15 years. I saw the first young trees only 4 years after it was abandoned. The first seeds must have started growing the same year.
If temperature had gone up as much as during the Holocene Climatic Optimum, trees would have already reclaimed past heights, more so with increased CO2 levels. If they haven't done so already, it is because they can't survive the winters. There's plenty of water in the mountains where trees are already growing, and trees don't have a problem during summers most years.
The treeline is going up in most places where it is studied, but it is still hundreds of meters below where it was a few thousand years ago.
And there is no 3-way coincidence. There was warming before we emitted CO2 and there continues to be warming as we emit more. How much warming the increased CO2 levels are producing is not known. Assuming all of it is silly. Assuming half of it means we can forget about the whole thing. It will never be a problem.
From the Marcott graph it would have only been since 1980 that the temperature exceeded the Holocene Optimum, and only since 1950 that we exceeded the average of the last 1000 years. Sea levels are rather higher than back then and rising faster than at that time, so you may be interested in looking at that.
As I already mentioned, there was also a CO2 increase prior to 1900 that was more land-use change than emissions, but warming matches CO2, not just emissions. Its rate of change only started easily dominating natural fluctuations some time after 1980, when it reached 0.3 W/m2 per decade.
The energy provided by the CO2 forcing, currently 2 W/m2 and rising, integrating to 3 GJ/m2, easily accounts for all the warming we have seen and more to come. This is a quantification of its effect that can be compared with changes in the ocean heat content, for example.
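The integration behind the "3 GJ/m2" figure is just the area under a forcing ramp. A minimal sketch, assuming a linear rise from zero to the cited 2 W/m2 over roughly 95 years (the ramp shape and length are my assumptions):

```python
# Cumulative energy = area under a linear forcing ramp = 1/2 * peak * duration
peak_forcing = 2.0                    # W/m2, the current CO2 forcing cited
seconds_per_year = 3.156e7
duration_s = 95 * seconds_per_year    # assumed ramp length, ~95 years

energy = 0.5 * peak_forcing * duration_s   # J/m2, comes out near 3e9 = 3 GJ/m2
```

So the quoted 3 GJ/m2 is roughly what a century-scale ramp to 2 W/m2 integrates to; a comparison against observed ocean heat content changes would then need the fraction of that energy actually retained.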
The Marcott graph doesn't show that. Here is a different statistical treatment of the Marcott data:
The treatment by differencing corrects for the effect of proxy drop-out at the end of the assemblage, due to most proxies not reaching the end of the reconstruction.
As you can see, it never gets even close to HCO temperature. It is the attached instrumental temperature that shows that, but the instrumental data does not reach back to the HCO, so we are left with an apples-to-oranges comparison that does not allow any conclusion to be drawn. However, the biology, the glaciers, and the Arctic ice all show a significantly warmer HCO. Nobody knows what sea levels were back then, but in most places there is a Holocene high stand during the HCO, in many cases significantly higher than now. Your extraordinary warming is not such.
Nobody knows how much warming CO2 causes. You are just accepting uncritically one guess. The global-warming/climate-change scare is already 30 years old. You are just one of the few that still cares about that. The rest are just living their lives without caring much.
So you are worried about the warming at the end exceeding the HCO. If it did, would that make you believe CO2 has an effect? And if it doesn't, you don't associate the rise with CO2? I am not sure I see the reason for your obsession with the HCO yet. I am pointing to the rate of rise, and a few more degrees to come. Does this not seem possible to you from this graph?
I already think CO2 has an effect, and have said so in my articles, but not because I believe it, but because I see evidence of it in the cryosphere retreat. I am not obsessed with the HCO, just pointing that present warming is neither exceptional, nor unprecedented. It is just atypical for the time. I see no evidence that there are a few more degrees to come. Perhaps 0.5-0.75°C more to peak temperature. Nothing to be worried about.
The present warming is 1°C in about a century, while the HCO was about 0.6°C in a thousand years, with the cooling taking over 5000 years, typical of the slower time scales associated with Milankovitch cycles. There is a major difference that I can see between these two cases, with the recent one being rather exceptional. Simultaneously, CO2 has abruptly done a half doubling in this same period. You want to disconnect these two even though the warming, 75% of it since 1950, is quantitatively explained by the energy supplied from the CO2 forcing change. From the forcing change we can expect about another degree for every 2000 GtCO2 that we emit in the 21st century. On that scale a degree more warming is a lot when you look at it, let alone several. What we're doing is not peanuts, which is why it shows up so abruptly in the long view, both in CO2 and temperature.
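The "degree for every 2000 GtCO2" figure is consistent with a transient response to cumulative emissions (TCRE) near 1.65°C per 1000 PgC, a commonly assessed mid-range value; the TCRE number here is my assumption, not something stated in the thread:

```python
# TCRE-style check of "~1 C for every 2000 GtCO2 emitted"
tcre = 1.65 / 1000.0           # C per PgC (assumed mid-range TCRE)
carbon_fraction = 12.0 / 44.0  # mass of C per mass of CO2
emissions_gtco2 = 2000.0

warming = tcre * emissions_gtco2 * carbon_fraction   # ~0.9 C
```

2000 GtCO2 is about 545 PgC, and at this assumed TCRE that scales to roughly 0.9°C, i.e. "about another degree."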
That is not the view of climatology: "the HCO was about 0.6 C in a thousand years…". Climate changes of about ±0.5°C over decades and centuries are well known on the hemispheric scale many times over the Holocene.
You don’t have other global datasets over the Holocene to back that up with. We have Marcott that I showed. What else do you have? PAGES2k also gives you the last few thousand years with a resolution of a few decades and shows no such thing.
Notice how it says NH. The SH MWP was at a different time, so the newer global averages like PAGES2k don’t show much of an MWP.
That one at the end stands out and is only half of what it is by now.
Stupid? If you use proxies, you have to stay with proxies, do you not know that?
Don’t you consider thermometers proxies, just good ones? Those would show a degree of warming at the end. These look like 25-year averages. BEST with 30-year averages looks like this for a like-with-like comparison.
OK, I see, you have no idea what this is about, sorry, but thermometer data has nothing to do with proxies. I will stop this now.
You could use tree rings for the last 50 years instead, I suppose, but no one does because we have thermometers and lots of them.
No. Present warming is ~0.9°C from preindustrial (the 1850-1900 average), over 170 years. And it is not correct that the HCO was only 0.6°C. That answer is wrong because we know that the HCO was significantly warmer than now. Depending on which proxies you use you get a different answer, so the uncertainty of Marcott et al. is very high. With a different set of proxies the answer is different, and some of Marcott's choices are questionable.
As I explained in my last article, you cannot compare rates of temperature change over different time scales. You can get a 0.4°C change in just three weeks, but it took a couple of neoglaciation millennia to lose that much. The longer the timescale, the more difficult it is to change Earth's temperature.
That you can explain the temperature change with the CO2 change, doesn’t mean that the CO2 change caused it. Having an explanation doesn’t mean that it is the correct one. You have to have the evidence to prove it.
Oh yes, the CO2 change is tremendously abrupt, but the temperature change is not. There was warming before the CO2 change, and there is warming now, and the rate is not very different. This is very strong evidence that the role of CO2 in climate change has been hugely overblown.
The temperature change is abrupt. BEST has over a degree in the last century.
You ask for evidence for AGW, presumably besides the rapid temperature change and all the global energy budget studies that you either dismiss or don’t understand. What kind of evidence would convince you, if anything at all?
I don’t care much about what BEST says.
It is clear to me that there is warming and that there is an anthropogenic contribution, but beyond that the rest is built on assumptions.
What would convince me that a hypothesis is correct is that its predictions that differ from other hypotheses' predictions turn out to be correct.
So far since I made that picture temperature has gone down.
If there is another hypothesis, there has not been any energy-balance quantification done to present it. What do you have in mind when you say that? You can only distinguish between hypotheses that actually attach numbers to them, and something without numbers is just speculation rather than a hypothesis.
That’s just your opinion. Charles Darwin presented his hypothesis without any number attached to it. Alfred Wegener presented his hypothesis without any number attached to it. Both were accused of speculating and had a hard time having them accepted, yet they were both essentially correct.
The only thing that matters is being correct, not having a tidy quantification done.
And it is very simple really. If the 2005-2035 period is characterized by low warming, then it is the Sun. If it is characterized by high warming it is the CO2. I am evidence-driven and the only convincing evidence is the amount of warming during a low solar activity period.
Thanks, my words exactly!
It is definitely not clear whether the human part since about 1900 is 20% or 80%.
So your competing hypothesis is the sun. Compare the beginning of the 21st century with the beginning of the 20th century when the sun was in much the same state and it is about a degree warmer. What’s the difference? It’s the CO2.
I think you have no idea how uncertain the knowledge about the insolation variations is. The IPCC goes down close to zero; others are about 10 times higher between the LIA and today. So you believe everything that makes the human part extreme. Try to become a little more sceptical about the dumb alarmism claims and you will learn much more about climatology.
There are quantitative explanations for why earth’s surface temperature and heat content are both rising in response to a forcing change dominated by GHGs, not the sun.
It’s data in support of the theory. That’s how science works.
Jim, it looks very likely that you understand almost nothing about climatology. There is no data which can show that most of the recent warming is due to GHG emissions. This might be hard for every hysterical lay believer in CAGW, but that is what we call real climatology.
Look at any of the many studies on the energy balance.
I have looked at them and I know most of them, so what are you, a layman, trying to teach me?
If you have read them, you will know that 2 W/m2 from CO2 can give an easily measurable temperature change and heat content rise, and that those have been seen. Furthermore, depending on emissions the CO2 forcing could go up to 6 W/m2 by the end of the century, and that is what this is about. It’s about measurable causes and effects and about future trends of the forcing. Even if you think solar effects are important, you will see that 6 W/m2 is about ten times what the sun can do even in extreme changes.
Your idea of how climate could work is much too simple and mostly totally wrong. First, an IR forcing due to CO2, which starts in the upper troposphere, need not result in the same forcing at the surface or through the troposphere. Each CO2 doubling gives about a +1°C response in global mean temperature, again with some uncertainty, between 0.7 and 1.3°C. High sensitivity can be ruled out, because you can never simulate the last interglacial with it: the sharp temperature increase was followed by a sharp GHG increase, but the fast temperature decrease at high CO2 levels over some centuries shows that negative feedbacks are much stronger than claimed by the AGW industry. It is boring to talk with you about this, because we have all known for many years what you refer to, but we do not think that simply about some numbers you do not really understand. That is what we call climatology!
The glacial periods depended a lot on positive feedbacks from albedo, and positive feedbacks still matter, only some of which are albedo. It's forcing and feedback, and by any measure 6 W/m2 would be a lot of external forcing in climate terms. We can even see the 11-year solar cycle of forcing, which has an amplitude near 0.2 W/m2, in the temperature. Skeptics have to pretzelize their arguments against the effects of increasing GHGs around this fact.
Thanks for your input.
I go with mainstream science on those too, like Milankovitch, initiated by lower CO2 levels promoting a colder climate than in the last 10 or more million years, when CO2 levels were mostly higher. For you, it would be a complete mystery why the Milankovitch cycles started when they did, because you have ruled out the importance of lowered GHG levels.
What are you trying to say about Milankovitch and CO2?
The Ice Ages only started after the earth became cool enough. Why did the earth become cool enough? CO2 has been declining for the last 50 million years due to geological factors from a high value nearer 1000 ppm in the warm iceless mid-Eocene to Antarctic glaciation 35 million years ago as it dropped near 600 ppm, to finally Arctic glaciation as it dropped below 400 ppm prior to the Ice Ages. Paleoclimate is rather reliant on CO2 levels for understanding climate change and the effects are those supported by physics and our recent experiment with the earth. Without this aspect of GHG physics, all of this would be a mystery.
Again, I do not know what you are trying to say with that. First, it should be clear that CO2 concentrations mostly reacted to climate changes in the past, and at best could act as a so-called stabilizer. Second, you should not try to claim climate is dominated by CO2; that is mainstream nonsense driven by the AGW industry.
No, geological processes affect CO2 levels on time scales of millions of years. Volcanic periods add CO2 such as at the end of the icy Permian, mountain-building reduces it through weathering and there are others like ocean sedimentation and soil burial that sequester carbon deeply to become limestone and carboniferous rocks. Net loss from these has been the dominant geological process since 50 million years ago. If you want to dispute that geology affects CO2 that’s a whole new argument you have with the paleoclimate scientists.
No, I do not, but that does not change anything. I'm not interested in the climate 100 million years ago; my interest goes back to the ice-age cycles, and more to the Holocene and the recent climate and, of course, the next decades to centuries. CO2 went down to really dangerously low concentrations of about 180 ppm during the last glacials and up to about 300 ppm in the century mean in between, driven mostly by ocean warming and cooling. Now we have some human forcing, yep, but we know nothing about the feedbacks, ähm, sorry, except for you of course… :-)
Read more about the way CO2 explains the last few hundred million years. It’s geology that affects this all. Once you have that appreciation, you can see how what we are doing to the GHG forcing is comparable with what geology can only do in millions of years.
Now, the last ice ages did almost the same thing we did. What are you talking about? From 180 to 300 is close to 200 to 400, and because of the log dependence the modern forcing is less. So do not claim this increase takes geological time scales. And it is not about the number; it is about the forcing, and much more about the feedbacks. I'm not afraid of +1°C (300-600 ppm) without feedbacks, but you are, we know… :-)
Actually the Ice Ages have some very large swings due to albedo feedbacks that were non-existent prior to that period, and CO2 responds to these, maybe by as much as 15 ppm per degree. Before the Ice Ages, or the NH continental glaciers that help drive them, these types of feedbacks did not exist and neither did the climate swings.
And? That changes nothing.
Which part do you not understand?
These are all not being posted probably because of your eruption earlier.
Darwin and Wegener had actual evidence for their hypotheses, and the competing hypotheses didn’t, making them easily disprovable. It is very hard to overturn a hypothesis that also has evidence and explanatory power. We now measure continental drift routinely and can understand evolution in terms of genetic biology. Similarly with general relativity. The hypotheses became better founded by subsequent tests because they provide predictive power for further experiments.
Haha, Wegener had "actual evidence"… do you live behind the moon? Wegener had all the so-called evidence and the scientific consensus against him. If you are not aware of climate physics, please at least try to understand history, OK?
He had the shape of the African and South American coasts as evidence. What are you talking about? What was the contrary evidence that continents are stationary? That was a default steady state theory like what existed before evolution. Things change and people discover not only the change but also the reason.
Haha, this "evidence" (Africa, SA) says nothing by itself. It's like you believe: there is warming, there is a CO2 increase, so it must be CO2. That is just layman stupidity.
OK, you are like the people who didn’t believe Wegener and Darwin then.
First you should learn that science, climatology, is not about believing. But if you are a layman in climatology, as you are, you must live with those limits; others can verify and are sceptical anyway.
That’s a common mistake. Solar activity determines whether there is warming or cooling. The resulting temperature depends on the initial state and other factors that also affect temperature. There is a 350 year warming trend. The same level of solar activity at different times corresponds to a different temperature but to a similar amount of warming or cooling. Solar activity has been high most of the time since the Maunder Minimum.
And there is actual evidence for the solar hypothesis. Read my articles instead of just commenting there. Everything I say is supported in the scientific literature with citations.
And Darwin's and Wegener's theories were rejected by a majority of experts. Darwin's theory was finally accepted in the 1940s, after the discovery of genes and mutations, when neo-Darwinists reinterpreted the theory, while Wegener's theory was accepted after the discovery of tectonic plates, when it was reinterpreted into plate tectonics in the late 1950s.
It is extremely common in science that the correct theory supported by evidence is rejected by the consensus.
Darwin and Wegener were rejected because they proposed changes occurred where people had assumed they didn’t. The status quo was steady state and it was wrong. There’s always a conservative resistance to ideas like these.
As for the sun, if you know of a difference between the 2010 and 1910 (and even 1810) lulls, you need to say what it is, because from the perspective of solar activity they are the same: long cycles, low activity, and similarly for the maxima between them.
Again, the roughly 11-year sun cycles do not matter; decadal and century-long trends are the point, and of course most solar physicists do not think those long-term trends are close to zero, as claimed by the AGW lobby.
Why is the sun different now from what it was in 1910 or 1810 that both had similar sunspot cycles? It’s a simple question, but no answer yet. Is there even a testable hypothesis?
I do not understand your question, sorry.
Look at the graph. Why would you say the sun is doing something different in the current cycle from all those other cycles around 1910 and 1810. The sun has not changed outside its previous range, but the climate has. It’s CO2 that has changed, and not by a little.
Sorry, are you blind? Don't you see that the forcing increases from about 341 to 342 W/m² over that time in this study, which I am showing you now for the last time?
Why do they say it is stronger now than when there was a similar weak sunspot cycle in 1810 and 1910? Also if you were to plot CO2 forcing over the last century it would rise steadily to 2 W/m2 more than prior to 1900 with a potential further rise to 4-6 W/m2 depending on policy choices. If you average over sunspot cycles or 30-year periods, you won’t be left with much net effect by comparison.
Forget the short cycles, as I told you, and do not believe that you can simply count sunspots and find insolation changes. I'm no solar physicist, so take the study and the forcing as they are. If you accept the almost 3x positive feedback (as claimed for CO2 emissions), the warming up to 1950 is almost entirely driven by the sun. And please do not go on about those policies that are supposed to protect the earth from catastrophic climate. Not only Harvard scientists know that with about $9 million a year (the same amount US folks spend on chewing gum per year) you could put ozone-friendly aerosols into the lower stratosphere to offset, with minimal changes in reflection, the entire CO2 forcing, if it ever really got as bad as the climate fanatics claim. All these CAGW claims are totally naive propaganda!
75% of the warming and CO2 forcing have been since 1950, mostly since 1980, in which period the sun has done nothing except decline to today’s weaker state. The change rates of temperature line up with the accelerating CO2 forcing change that tripled since 1950, and it still adds 0.3 W/m2 per decade.
Totally wrong again! And you must consider a 1-3 decade lag in the temperature peak because of ocean inertia. What do you know about climatology? What is your profession, if you would be so kind? I'm a meteorologist who has worked in regional climatology (Austrian Alps) for about 20 years now, just for info.
No, because the majority of the response is within years and only the last 20% of it takes decades. Meanwhile the land responds almost immediately and its fastest warming rates are now.
Show me the study which shows the peak a few years later!
Like the NH being warmest on 1 July, not at the end of August? Even the IPCC claims that the temperature peak, with CO2 held constant from now on, takes some decades.
In the 11-year cycles the temperature responds very quickly. The ocean’s response function is a decay function with most of it being very quick. Here’s one example.
The largest response is to the immediate forcing. If the sun halved in strength today, we would not have to wait decades to see its effect.
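For a first-order (exponential-decay) response like the one described, the lag behind a sinusoidal forcing follows directly from the phase shift. A sketch with a hypothetical 2-year e-folding time, which is my illustrative assumption rather than a measured value:

```python
import math

tau = 2.0      # assumed fast-response e-folding time, years
period = 11.0  # solar cycle length, years

# Phase lag of a first-order system driven by sin(2*pi*t/period)
phase = math.atan(2 * math.pi * tau / period)
lag_years = phase / (2 * math.pi) * period   # ~1.5 years, not decades
```

With a fast response time, the temperature peak trails the solar-cycle peak by only a year or two, which is why the 11-year cycle is visible in the record.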
Nonsense again, complete nonsense!
A large fraction of the response to solar forcing is immediate and you disagree. I gave an example. Fine.
If this small and short variation is able to produce the claimed temperature response, go back to the long-term study, and you must accept that the whole warming must be solar-forced.
If the sun goes up and then down on decadal scales, most of the response is also up and then down following it. Recently the sun is down and it would have a net cooling effect because recent values far outweigh values from decades ago in the response function.
Your logic is fantastic. One moment you claim we lag about 20 years due to IR forcing, but in the case of solar radiation, which penetrates the oceans much deeper, we should see abrupt cooling after some years of low activity. Fantastic illusions… :-)
We lag when the forcing is increasing rapidly, which is the case with CO2, not at all with the sun that goes up and down and averages not much over a century.
You don't know much about the history of science. In the mid-19th century every naturalist knew species changed. There were plenty of hypotheses because nobody knew how. Even Darwin's grandfather Erasmus Darwin was an evolutionist. Before Darwin, Lamarck and Geoffroy Saint-Hilaire were quite famous evolutionists. Even the idea of natural selection had been proposed long before Charles Darwin. So no, you are wrong: Darwin's hypothesis was not rejected because people had assumed evolution didn't happen. Quite the contrary.
There is a very clear difference between 1800, 1900, and 2000. Each was significantly warmer than the previous. Because similar low solar activity came afterwards you shouldn’t expect the same temperature.
Same problem as with CO2, you make lots of wrong assumptions and you are not even aware you are making them. This is a very common mistake for people not trained as scientists. You have to question the evidence supporting what you believe it is true. But obviously if you did that you wouldn’t be sure of CO2 role, so you can’t.
In Darwin’s case it was controversial that evolution applied to humans, and the resistance to that was biblically based. Others who already believed in evolution and natural selection may have had an easier time accepting Darwin, but those are not the ones we are talking about.
What was the actual difference in the sun that causes this putative increase in forcing while leaving the sunspot cycles the same? With the Maunder Minimum there was a clear change. What happened to it since? The sun’s compositional evolution causes it to strengthen by about 1% every 100 million years which is very slow.
For CO2, it is just about the energy budget. You add 2 W/m2 to earth’s forcing, it has to respond with warming at a rate consistent with that by energy conservation, and not surprisingly it does. Energy conservation is a basic fact of science. This is all measurable and verifiable that way.
It is quite simple. When solar activity is above average it causes warming. When it is below average cooling. Increasing solar activity since the Maunder Minimum is responsible for a part of the warming observed.
And you continue with your wrong assumptions. The Earth is not in equilibrium. The energy lost is as important as the energy gained. The planet responds by increasing the energy loss during the Arctic winter. That is why the Arctic winter has warmed. That is increased energy loss, compensating for the energy gained. Your calculations are incorrect because they don’t include everything.
Yes, the earth is not in equilibrium and the energy imbalance is measurable from two independent sources, satellites and the ocean heat content. That imbalance means we are currently in a state of lagging the forcing by ~0.7 W/m2. That is, we are colder than equilibrium and with the rate of change of forcing, we lag it by over 20 years meaning the current temperature would have been more in equilibrium with the GHG levels 20 years ago.
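The lag arithmetic above can be sketched in a few lines. The 0.7 W/m² imbalance is the figure quoted in the comment; the CO2 concentration and growth rate are assumed, illustrative values, not numbers given in the thread:

```python
# Figures: the 0.7 W/m^2 imbalance is quoted in the comment; the CO2
# concentration and growth rate below are assumed, illustrative values.
imbalance = 0.7          # W/m^2, current top-of-atmosphere imbalance
co2_ppm = 410.0          # assumed current CO2 concentration, ppm
dco2_dt = 2.5            # assumed CO2 growth rate, ppm/yr

# Rate of forcing increase: dF/dt = d/dt [5.35 ln(C/C0)] = 5.35 (dC/dt)/C
forcing_rate = 5.35 * dco2_dt / co2_ppm   # ~0.033 W/m^2 per year

# If the current imbalance measures how far temperature trails the forcing,
# the lag in years is the imbalance divided by the forcing growth rate.
lag_years = imbalance / forcing_rate
print(round(lag_years))   # roughly 20 years
```

Different assumed growth rates shift the answer somewhat, but values near 2-3 ppm/yr all land in the ballpark of the "over 20 years" claimed.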
As for your figure, the fastest warming has been since 1980 just as the CO2 forcing rate of change reached new highs, and your sloped line looks rather subtle giving no reason why we didn’t see such large warming in the 19th or 18th centuries. CO2 is the difference, of course.
Satellites have the problem of having to detect a very small difference between two very large nearly identical fluxes. The calculations have a huge uncertainty.
OHC has a different problem, as it requires measuring very small changes in temperature in a huge volume with sparse measurements, using instruments whose measurement error is larger than the amount to be detected. OHC is measured in temperature degrees but reported as energy, so the very large numbers inspire a confidence that just isn’t there.
OHC says the Earth is warming, something we already knew. This means there is a positive imbalance. However, as the warming is a lot less than it should be, it is clear that the forcing calculations are off.
I agree that CO2 might be responsible for a higher warming rate in the 2nd half of the 20th century, but solar appears to be responsible for at least 50% of the warming, as once solar went down ~ 2005 CO2 warming disappeared. Since then the only warming has come from a very strong El Niño.
With solar being responsible for a large part of the warming we can cancel the alarm. Solar is cyclical. We will never see the predicted warming even if we continue increasing our emissions, which we won’t. Fossil fuels are nearing their peak, so climate will not be an issue for centuries.
The OHC warming rate is less than the CO2 forcing because some of the forcing is offset by the surface warming itself. The CO2 forcing is large enough to account for the sum of the surface warming seen already and the warming rate of the ocean (see energy balance calculations). Solar changes in the last century are far below what is required for these changes, and if you accept them you have to also decide why CO2 which is many times stronger has less effect.
Of course there is an explanation for everything. Whether it is the correct explanation is a different matter. In the old days some people explained fossils as the animals that didn’t make it to Noah’s Ark. A more likely explanation is that CO2 forcing is lower than calculated.
Again you assume that TSI is all there is to changes in Solar forcing, despite paleoclimatic records showing a huge effect when solar activity is below average for a long time. That we don’t know about something doesn’t imply that we can ignore the evidence that it exists. And I have no idea why CO2 has less effect than it should. It just does. I suspect negative feedbacks are a lot stronger than expected.
Actually the numbers support that CO2 has about 2 C per doubling, just like AGW says, so this is why the skeptics are scrambling to say it can’t be so, and is something mysterious instead that just happened to grow as CO2 did with an acceleration since 1950. It’s a tough case for them to make, and they are struggling with the raw data at hand.
You can always make two curves look similar by playing with the scales. One of the problems is temperature has been increasing almost linearly since at least 1900, probably since 1850, or more. While CO2 increased with a much lower rate until 1960. That alone says that only a part of the warming can be due to CO2. So the numbers support a 1.5-2°C ECS only if we assume all the warming was due to CO2, which is very unlikely. Always your assumptions. If it was only 70% ECS falls like a stone. If it was 50% or less the effect is very small.
So it is very simple really. CO2 has continued increasing. If temperature continues increasing as predicted by models then the theory is correct. If not, then the theory is wrong. The problem is since 2002 the only significant warming has coincided with a super-El Niño. I know AGW defenders appropriate any warming in the name of CO2, but that won’t do it. For the theory to be correct there has to be a lot of warming, because we have a lot of CO2. Already we are in a 10-year pause in Arctic sea ice melting, when just the opposite was predicted. It is clear CO2 does not control sea ice melting.
When a theory doesn’t predict correctly, then it is time to dump it.
The tripling of the CO2 forcing rate of change since 1950 has coincided with the warming rate accelerating to be consistent with it. The basic numbers over the past 60 years support the theory, even as the trends have both grown significantly and consistently with each other. There is a similar plot by Lovejoy that shows this matching goes back to 1750 (again at 2.3 C per doubling) which the skeptics just dismiss as coincidence.
Greenland has not paused in its ice loss. Year on year it is a very large loss these days, especially in the last decade. Similarly Arctic sea ice no longer returns to the averages it had in the previous decade. Each decade has less.
Jim, I have already showed that the warming rate has not changed significantly. It is essentially the same with a ±0.2°C 60-year oscillation, while CO2 forcing has increased enormously.
And this year and the past year Greenland has barely lost any ice if at all.
At the rate Greenland loses ice it will take thousands of years to make a dent. Warming should stop by century end, and cooling should return in 2-3 centuries. Then Greenland ice should grow again and it won’t be pretty.
We can look at the 30-year temperature. Here it is. It is an accelerated rise of about a degree superimposed on oscillations of about 0.1 C in amplitude.
Greenland’s melt rate has accelerated in the last two decades accounting for why the sea-level rise rate has accelerated by 1 mm/yr in that same period. This is just the beginning because the temperatures are always hitting new records.
Those aren’t rates of warming. These are rates of warming from the Met Office:
As you can see the maximum rate of warming has not increased, contrary to what would be expected from CO2 increasing the rate of warming.
Greenland’s melt rate is not well known, and two decades is very little time. Your problem is that you have linear thinking. If a trend is increasing you can only predict further increases. Trends continue until they don’t, and then they change. There is nothing to worry about in a multi-centennial warming trend. They are common in the past and they are always followed by a multi-centennial cooling trend.
The 9-year rate is all over the place because it contains internal variability. Best to look at 30-year temperatures for actual climate change, and that is rising without pause.
Greenland responds to the temperature of the Arctic and that is warming faster than anywhere, probably because of the albedo feedback. Prospects are not good for Greenland or for sea level for that matter.
What the 9-year rate of warming shows is that peak decadal warming rate has not increased for the past 150 years. It was 0.4°C/decade in the 19th century, and 0.4°C/decade in the 21st century. It is not affected by the change in CO2 concentration. Your idea that the warming is accelerating is not correct.
Greenland was under an ice sheet during the Holocene Climatic Optimum and will continue being under an ice sheet when the Holocene ends. Whether that is a good prospect or not I don’t know. The melting of Greenland is anecdotal and temporary. It will stop when the warming stops. Your worries are misplaced.
The 30-year temperature shows today to be rather exceptional compared to any time in the measured record. This is not random chance because the CO2 is also at record levels and still rising.
Greenland had no ice last time the earth had 400 ppm. This is not a stable situation for it.
The measured record is extraordinarily short, as we cannot calculate 30-year warming rates for the past with any certainty. That Greenland had no ice at 400 ppm doesn’t mean it was due to CO2. The situation for Greenland is typical for a Pleistocene interglacial and with plenty more ice than in the Eemian, with or without extra CO2. As sea level rise is not accelerating it is clear that Greenland melting isn’t either, and at present rates it would take many thousands of years to melt it. But trends change and in 100 years Greenland could be gaining ice.
JimD, a look at Greenland is always interesting. During summer, when there are some weeks of big melting, we should look at the insolation. Over high northern latitudes we see very big changes due to orbital parameters over millennia. We see times during the Holocene with about 30 W/m² more summer insolation over Greenland, and now you compare that with a few W/m² of IR forcing over centuries. One time in your next life you may understand that CO2 does not dominate our climate…:-(
If you look at Judith’s sea-level post you will see that the sea-level rise rate accelerated by 1 mm/yr since 2000, and the change was attributed mostly to Greenland.
The measured and paleo record make a 1 C warming rather exceptional in the Holocene, especially rising that much in a century. There are no such 1 degree steps in any prior warm period of the Pleistocene either. And 1 degree is only the beginning of this warming phase.
If you look at NASA data you won’t see any acceleration in sea level rise:
And it is becoming even harder to defend an acceleration since from Oct 2015 sea level has been increasing at the spectacular rate of 1.2 mm/year. There are already 5 mm missing from the expected rate.
We are measuring the present warming differently, so it is difficult to compare, but coming out of the 8.2 Kyr event does look comparable or even bigger in proxies.
You don’t know if 1° is the beginning or the end. You don’t know the future. You think you know why it warmed 1°, but as Mark Twain said, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
You think it is coincidence that record warmth coincides with record CO2 levels when science also explains to you quantitatively why. Oh, well.
LOL, JimD, your logic is extraordinary. Did you ever think about your personally record-breaking slow-working brain?
Thanks for your input.
What I think is that providing an explanation is easy. The hard part is demonstrating that it is the right explanation, and in science most of the time it isn’t. That’s why there are so many failed or modified hypotheses and theories and so few correct ones.
Present climate is the consequence of many causes. The simplification that it is all (or nearly all) due to CO2 is clearly incorrect. That is why all climate science predictions and models are failing. CO2 is clearly contributing, but it is one of several factors and not the most important. This will become clear as we move towards 2035, when it will become clear that for the first third of the 21st century the warming has been well below what the CO2 hypothesis required.
CO2 has provided enough energy to explain all the warming and more in the pipeline. It’s clear from the energy balance that everyone uses. Energy conservation is fundamental.
A quarter of the energy the Sun provides to the Earth is used in moving water molecules from liquid to gas phase and then that energy is distributed through the system or sent out of the planet. The energy CO2 provides is a small drop in that ocean and its destiny depends completely on what the rest of the energy does. There is nothing in the physics of climate that says that the energy provided by the increase in CO2 must determine climate change. This is a case where the feedbacks are a lot more important than the forcing and their response is not well constrained. Your calculations don’t explain anything. They are just a part of an unknown equation.
I think you don’t understand what a sustained 2 W/m2 and increasing forcing can do to the earth’s temperature. The warming is no surprise except to the skeptics who don’t really quite understand how to do the energy budgets for themselves yet.
JimD, and we see that you do not understand how marginal a CO2 forcing of about 2 W/m² could be, and was, over the climate past. Look at the Eemian: after a sharp temperature increase the CO2 followed, with at least the forcing we see today since the LIA. This “dramatic forcing” did not stop the temperature from decreasing relatively fast over the first centuries and millennia of the Eemian, because some negative feedbacks at work are relatively strong; as we should know, they exist not only in the past. Without the strong positive feedbacks parameterized today you can put your hysteric CO2 forcing somewhere…
But you’re the one left struggling mightily to understand this when it is obvious to science.
Jim, I told you about the Eemian and you come again with your primitive correlation graph between recent CO2 and temperature increase. You like kidding, ok :-)
You seem to have a deep belief in some foggy distant past point-derived paleo changes, but don’t believe the currently globally measured ones. This is inconsistent at the least. You’re the one changing the subject from hard data to dodgy past assertions.
There is nothing foggy; ok, for you maybe. It shows that the climate sensitivity due to GHGs is much lower than politically driven mainstream climatology has claimed since the 2nd and 3rd IPCC reports.
No, you still refuse to look at the last 60 years when we have good data on both CO2 and global temperatures because the numbers don’t work out for you.
Oh no Jimmy, they work pretty well for me, but you have some misunderstandings again. The forcing due to CO2 is well known and approximately calculated by 5.35 ln(C/C0). You call it the last 60 years (from about 320 to 410 ppm), so this makes 1.3 W/m². If you start with a global mean temperature of 288.00 K you reach, via the Stefan-Boltzmann law, about 288.24 K. The CO2 forcing over 60 years gives at most about 0.2-0.3°C, as every layman should know.
Then you come over with your extremely strong positive feedback claims, and I tell you once again: look at the climate past and you will see that the climate sensitivity due to CO2 cannot be as high as the alarmists claim!
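The no-feedback calculation in the comment can be checked directly. This sketch reproduces the commenter's own inputs (320 to 410 ppm, a 288 K baseline) and says nothing about feedbacks, which are the actual point of dispute:

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Forcing from the CO2 rise quoted in the comment (320 -> 410 ppm)
dF = 5.35 * math.log(410.0 / 320.0)      # ~1.33 W/m^2

# No-feedback temperature response, linearizing F = sigma * T^4
# about a 288 K mean surface temperature: dT = dF / (4 * sigma * T^3)
T = 288.0
dT = dF / (4.0 * SIGMA * T ** 3)         # ~0.24 K, the comment's figure
print(round(dF, 2), round(dT, 2))        # 1.33 0.24
```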
The temperature change is 0.9 C in this period, so you can work out the sensitivity for yourself. If these lines fit, it is 2.3 C per doubling.
Actually 2 C per doubling only surprises you because you don’t understand why there are positive water vapor, cloud and albedo feedbacks that observations also support.
You attribute it to a massive coincidence that is just so unfortunate for your argument.
Open your “correlation = causation” mind, JimD :-)
CO2 has been increasing since the Ice Ages. Sea levels have also been increasing while it has been cooling. Go figure.
And? What surprise? After the glacials we have plenty of ice over the NH and it takes some millennia to melt it. The NH is not hottest on June 21, you forgot?
The melting rate has increased again since the last century, and this is ice that survived the Holocene Optimum. In the last 12000 years, the Milankovitch precession forcing has switched to more favor Arctic ice now, and there was cooling for most of this period, but what is happening there recently is opposite and twenty times faster according to PAGES2k. I think you are interested in why this switch from slow cooling to rapid warming occurred and are trying very hard to explain it.
again, you have no idea what you are talking about!
That is consistent with what I said. The trend is what you get from the precession change, and it should favor Greenland now more than before. Even more of a surprise for you that Greenland is now melting, right?
Greenland had some big calvings over the last 20 years, which made the overall mass balance negative over a short period, which we do not call climate.
The balance without calving is still positive; there is no net melting, you forgot? And last year it was overall positive, because of much snow accumulation and less melting. This year looks pretty good again.
It turns out Greenland is now contributing most of an extra 1 mm/yr to sea-level rise that it wasn’t contributing before 2000. This when Milankovitch should help it more than ever. However, CO2 levels of 400 ppm are detrimental to it.
Nonsense again, and sorry, I do not worry about sea level rise, which has been between 2-4 mm/yr for 100 years or so.
That’s an interesting number. The 20th century average was 1.5 mm/yr, and since 1990 it has been 3 mm/yr when you look at actual numbers.
Your graph shows what I said. 150 mm from 1900-2000, and 75 mm since 1990.
I would use at least 30-year averages which is why I used the periods from 1900-2000 and 1990-now. In the last 25 years sea level rose as much as in the two 50-year periods, 1900-1950 and 1950-2000. This much is clear.
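The rate comparison here is simple division over the periods quoted; the 150 mm and 75 mm totals are the round figures used in this exchange, not independently sourced data:

```python
# Sea-level totals as quoted in this exchange (illustrative round numbers)
rise_1900_2000_mm = 150.0   # total rise over 1900-2000
rise_since_1990_mm = 75.0   # total rise over ~25 years since 1990

rate_20th_century = rise_1900_2000_mm / 100.0   # mm/yr
rate_since_1990 = rise_since_1990_mm / 25.0     # mm/yr
print(rate_20th_century, rate_since_1990)       # 1.5 3.0
```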
Solar irradiation is 340 W/m^2. The energy from CO2 increase is so small that any effect from feedbacks can completely overcome it. CO2 increase is something like 0.54 W/m^2. Considering the heat fluxes in the system, unless you can account with precision for everything, you can’t be certain of the effect of the small change caused by the increase in CO2.
You are always pretending a lot of certainty when you really don’t know how the system works.
The feedbacks turn out to be positive. Note that the Ice Ages rely on extremely strong positive feedbacks to relatively small forcing changes.
A better comparison with the sun is that its 11-year cycles have a forcing change only about 10% of what CO2 has done so far, and those can be seen in the temperature record. This also has to have positive feedbacks to show up that much.
You cannot demonstrate that the feedbacks are positive.
Milankovitch forcing is huge at the latitudes involved. The entire insolation gradient of the planet is changed by obliquity. We can see the sapropels deposits in the Mediterranean changing according to the effect obliquity and precession have on the West African Monsoon. Where is the evidence of the effect of CO2? We see the effect of the warming, but what does it show that it is the CO2? Your assumptions and calculations?
Are you asking whether a greenhouse effect exists? It does exist, and its effect is 150 W/m2 otherwise the surface would be a lot colder. This is not a small thing.
Of course the greenhouse effect exists, but nobody knows the effect of increasing the CO2 because it is too complex and there are too many factors involved that are not known. So all those numbers that you put, carefully calculated mean nothing. All we know is that doubling the amount of CO2 should warm the Earth about 1°C. All the rest is just assumptions over assumptions. The only way to know is to observe and measure, and what we are observing is a lot less warming than predicted. In fact the warming is not very different than it was before we started emitting in earnest, so the effect of CO2 is SMALL despite all your calculated W/m^2. The hypothesis is faulty.
Observed for the last 60 years is effectively 2.3 C per doubling which may be less than you expected, but is actually about what AGW says it should be. A similar number is obtained for warming and CO2 changes since 1750.
Nature does this experiment for us. We have an 11-year solar cycle to look at. The CO2 forcing is ten times the 11-year solar cycle forcing change and the warming is also ten times larger.
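The "2.3 C per doubling" figure above can be roughly reproduced by attributing the quoted 0.9 C of warming entirely to CO2; the endpoint concentrations of roughly 315 and 410 ppm are assumed values for the 60-year period, not numbers stated in the thread:

```python
import math

# Quoted figure: ~0.9 C warming over ~60 years. The CO2 endpoints of
# roughly 315 and 410 ppm for that period are assumed here.
dT = 0.9
dF = 5.35 * math.log(410.0 / 315.0)   # realized CO2 forcing, ~1.4 W/m^2
F_2x = 5.35 * math.log(2.0)           # forcing per doubling, ~3.7 W/m^2

# Sensitivity implied if all the warming is attributed to CO2
sensitivity = dT / dF * F_2x          # ~2.3-2.4 C per doubling
print(round(sensitivity, 1))
```

The answer is sensitive to the attribution assumption, which is precisely the point contested in the surrounding comments.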
Not according to Lewis & Curry, 2018. They show 1.5°C/doubling. The problem is that you account every warming, even El Niño or volcanic reduction, as CO2 warming. Silly mistake.
Lewis and Curry can account for all the warming since 1940 anthropogenically, and that is about a degree. For them 1.5 C per doubling is only 2/3 of the net anthropogenic effect, but they won’t tell you that part, of course. You have to deduce it from their assumptions, which downplay the CO2 fraction relative to everyone else’s.
Their assumptions and those of all the other scientists that calculate ECS from the energy balance calculations that you like so much, might be better than the assumptions that go into the models.
Without being a scientist you talk with certainty about things nobody else is certain.
Since 2001 we have emitted over one fourth of all CO2 we have ever emitted, and discounting a natural variability El Niño, we have seen almost no warming. That’s the evidence that can’t be refuted that is incompatible with CO2 having a big effect.
EB models give you TCR, but have to make some dubious simplifying assumptions to extrapolate to ECS. For one thing, the oceans are not warming as fast as the land, contrary to what they assume. The warming is not monolithic.
If you do as LC who consider that CO2 forcing is only 2/3 the total, you have to add that extra 50% warming back in to convert their sensitivity to something you can use for policy which is the real temperature change per emissions. Then you end up with what the IPCC has anyway.
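The scaling argument here is a one-liner. This is a sketch of the commenter's arithmetic only, not of Lewis & Curry's actual method; the 1.5 C and 2/3 figures are those asserted in the exchange:

```python
# Figures as asserted in the comment: a 1.5 C/doubling sensitivity and a
# CO2 share of 2/3 of total anthropogenic forcing. This sketches the
# commenter's scaling argument, not Lewis & Curry's actual method.
sensitivity = 1.5         # C per doubling, figure quoted in the thread
co2_fraction = 2.0 / 3.0  # claimed CO2 share of total anthropogenic forcing

# Temperature response per unit of *total* anthropogenic forcing scales
# up by the reciprocal of the CO2 fraction.
effective_response = sensitivity / co2_fraction
print(round(effective_response, 2))   # 2.25
```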
You talk about dubious simplifying assumptions. That is a correct diagnosis for the entire CO2-hypothesis of global warming. The problem is that the calculations supporting it require that all the warming is due to CO2, and none to natural variability, but then the expected warming does not materialize and natural variability has to be invoked to explain it. Such hypocritical use of natural variability is the trademark of a failed hypothesis.
Millar et al., 2017 article about emission budgets and pathways is quite clear. Even Paris 2015 has all the “dubious simplifying assumptions” wrong. It doesn’t matter what we do, we are not going above 1.5°C before 2050. All the models are wrong. The hypothesis is wrong.
The energy balance is a tight constraint on the long-term mean. Even after an El Nino the energy balance quickly restores the global temperature towards a long-term mean. Internal variability has the character of up and down fluctuations that last only a year or so because of this restoring force.
“Internal variability has the character of up and down fluctuations that last only a year or so”
You just invented this. There is no shortage of articles defending an explanation of the Pause as a result of internal variability. By definition internal variability is anything that affects the system and is not a result of an external forcing. There is no time limitation. It could last decades or centuries.
That could be solar or aerosol forcing which are not internal. Have you dismissed those? You think it is ocean circulation changes or what? Who else thinks that?
Changes in oceanic circulation and oceanic oscillations are internal variability. How else do you explain that summer Arctic sea ice has not melted in 11 years despite warmest ever years?
I would dispute that.
JimD, the cherry picking queen again, boring…
Javier said it stopped in 2007. What are you saying?
Yes, by using the wrong month, obviously.
Since 2007 the trend is slightly up, and it is likely to be confirmed next September.
Surprising considering that global warming was popularized by Al Gore in 2007 on account of a melting Arctic that isn’t melting since then.
2007 was anomalous. A single year does not make climate change, but a 30-year trend does and the models underestimated this rate of loss. You can also use the annual average.
Of course 2007 was anomalous. It was the year Arctic sea ice turned the corner.
You are unable to look beyond a linear trend.
Looking at the amount melted every year (March extent minus September extent) shows very clearly the tremendous melt increase in the 1995-2007 period, and the melting decrease since 2007.
Whether you accept it or not, it is in the data. The Arctic stopped melting in 2007 and nobody has a clue about when the pause in ice melting will end.
The winter max trend that I showed first means there is less to melt each year, so you could be misled by that.
Listen to Judith (new topic); maybe you feel affected, Jimmy?
In some cases a sense of identification with the group, akin to identification with a religious faith or political platform.
A strong sense of the boundary between the group and other experts.
A disregard for and disinterest in the ideas, opinions, and work of experts who are not part of the group, and a preference for talking only with other members of the community….
No, because since September 2007 the summer extent trend has been up, and the melt trend is down. At the same time the end of the melting season has been taking place earlier.
People like you who accept the AGW hypothesis unquestioningly do not accept that the trend in Arctic melting has stopped. But the evidence is in the data. Something changed in the Arctic 11 years ago. Some scientists predicted it, because those changes have taken place before, but they were ignored.
I showed you the annual average, and there is no way to see an upward trend since 2007. The lowest average was in 2016, for example. 2012 was also lower, but not as low as 2016. The low years just keep getting lower.
The annual average is not representative because what drives the winter amount and the summer amount are not the same processes. There is no danger that winter sea-ice will disappear, the alarm is about summer sea-ice. And summer sea-ice is not melting. It was predicted multiple times that summer sea ice was going to disappear. The prediction was clearly wrong. Summer Arctic sea ice is not going to disappear. As usual the alarmism is unfounded.
The summer ice is more noisy than winter ice because of how fragile it gets. Just interannual weather variations can have a big impact with no correlation to the next year. The winter or annual mean values are a more robust measure of actual climate change, being less noisy and just downward trending. The loss of winter ice is more telling about the health of Arctic sea ice.
That’s goalpost moving. It is the Summer Arctic sea-ice that was defined as ice-free when less than 1 million sq. km. And it was predicted to become ice-free in just a few years. The same type of silly calculations you like to throw around as if they mean anything were used to show that albedo decrease was going to drive an Arctic spiral of death.
They were wrong. You were wrong. The hypothesis is incorrect.
The models underestimated the loss of Arctic sea ice. 2007 was well below what anyone expected. Each decade has much less ice than the previous one whether you want to measure in the summer, winter or annual.
Yes. We know the models don’t work either way. When there is melting they underestimate it and when there isn’t they overestimate it.
NOAA recognizes there hasn’t been any significant change in Arctic ice in 10 years with their 10-year graph.
Your graph didn’t work, but maybe it should be this annual mean that no one in their right mind would think has paused.
Then you can visit the US National Ice Center and see their graph there:
And the data is very clear:
The Arctic is at less risk of losing its ice than 11 years ago.
Jimmy, the global mean forcing due to orbital cycles is small, but where the climate system reacts sensitively, during summer at high NH latitudes, it is anything but small. What do you think would happen if you had about 50 W/m² more insolation over Arctic regions, where snow and ice could melt or not?
I expect large albedo feedbacks enhanced by GHGs and these are all positive feedbacks.
I believe science.
I believe what scientists write about in their published journals. You seem not to. Perhaps you favor bloggings.
boto, for Greenland this is supposed to be a period that favors Arctic ice according to the orbital cycles, but it is melting instead. That should tell you something.
We have decades of experience with the enhanced greenhouse effect versus NV. The enhanced greenhouse effect has won every single round, including the face-off with the big bruiser of the climate: the Pacific Decadal Oscillation.
to 2006 – 0.17813 ℃ per decade (before the PAWS)
to 2013 – 0.162233 ℃ per decade (the best the PAWS could do)
to present – 0.189184 ℃ per decade (including back-to-back La Niña events)
The above is NV getting its clock cleaned by the control knob of the current climate: ACO2.
Professor Curry’s current forecast is that an El Niño is possible in late 2018/early 2019.
She is almost certainly correct. I looked at solar-ENSO correlation based on the work of Leamon & McIntosh.
A pattern has been repeating since the 1960s. At the solar minimum there is a shift from La Niña to El Niño, followed by a shift back to La Niña when solar activity starts to increase fast.
I checked the chances just of the first shift at the solar minimum:
“Let’s take the monthly sunspot dataset, 6-month averaged, and define the months belonging to a solar minimum as months with fewer than 25 SSN.
For each solar minimum we divide the period in half.
Now we take the same months corresponding to each half of each solar minimum from the ONI dataset and we average the ONI value (positive El Niño dominated, negative La Niña dominated).
The result is:
1st half: -0.63 (Niña)
2nd half: 1.17 (Niño)
1st half: -1.15 (Niña)
2nd half: 0.42 (Niño)
1st half: -0.57 (Niña)
2nd half: 0.72 (Niño)
1st half: -0.63 (Niña)
2nd half: 0.26 (Niño)
1st half: -0.49 (Niña)
2nd half: 0.05 (Niño)
We assume the pattern will repeat in the SC24-25 minimum, where the 15 months so far are Niña dominated, and an El Niño is forecast.
Assuming a 50% chance for a negative or positive value, the chances of this particular pattern (Niña then Niño) at a single minimum are 1/4 = 0.25 (25%). The chances over six consecutive minima are (1/4)^6 ≈ 2.4×10⁻⁴ (0.024%).
So the probability of this pattern resulting from chance is about 1 in 4000.
I guess we can reject the null hypothesis. The probability that at the solar minimum a La Niña will turn into an El Niño is very high, and indicates ENSO is affected by solar activity.”
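The probability arithmetic in the quoted analysis can be reproduced directly. Note the sketch inherits the analysis's own assumption that the sign of each half-minimum ONI average is an independent fair coin flip:

```python
# The quoted analysis treats the sign of each half-minimum ONI average
# as a fair coin flip (a strong independence assumption).
p_nina_then_nino = 0.5 * 0.5   # P(1st half negative) * P(2nd half positive)
p_six_minima = p_nina_then_nino ** 6

print(p_nina_then_nino)   # 0.25
print(p_six_minima)       # 0.000244140625, i.e. about 1 in 4096
```

Whether the coin-flip null is appropriate for an autocorrelated index like ONI is a separate question the comment does not address.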
This is only considering part of the pattern. It is clear that ENSO is affected by solar activity, and this is ignored by most climate scientists.
So Judith can safely predict the next El Niño and the next La Niña by watching solar activity. I expect the solar minimum by early 2019, so a late 2018-early 2019 shift to El Niño should be expected.
But that won’t help you. El Niño should be a small one and should be followed by a very big La Niña in 2020-2021 when solar activity starts increasing fast.
In terms of temperatures this is approximately what I expect.
It should increase next, only to decrease more later. And then your climate religion will be in trouble.
An El Niño starting in December 2018 virtually assures that the warming rate for the first two decades of the 21st century will be equal to or greater than 0.2 ℃ per decade.
I somehow doubt Professor Curry smoked any solar voodoo in her ENSO forecast.
You are really bad at predictions.
No voodoo required. The statistics are what they are. You can ignore them, but you can’t deny them.
Lol. I predicted the positive phase of the PDO would kill the PAWS. Before the PDO went positive.
PAWS UP – the pose of a dead animal. When they die they bloat and often roll over on their backs and the legs go straight up.
Yes, you showed a lower trend that you can attribute to lower solar activity on the background of a strong warming. Have you ruled that out?
Over the last two decades of ocean warming, El Niño events have commonly become record warmest years, and La Niña events have commonly set records for the warmest La Niña event.
2nd quarter OHC will be out soon. Up or down? Odds are, solidly up.
Re: “better look:”
You should stay up-to-date on PAGES2K and their confirmation of the hockey stick pattern:
(Figure 7: “A global multiproxy database for temperature reconstructions of the Common Era”)
Lol. Waste of time.
Correlation is not causation, but it is evidence supporting a theory. Some people don’t see how adding GHGs causes warming and they are pretty determined about their lack of understanding on this to the extent that graphs showing it are dismissed as just coincidence.
your simplistic beliefs about the climate system are a shortcoming as well…
Your objection is trite, and shows that you likely don’t understand how causal attribution is done in science, including with respect to attributing warming to increased CO2 levels.
This is instead:
1) correlation +
2) well-evidenced mechanism +
3) primacy, where the proposed cause occurs before the effect +
4) robustness of the correlation under multiple tests/conditions +
5) experimental evidence that adding the cause subsequently results in the effect +
6) exclusion of other likely causes +
7) specificity, where the effect has hallmarks of the cause (ex: the observed tropospheric warming paired with stratospheric cooling is a hallmark of greenhouse-gas-induced warming, not warming from solar forcing)
I’ve gone over this in detail here:
“Myth: Attributing Warming to CO2 Involves Fallaciously Inferring Causation from a Mere Correlation”
And see the following:
“Our study unambiguously shows one-way causality between the total Greenhouse Gases and GMTA. Specifically, it is confirmed that the former, especially CO2, are the main causal drivers of the recent warming.”
I found the article difficult to read because it seemed to be written by a PhD, for other PhD’s.
I never went further than a Masters Degree, so it was difficult to read, for me.
I think communication is better when I try to write at a level that a high school student could understand.
I feel that a person who understands a subject well, can communicate it with simple, concise writing that would not be misunderstood.
In my opinion, your article failed to define what a climate model really is:
– A GCM is a very complex version of the consensus personal opinions of government bureaucrat scientists. They have programmed their models so they “predict” rapid warming from CO2, based on their assumption that CO2 is the primary ‘climate controller’, and has a high ECS.
In my opinion, your article failed to reveal how far from reality the “predictions” made by the models have been in the past three decades
(a chart would help a lot):
– Predicted warming double to triple of actual warming,
– Predicted warming at high latitudes happened in the Arctic region,
but not in the Antarctic region, and
– Predicted warming did not happen between the 1998 ENSO temperature peak and the 2015/2016 ENSO temperature peak.
If a so-called “model” represents personal consensus opinions on the physics of climate change, and its predictions (projections / simulations)
are far from reality, then what we really have is a failed prototype model
— not a real model of the climate change process on this planet !
The underlying basic climate physics used for the models must be wrong.
The government bureaucrats who use the models to make wrong predictions of the future climate are not acting as real scientists.
What we need is a new model, based on the correct climate physics,
most likely unknown at the present time, so it can make decent predictions / projections / simulations of the future climate.
In your conclusion you wrote:
“The root of the most significant problem at the climate science-policy interface lies not in the climate models themselves …”
I strongly disagree.
The so-called climate models make wrong predictions, but are still used
by politicians as the primary “scientific evidence” for the fantasy of a coming climate change catastrophe.
A real scientist would not allow politicians to do that with models that make grossly inaccurate predictions.
There is absolutely no scientific evidence that adding CO2 to the atmosphere has caused any harm, and strong scientific evidence that it has ‘greened’ our planet — a large benefit.
The worst case TCS, estimated by assuming all the warming since 1950 was caused by CO2 (with no scientific evidence that any of the warming was caused by CO2), is about +1 degree C. per doubling of CO2 = CO2 is harmless, even with worst case warming assumptions.
The so-called climate models must use an ECS of about +3.0, a level that cannot be justified by real science.
My climate science blog
for non-scientists, with
over 18,000 page views so far:
Oh boy, will I get brickbats for this.
I agree with double sixsixman wholeheartedly.
This is where I diverge from science, and object to it on the grounds that it’s science, disappearing up the backside of science, for science’s sake.
And whilst I understand it’s a discussion document for scientists, each and every citation in this document is probably littered with innumerable other citations, making the discussion completely impossible to make sense of.
One single hypothesis, formed into a peer-reviewed paper, is complicated enough; riven with innumerable contradictions, arguments and competing peer-reviewed papers with their own contradictions and arguments, this call for comment becomes unfathomable. It’s an incitement to argument, not rational debate.
And no, I’m not a scientist, but my belief is that science is here to deliver solutions to mankind, not tie the entire world up in endless, tedious debates over the minutiae that delivers nothing meaningful.
Can we solve climate change? If it’s driven by CO2, no we can’t. We know that, why are we debating it?
If it’s not driven by CO2, and therefore unsolvable because we don’t know what’s driving it, why are we bothering with the science?
Science will not solve the debate. It’s now a political mechanism, not a scientific conclusion. Sceptics were overtaken by politics even before they became sceptics.
For Pete’s sake, the sceptical community really needs to set its own political agenda, deliver its own media messages, stake its own claim against fake science, instead of perpetually fielding a defence.
How about fielding an offence? Assemble a selection of irrefutable messages, even if they’re not irrefutable, for delivery by a competent PR company to combat the politicisation of science.
Do we have a scientific study that has examined the number of credible papers that empirically demonstrate CO2 causes the planet to warm?
Has anyone bothered to recreate the Oregon petition, having learned from its mistakes, which demonstrates that thousands of scientists don’t agree with the underlying tenet that CO2 causes global warming? Has the Oregon petition itself even been revisited after the initial attempts to correct the ‘Spice Girls’ infiltration by the greens?
The 97% meme persists, yet we let the Oregon petition wither on the vine.
We admire the attempts of Anthony Watts, Paul Holmwood, Christopher Booker and James Delingpole to operate in their own vacuum, yet offer them scant support.
The vast majority of voters out there don’t have a degree, many struggled to complete a secondary school education, yet the sceptical scientific community continues to discuss subjects with no meaning to them, in a language they don’t understand, delivered by a means few bother to access.
As a layman, I endeavour to persist, not because I’m clever, but because I’m not. And I can promise you that the reason the alarmists are even ‘credible’ in this debate is because most people don’t want to learn, but simply accept what’s delivered.
So deliver it.
There are sympathetic PR firms out there. There are journalists looking for a scoop, there are TV companies looking for new material, there is a public bored with the same old misery.
Stop navel gazing, please, and deliver what the public wants.
When briefing the Queensland State Premier and Cabinet on complex economic issues, I had to do so in one page. When writing policy papers, I had to include a one-page summary, which was all that the politicians would read. A highly experienced economic policy researcher and adviser told me that I should write the one-pager as if for “the Queensland grandmother,” who doesn’t read newspapers.

The majority in Australia support even higher targets than our already absurd 26-28% emissions reduction (with a rapidly growing population), at the same time as they bemoan severe rises in the price of electricity and increasing blackouts in a once cheap and reliable system. The politicians, who in large part (along with the media) have caused the widespread acceptance of dangerous warming and the emissions-reduction “solution,” know that a rational policy will lose votes. That’s what drives them.

Changing policy will not be done by papers aimed at experts, but by simple, clearly explained and supported messages which will puncture the dangerous-warming belief and demonstrate the huge damage caused by emissions-reduction policies and the consequent reduction in our capacity to deal with a future which we know will surprise us. I am not hopeful.
Yes to much in this string.
To say that clarity is lost in the discussion about risk assessment for climate change is an understatement.
It’s accurate to say that offensive PR has played the largest role in sculpting policy and public opinion on the subject. It’s this same divisive methodology used to ascribe risk in its many defining incarnations.
Society has done less risk assessment, and certainly less mitigation, for certain known catastrophes that are likely much worse than climate change and that are guaranteed to occur, 100%; the only question is when. A catastrophic asteroid collision, for example.
The lack of conclusions surrounding so many open questions about the science makes risk assessment for climate change as malleable as paint on a canvas: paint whatever picture one wants, tease out emotion, coerce any conclusion; it’s really all art craft with so many questions still open.
Science isn’t art, so why can’t we take the paintbrushes away from scientists and pundits? I don’t want them defining risk for anything with so many questions remaining, so much emotion, and especially when there are reputations and money at stake. The largest part of the risk equation I see is personal risk. I fear scientists’ own fears of being branded with scarlet letters more than I fear the unknown risks of climate change; their fears affect more than their own reputations. Ultimately, collective hysteria is its own catastrophic event that could put all of society in grass huts. I state this rhetorically, on a holistic level; I don’t imply Dr. Curry fears “a letter” she’s already received in many circles. But it’s time the emperor is seen as naked, by whoever is brave enough to wade into deadly incoming fire.
Like the Hippocratic oath’s “do no harm,” the least risky policy is adaptation and a reliance on the continuance of the technological revolution, for which there’s no end in sight. I guess if there’s any good news, it’s that the technological revolution isn’t waiting around on policy wonks and artists.
Hotscot, how about this?
“We consume a huge amount of fossil fuels, cut down trees, eat rice and beef, and this causes emissions which warm the planet. But we don’t know very well how much it will warm in the future, this depends on how much fossil fuel remains in the ground which we can afford to extract, how cleanly we burn it, how greenhouse gas concentrations actually increase, and what effect that has on the climate. Unfortunately renewables can’t be used to replace a large portion of the fossil fuels we burn, and nuclear power isn’t popular. To make matters worse the predictions we make about the future tend to be wrong most of the time, and there’s no obvious solution to what may turn out to be a small problem”
And the most important point is that the underlying assumption that global warming is harmful may be wrong. It may be beneficial.
Haha. BREAKING: New Tech Just Unlocked A Trillion Barrels Of Oil
“When asked intractable questions, the temptation is to change the question, slightly, to a tractable question that can be dealt with in terms of probability, rather than face the ambiguity of the original, policy-relevant, question.”
When one is seized by this temptation, it is not long before one’s ego prevents drawing back from one’s heartfelt conclusions. Coincident with such temptations, it seems, comes the bad behavior which seems justified at the time. Give us Stephen Schneider.
I have been told, and do not know this as fact, that climate models do not “work” unless CO2 is added to the paradigms used. Currently, CO2 is added to models, and these model outcomes substantiate CO2’s role in climate change as a control knob.
I am reminded historically that Copernicus made observations of planetary movements that informed his calculations for a heliocentric theory, which at the time was contrary to the geocentric view held by the Vatican and its mathematicians. At the time of the 16th Century, another contrary paradigm was in flower, Protestantism. Galileo Galilei some years after Copernicus death became a heretic for publicly articulating a heliocentric theory.
Current environmentalism and the CO2 control-knob paradigm, gifted to the world as a new religion, seem at odds with the observation that climate change occurs in discrete “steps” rather than as a linear response to forcings, much as the geocentric view was at odds with Copernicus’s calculations. Hence assumptions, and models built upon assumptions that require equilibrium, as in Equilibrium Climate Sensitivity, may be in a position similar to that of the Papal mathematicians prior to Copernicus.
I have returned to moderation after a long hiatus. Is it something I said?
You brought this up:
in the part about fat tails, if I understand what a fat tail is. My link shows expected returns, but it all works for expected costs. One takes each ½-degree bin of the distribution and assigns a probability and a cost to it. They are all weighted and added together. You get the expected return or cost.
Working specifically with expected returns, I’d throw out the high returns, the ones on the extreme upper side in the tail. Why? Because I like my job. If I did the work and said I was a CPA as I did the work, I’d throw them out. They can distort things. Some good Google type success represented by a 2% possibility of happening with me buying my own island in the Pacific is not part of the work I will show you. I’d throw it out.
Now, flipping from returns to costs, it gets complicated. Above, I’d be comfortable throwing stuff out as I did. I am not sure what to do when it’s the costs of global warming. One approach is to work with policymakers and ask them what to do with that, once they understand the thing, which isn’t that complicated.
And I figured out what to do. Disclose. Always disclose. Always disclose that a ton of the costs are caused by very low probability outcomes. Failure to disclose this by a CPA is very bad.
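The expected-cost bookkeeping described above can be sketched in a few lines. The warming bins, probabilities, and dollar costs below are invented purely for illustration, not drawn from any actual damage function:

```python
# Illustrative expected-cost calculation over a skewed outcome distribution.
# All bins, probabilities, and costs are hypothetical.

bins = [  # (warming in °C, probability, cost in $trillion)
    (1.0, 0.20,   1),
    (1.5, 0.30,   3),
    (2.0, 0.25,   6),
    (3.0, 0.15,  15),
    (4.0, 0.08,  40),
    (6.0, 0.02, 200),  # fat-tail outcome: low probability, huge cost
]

# Probability-weighted sum over all bins
expected_cost = sum(p * c for _, p, c in bins)

# The disclosure the commenter recommends: how much of the expected cost
# comes from the single low-probability tail scenario
tail_share = (0.02 * 200) / expected_cost

print(expected_cost)  # weighted sum over all bins
print(tail_share)     # fraction contributed by the 2% scenario
```

With these made-up numbers the 2% scenario contributes roughly a third of the total expected cost, which is exactly why the commenter insists it be disclosed rather than silently averaged in.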
Judith, You say: “Formal uncertainty quantification of computer models is less relevant to science than an assessment of whether the model helps us learn about how the system works.”
I’d be careful here. “Understanding how the system works” is not really science unless it’s quantified in some way. Often these “understandings” are just vague verbal formulations that aren’t very meaningful.
“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” IPCC 2001
“Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation (see ref. 26).” James McWilliams 2007
“Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic.” Julia Slingo and Tim Palmer 2012
There is no doubt that climate models are intrinsically unable to provide deterministic predictions of century scale climate evolution. Nor is the prediction of abrupt change in the Earth system a tractable scientific problem.
“The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change…” Wally Broecker
Yet there remains a certainty – from either intrinsic or anthropogenic triggers – of abrupt change at scales of moments to ages that are potentially extreme and adverse – regardless of whether the change is in temperature, hydrology or biology.
“Research should be undertaken to identify “no-regrets” measures to reduce vulnerabilities and increase adaptive capacity at little or no cost. No-regrets measures may include low-cost steps to: slow climate change; improve climate forecasting; slow biodiversity loss; improve water, land, and air quality; and develop institutions that are more robust to major disruptions.” NAS 2002
Major uncertainty with climate metaphors. More from ESSEX and McKitrick
“Using the word ‘greenhouse’ to describe a process that is not a greenhouse. The metaphor is wrong. Science by metaphor is always a risky business… The greenhouse metaphor is the secret mental model that many retire to when they become overwhelmed by the complexity and definitive uncertainty of the climate science. …Most public discussion of global warming in the past few years has been built on incoherent cliches and misleading metaphors. Here’s an example from a 2002 Environment Canada website. Under the heading “The Earth is a Greenhouse” we read: “As you know, greenhouses use glass to keep the heat in. And just as the glass in a greenhouse holds the sun’s warmth inside, so the atmosphere traps the sun’s heat near the Earth’s surface.” …Official or not, it is bunk that only serves to confuse the public and reinforce the Doctrine of Certainty.
“Of course such problems are endemic. A global temperature statistic is endlessly discussed despite the fact that it is not a temperature. It has little to do with rising sea levels, or the length of glaciers, yet its every movement is said to control these things. Water vapour is the king among infrared gases, yet it is rarely mentioned, even though the behaviour of the other infrared gases cannot be understood unless you can figure out everything due to water vapour first. Carbon is not carbon dioxide.
“There are no controlled experiments in climate; above all, there is no theory for climate. This pretty much crosses everything off the list of things that we could do to treat climate prediction definitively.” Pages 124-130 in TAKEN BY STORM. See the text of these pages below.
True, true, the ‘greenhouse’ analogy, that all life on Earth is like a dog in a car with the windows rolled up, is not accurate. What we’ve learned is that when it comes to metaphor-making, the 2nd law of thermodynamics violates Leftist thinking.
More like insulation than a greenhouse, but a warming effect it is anyway. I don’t think they dispute that part.
My question is, does uncertainty really matter in climate? Sure, we have best guesses on ECS and limited understanding of clouds and of albedo generally, but does that really matter? Effectively the root question surrounding this subject is about energy policy in the end.
So let’s do a little experiment. Let’s take away all CO2 producing energy sources and see where we stand and how we can adapt for policy achievement through total CO2 mitigation.
EIA 2016 total world energy statistics state that basically 80+% of the world energy supply comes from fossil fuels, approximately 5% from nukes, and let’s just call it 15% from what is referred to as renewable energy sources. Within that renewable category, hydro provides a bit over 2%, biomass and waste 10+%, and other renewables (solar, wind, geothermal, tidal combined) just over 1% of the world’s energy needs. Now keep in mind that burning biomass (wood) and burning waste (dung) produces CO2 also and needs to be removed from the equation. So we have now removed about 90% of our global energy production due to potential climate-change threats. How do you replace that for 7 billion-plus people? The answer is you don’t and can’t.
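The arithmetic in that paragraph can be sanity-checked in a few lines. The shares below are the approximate figures as quoted in the comment, not exact EIA values:

```python
# Rough check of the energy-mix shares quoted above (approximate figures
# as given in the comment, not exact EIA statistics).

shares = {
    "fossil": 0.80,
    "nuclear": 0.05,
    "hydro": 0.02,
    "biomass_and_waste": 0.10,
    "other_renewables": 0.01,  # solar, wind, geothermal, tidal combined
}

# Removing every CO2-emitting source: fossil fuels plus burned biomass/waste
co2_emitting = shares["fossil"] + shares["biomass_and_waste"]

print(co2_emitting)  # ≈ 0.90: about 90% of supply would have to be replaced
```

The quoted shares don't sum to exactly 1.0 (they are rounded), but the point survives the rounding: non-emitting sources as listed cover well under 10% of supply.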
The real answer is to develop action plans to rapidly respond and adapt to changes in climate no matter if it ends up being RCP 8.5 or not.
World energy demand has grown by roughly 2% a year for decades. That is not going to change anytime soon, and it is brutally obvious that wind and solar are not going to be anything but a miniscule fraction of the energy supply in the end.
So, do climate models and uncertainty even really matter in the end?
Just my 2 cents
Judith: One problem with Integrated Assessment Models is the choice of an appropriate discount factor for comparing the cost of future adaptation to future mitigation. An economic theorem tells us that the optimum discount rate depends on the time value of money and the rate of future economic growth. This implies that no global consensus on an appropriate discount rate is possible, because different players live in very different worlds.
Rich liberals (especially academics) in developed countries see a world where pollution, depletion of resources and lack of sustainability mean that their descendants are going to struggle to maintain their current standard of living. Economic retreat, not continuing economic growth, demands that citizens pay a very high price to protect their descendants from the horrors of climate change. With expectations of low, no, or negative growth, the appropriate discount rate will mathematically be very low.
Conservatives and the non-affluent see a very different world, where human ingenuity will continue to solve the problems society faces and economic growth will continue. They still believe in the “American Dream” that each generation will be better off than the previous one. With continuing economic growth, it makes far less sense to protect our far more affluent and capable descendants from the cost of climate change.
The developing world sees the last few decades of Chinese economic growth as their model for the future. With annual growth in GDP between 5% and 10%, it makes absolutely no economic sense to invest in mitigation. Chinese policies with respect to horrendous conventional pollution illustrate this problem clearly. Another 1.5 billion Indians (and ? billion others) were willing to pledge nothing more than business-as-usual unless the West was willing to pay their incremental cost to mitigate as they grow.
You could include this “uncertainty in discount rate” among the many uncertainties above. Actually, I’d describe it as a “certainty” to disagree. Such disagreements make international agreements about a global problem far more challenging than we currently pretend.
I’ll try to attach some references later today.
The National Academy of Sciences has issued an interim report on Estimating the Social Cost of Carbon Dioxide; setting an appropriate discount rate is discussed in Chapter 6, where uncertainty in this process is also a topic.
An optimal discount rate (r) was mathematically proven by Ramsey to be given by:
r = δ + η·g,
where δ is the discounting of the utility of future generations or “pure time preference” rate [how much less $1 is worth to you a year from now after correcting for inflation]; η is the change in the value of an additional dollar as society grows wealthier (the absolute value of the “elasticity of marginal utility of consumption”); and g is the growth rate of per capita consumption.
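As a sketch, the Ramsey rule can be computed directly. The parameter values below are hypothetical, chosen only to illustrate how the growth assumption g dominates the result; they are not taken from the NAS report or Table 6-1:

```python
# Ramsey discount rule: r = delta + eta * g, with illustrative parameters.
# All numbers below are hypothetical, for demonstration only.

def ramsey_rate(delta, eta, g):
    """Optimal discount rate: pure time preference (delta) plus the
    elasticity of marginal utility (eta) times per-capita growth (g)."""
    return delta + eta * g

# A low-growth "stagnation" worldview vs a high-growth "developing world" one
r_low_growth = ramsey_rate(delta=0.001, eta=1.5, g=0.01)   # 0.1% + 1.5*1% = 1.6%
r_high_growth = ramsey_rate(delta=0.02, eta=1.5, g=0.07)   # 2% + 1.5*7% = 12.5%

print(r_low_growth, r_high_growth)
```

Even with identical η, moving g from 1% to 7% swings the discount rate by nearly an order of magnitude, which is the commenter's point about why no single consensus rate is possible.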
In Table 6-1, you’ll see the assumptions made by climate economists. They may be appropriate for elite academics from the developed world, but there are obvious reasons they don’t apply to the developing world (and might not be accepted by the less affluent in the developed world – if they understood what was going on.)
The NAS report doesn’t appear to discuss why different groups mathematically or intuitively apply a different discount rate when calculating the SCC. I don’t have any references that do so.
A liberal would not evaluate societal damage solely in terms of GDP which discounts the value of people in developing countries that have less than 10% the GDP per capita of those in the developed world. GDP highly skews the actual effects and is rather a one-dimensional view of the problems with climate change.
IPCC economists are the ones working on the social cost of carbon. Economists (liberal or conservative) think in terms of “utility” which they usually quantify in terms of dollars. To some extent, anything that can’t be converted into dollars is ignored.
Societies and political parties that don’t agree with the assumptions used to calculate the low discount rates in Table 6-1 are unlikely to agree, mathematically or intuitively, with the “consensus” discount rate. They logically shouldn’t be expected to implement the mitigation policies the liberal consensus believes are appropriate.
That’s OK, as long as you don’t conflate the dollar amount with true damage in human cost.
I believe economists have devised the concept of “utility” and “revealed preference” to deal with the philosophical implications of reducing everything to dollars. However, I don’t understand the practical implications of these terms.
Unfortunately we are running out of fossil fuels, therefore a bet on business as usual as projected in rcp8.5 is total nonsense.
In engineering and project management, risk is defined as the probability of occurrence multiplied by the consequence of an event or condition if it occurs. Policy design and implementation is a project, therefore this is the definition that should be used.
Consequence can be measured in units of cost, deaths, work-days lost, years of life lost, etc.
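The engineering definition above can be sketched as a one-liner. The event names and numbers are hypothetical, chosen only to show why a low-probability, high-consequence event can dominate a risk register:

```python
# Minimal sketch of the project-management risk definition quoted above:
# risk = probability of occurrence * consequence if it occurs.
# Event names and figures are hypothetical.

def risk(probability, consequence):
    return probability * consequence

# Consequence can be in any consistent unit: dollars, deaths, work-days lost
flood_risk = risk(0.01, 5_000_000)   # 1% chance of a $5M loss  -> ~$50k
outage_risk = risk(0.30, 100_000)    # 30% chance of a $100k loss -> ~$30k

# A low-probability, high-consequence event can still rank highest
print(flood_risk > outage_risk)  # True
```

In practice risk matrices bucket probability and consequence into bands rather than multiplying point estimates, but the ordering logic is the same.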
“Consequence can be measured in units of cost, deaths, work-days lost, years of life lost, etc.”
Just worth noting in the interests of completeness that it can also be measured in units of benefits. There is always the risk that one might end up better off.
True (I meant cost as both +$ and -$). Perhaps I should have said units of money ($) to be clearer.
I am a professional engineer and environmental scientist who has done risk analysis – and management – on projects to a value of $10 billion. I am yet to find a ‘positive risk’ – a term that is, in plain English, an oxymoron.
Invariably the objective is to identify risk – in the grammatical sense – and to cost effectively mitigate. Perhaps you might like to share your wisdom and experience – rather than playing with definitions and providing links to generic standards – by providing an example of a positive climate risk.
You have stated that there is a risk that we might be better off messing with chaotic systems we don’t understand. Please – elaborate in a rational risk assessment.
Robert I Ellison I am yet to find a ‘positive risk’ – a term that is in plain English an oxymoron.
When Harry Markowitz invented quantitative portfolio management theory, he identified the “risk” of an asset as the variance of the asset price, thus mixing gains and losses together (i.e. “risks” and “benefits” subsumed under “risk”). Unfortunately the language caught on, which led to the invention of “downside risk” to denote what everyone else had previously understood as risk.
So much for history, and what might be called the “economics dialect” of English. For almost everyone else, including you and me, it makes more sense to talk/write of “risk-benefit analysis”
for an intro, try this:
Thank you for that history, showing that the concept of positive and negative (or upside and downside) risk dates from 1952. In project risk management it is also referred to as threats (to achieving the project’s objectives) and opportunities (to improve the project’s outcomes).
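The Markowitz-variance versus downside-risk distinction described above can be sketched numerically. The return series is made up for illustration:

```python
# Markowitz "risk" as variance of returns (gains and losses mixed together)
# vs "downside risk" counting only below-mean outcomes.
# The return series is invented for illustration.

from statistics import mean, pvariance

returns = [0.08, -0.03, 0.12, -0.07, 0.05, 0.01]

# Variance treats an upside surprise exactly like a downside one
markowitz_risk = pvariance(returns)

# Downside semivariance: squared deviations below the mean only
m = mean(returns)
downside_risk = sum(min(r - m, 0.0) ** 2 for r in returns) / len(returns)

print(markowitz_risk, downside_risk)
```

The downside measure is necessarily no larger than the full variance, since it discards the "risk that one might end up better off" that the earlier commenter called an oxymoron.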
Risk is the probability that some undesirable event will occur… By definition.
In practical risk management strategies, probabilities and consequences are more often defined broadly. A low probability, high consequence event may thus be defined as having extreme risk. This seems clearly the climate problem we have.
It may help then to think of it as a desirable event foregone.
The reason why one needs to consider both in risk assessment is that the management will involve deciding between different courses of action, with different costs, benefits and likelihoods on different time horizons. For example, the current policy debate involves trading off forgoing fossil fuel today at a relatively certain high cost (i.e. high risk) vs accepting the uncertain cost of climate change some time in the future.
One thing that makes climate change easier to manage is that it is a progressive risk. The cost of delaying action can be relatively low, it has value (better info on uncertainty and consequences), so it can be less risky than rushing in boots and all and getting it wrong.
In practice this means the optimal response will be to do the low cost high risk reduction stuff early, while delaying action on the high cost items while perhaps investing in reducing their risk (i.e. reducing uncertainty and consequences).
One must determine what the risk is and then decide what to do with that information.
Yes, which is why I distinguished between risk assessment and risk management in what I wrote. If your assessment only considers adverse consequences then you’ll mismanage the risks.
“On the face of it, elevated CO2 boosting the foliage in dry country is good news and could assist forestry and agriculture in such areas; however there will be secondary effects that are likely to influence water availability, the carbon cycle, fire regimes and biodiversity, for example,” Dr Donohue said. https://www.csiro.au/en/News/News-releases/2013/Deserts-greening-from-rising-CO2
I can’t see any definitive ‘negative risk’ – to use the oxymoron – to changing the composition of the atmosphere.
On the other hand there are many high benefit to cost ratio options to reduce risk.
Well Dr Donohue apparently felt these positive consequences warranted further consideration, if only to better understand the countervailing ones. In the next breath he tells us: “Ongoing research is required if we are to fully comprehend the potential extent and severity of such secondary effects.”
Consequences can be negative or positive. If you are an Aussie (quoting CSIRO) go read Australian / New Zealand Standard AS/NZS ISO 31000:2009 Risk Management – Principles and Guidelines.
Dr Donohue’s point is that we don’t know enough about the systems we are meddling with to be definitive. Which is what I said.
Yes I saw that comment above and was surprised no one had introduced you to the people looking at positive consequences on the same projects.
I guess it’s a consequence of specialisation on large projects.
Projects exist because people expect to make profits. The risk is that they won’t. But avoiding discussion of the potential risks or benefits of changing the atmosphere – preferring instead passive-aggressive, empty nonsense – seems to be the take-away message.
And have you ever considered the risk that they under-invest?
Or not your department?
The hard point is the symmetry – for every negative consequence there is a way to reframe it so it is a positive consequence, simply because in the end there is no such thing as a free lunch – for every cost there is a benefit.
That’s why modern risk assessment expects both positive and negative consequences.
Back on climate change, go back and read my initial comment on this thread. There is something called the risk that you over-invest in adaptation/mitigation, which sits alongside the risk of climate change itself. If you don’t assess both you aren’t doing your job.
Policy development and policy implementation are projects, so the project management definition of risk is appropriate. The Project Management Body of Knowledge (PMBOK) definition of risk is:
Risk: An uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives
Free access: http://www.cs.bilkent.edu.tr/~cagatay/cs413/PMBOK.pdf
“On the other hand there are many high benefit to cost ratio options to reduce risk.”
Yep. The tech revolution, in its many facets, is a major one of those options, and it comes at minimal cost; minimal in a relative sense as it pertains to climate change, in that tech’s exponential growth is enterprise’s competitive “default setting” in modern democracies. Increasingly cheap, clean, efficient energy would have been a target for innovation even if climate change had never been heard of. But it’s this, dovetailed with adaptation, that represents the least risk, and thus the preferred policy, IMO.
“On the other hand there are many high benefit to cost ratio options to reduce risk.”
Agreed. Given what we know, by far the highest benefit-to-cost policy is to do nothing to reduce global warming, and to do all we can to reduce the probability and the consequences of global cooling.
This is about uncertainty that is applicable for policy making. Risk is probability x consequence of an event or condition. The consequences of global warming may be beneficial or detrimental. The probabilities are required for the spectrum of consequences. In this case the most likely overall consequence could be that global warming is beneficial. This possibility should not be excluded by using an inappropriate definition of risk for risk analysis.
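A minimal numerical sketch of that definition, with probabilities and consequence values that are entirely invented for illustration (the sign of the result depends wholly on the inputs, which is the point being argued):

```python
# Expected consequence over a spectrum of outcomes, where consequences may be
# beneficial (positive) or detrimental (negative). All numbers are invented.

scenarios = [
    # (probability, consequence; + = net benefit, - = net damage, arbitrary units)
    (0.3, +2.0),   # mild warming, net beneficial
    (0.5, -1.0),   # moderate warming, net detrimental
    (0.2, -4.0),   # strong warming, strongly detrimental
]

# Probabilities over the full spectrum must sum to one.
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9

# Risk assessment that truncates the positive branch would overstate the loss.
expected_consequence = sum(p * c for p, c in scenarios)
print(expected_consequence)
```

Dropping the positive (beneficial) scenarios from the list changes the expected consequence, which is why a definition of risk that admits only losses biases the analysis.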
Policy development includes defining the policy objective. The policy objective could be, for example:
• Reduce GHG emissions to a specified target by a specified date
• Keep CO2 concentrations below a specified target by a specified date
• Keep global warming below a specified target by a specified date
• Maximise human wellbeing and environmental outcomes through the 21st century.
• Maximise economic growth and environmental outcomes through the 21st century.
The last two may be the best policy objectives (of these five examples). They allow for the possibility that global warming and GHG emissions may be beneficial, not detrimental, overall. Whereas the first three exclude that possibility.
In 1990, in preparation for the 1992 UN ‘Rio Earth Summit’ “The Australian Government adopted an Interim Planning Target to stabilise greenhouse gas emission at 1988 levels by 2000 and to reduce emissions by 20 per cent by the year 2005 based on 1988 levels (known as the Toronto target). An important caveat was included in this target. This stated that measures which would have net adverse economic impacts nationally or on Australia’s trade competitiveness would not be implemented in the absence of similar action by major greenhouse gas producing nations. Actions would be taken if benefits were realised in addition to the greenhouse gas emission reduction benefits, for example energy conservation. This became known as the ‘no regrets’ strategy.” https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/Publications_Archive/Background_Papers/bp9798/98bp04
This caveat is good policy. It should have been included in UNFCCC policy and should have been central to the IPCC’s mission (but with a long-term time frame). If that had been the case, the IPCC would have had to take a very different approach from the one it has taken since it was established.
Robert, an example of “positive risk” could be a two-year project delay that defers startup until commodity prices rebound. For example, about 15 years ago BP’s Thunder Horse platform was found to have serious design flaws; this led to several years’ delay, but when the project came on line oil prices were a lot higher.
This is unusual in the sense that a poor outcome was almost offset by market swings. A more common example of a positive risk outcome would be Trump winning the election and allowing US oil exports.
In the case of climate change risk for policy analysis, benefits/opportunities (positive risks) include:
1. that global warming is beneficial – i.e. increases global economic growth and the overall productivity of ecosystems
2. that increasing GHG concentrations delay the next abrupt cooling event and reduce its magnitude, thereby reducing the economic damages
While Judith can reasonably claim that mitigation policy is outside her scope I think the paper should give it a brief mention. The reason is that its uncertainty and risk are at least as high as with climate but at the moment are pretty much ignored.
Here’s the problem. Energy emissions from use of fossil fuels are a major culprit in climate change. Renewable energy is widely adopted as the primary remedy. All that’s needed are the policy settings to assure its expansion. Solar and wind energy are the growth areas since hydroelectricity, dependent on geographical factors, is considered to be near its limit.
The risks and uncertainties that concern me reside in common assumptions about growth of low-emissions energy systems. The most popular assumption is that growth will mainly come from solar and wind energy. These now provide about 6% of global electricity production. There are other possibilities such as carbon capture and storage from fossil fuels, geothermal and nuclear but they are not prominent in energy policy. All these ‘clean energy’ systems will produce electrical energy. Depending on the assumptions used, the world will require several times its present electrical output to sustain future prosperity.
To my mind there are two main kinds of risk and uncertainty in that scenario.
Firstly the extrapolation is very long. Solar and wind will grow from their present role as supplements to being the dominant energy sources. Any such extrapolation is inherently uncertain. In addition it embodies even longer extrapolations concerning electrical energy storage technologies like batteries or pumped hydro technologies that are widely regarded as essential to stabilise intermittent solar and wind energy.
Secondly, as electricity will become the primary form of energy supply all or most industrial technologies will need to be ‘electrified’. Technologies to use electricity to produce high-specific energy liquid fuels for transportation will probably be needed. From a risk perspective all such technologies should be regarded as new and hence uncertain, especially at the scale required.
I’m not suggesting that such a popularly envisaged clean energy future is impossible. However it contains risks and uncertainties that must not be forgotten.
Wind and solar barely scratch the surface of the emission equation. Neither does Tom.
Easy read. Three minor issues. Third para ” policyis” needs space. Same issue with last para before “conclusions”.
“How the climate system actually behaves” might be better than “how the climate system actually is.” In “Developing scenarios ..” First para.
Please see my web site at: https://www.climateauditor.com for a statistical analysis of atmospheric temperature/CO2 concentration data assigning probabilities to hypotheses.
The first plot should use the natural log of CO2 concentration. I’m not sure if you are able to get data, but it would be really interesting to see total column water vapor from 20 degrees North and South latitude to add a dash of spice to that graph.
fernandoleanme, the range of CO2 values is so small (334.67 to 407.7 ppm) that it makes little visual difference whether the scale is linear or logarithmic. It certainly does not show the range of variation evident in the temperature graph, which is what we should see if there were a causal relationship between the two.
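A quick check of this point, using the quoted CO2 range:

```python
import math

# Over 334.67-407.7 ppm the relative change is only ~22%, so a log scale is
# visually almost indistinguishable from a linear one: for small x,
# ln(1 + x) is close to x, and the curvature is negligible over this span.

lo, hi = 334.67, 407.7
linear_span = hi / lo - 1.0      # relative change on a linear scale (~0.218)
log_span = math.log(hi / lo)     # span in natural-log units (~0.197)

print(linear_span, log_span)
```

The two spans differ by less than about ten percent, which is why the choice of axis scale barely changes the appearance of the plot.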
I have not had the opportunity to look at water vapour concentration. However it is good to know that you took the trouble to look at the results from my data analysis.
Thanks Dr. Curry. Yes there are many uncertainties when it comes to climate change. I do not believe that CO2 is a huge climate influencer at all. Take a read through my blog https://iceage2050.com/ and you will see some similar things to what is on this website
Type B (Systematic etc.) Errors
I encourage adding discussion of Type B (systematic etc.) uncertainty vs Type A (statistical) uncertainty. The IPCC and climate scientists almost universally ignore the BIPM international standard on uncertainty GUM JCGM 100 2008 E.
John Christy (2016) shows the warming predicted for the Tropical Tropospheric Temperature (T3) is about 250% of the actual warming since ~1980 by balloon and satellite measurement. Christy (2017) finds “the mean model trend is highly significantly different from the observations” at the 1% level.
The spread of climate model runs shows huge Type A errors. The difference between the model mean and the measurement mean indicates a huge Type B error that is even higher than the very high Type A errors. Either error invalidates the IPCC’s confidence. Both together are devastating and invalidate the IPCC’s results.
See BIPM’s Guide for the Expression of Uncertainty
Barry N. Taylor and Chris E. Kuyatt, Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST Technical Note 1297, 1994 Edition
U.S. House Committee on Science, Space & Technology 2 Feb 2016, Testimony of John R. Christy
House Committee on Science, Space and Technology U.S. House Committee on Science, Space & Technology, 29 Mar 2017 Testimony of John R. Christy
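The GUM combination that this comment appeals to can be sketched as follows; the numerical values here are invented for illustration only, not taken from Christy's testimony or the GUM itself:

```python
import math

# GUM (JCGM 100:2008) combination of Type A (statistical) and Type B
# (systematic/other) standard uncertainties for uncorrelated components:
# the combined standard uncertainty is the root-sum-of-squares.

def combined_standard_uncertainty(u_type_a, u_type_b):
    """Root-sum-of-squares of uncorrelated standard uncertainties."""
    return math.sqrt(u_type_a**2 + u_type_b**2)

u_a = 0.10   # illustrative: spread of repeated runs/measurements (K/decade)
u_b = 0.15   # illustrative: estimated systematic component (K/decade)

u_c = combined_standard_uncertainty(u_a, u_b)
print(u_c)
```

The practical point is that when the Type B component dominates, no amount of averaging (which only shrinks the Type A part) narrows the combined uncertainty.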
“The root of the most significant problem at the climate science-policy interface lies not in the climate models themselves but in the way in which they are used to guide policy making.”
Policy has always driven the models, critically, with the magic words ‘internal climate variability’. Probably by Sir John Houghton and onward. What remains is a model that is great for attributing warming to rising greenhouse gases, but which abandons the means to tell us something useful about natural climate change, which is what we actually need to know about. As Hubert Lamb appreciated.
“most of the investment in global climate models is motivated by the needs of policy makers”
Longevity of the Golden Goose.
To be relevant for policy making, the uncertainty must be of the economic impacts, not of projected temperature changes, nor of projected 21st century climate.
“Research scientists focus on the knowledge frontier, where doubt and uncertainty are inherent. Formal uncertainty quantification of computer models is less relevant to science than an assessment of whether the model helps us learn about how the system works.”
1. too generic, not everyone works on the frontier.
2. too generic about models, take models of radiative transfer as an example where the uncertainty in estimates is important.
“Risk is the probability that some undesirable event will occur, and often describes the combination of that probability and the corresponding consequence of the event. Economists have a specific definition of risk and uncertainty that harkens back to Knight (1921). Knightian risk denotes the calculable and thus controllable part of what is unknowable, implying thatrobust probability information is available about future outcomes. Knightian uncertainty addresses what is incalculable and uncontrollable.”
1. A little thin on the treatment of risk
2. Needs a quote from Knight
3. Are there other views of Risk? if so what
try to improve on wikipedia
“Risk and uncertainty
In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty.
… Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term “risk,” as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. … The essential fact is that “risk” means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. … It will appear that a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We … accordingly restrict the term “uncertainty” to cases of the non-quantitive type.
Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.
Another distinction between risk and uncertainty is proposed by Douglas Hubbard:
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The “true” outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: “There is a 60% chance this market will double in five years”
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: “There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs”.
In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there are more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.”
It is interesting that the examinations of “risk” and “uncertainty” that are the most appealing come from investment. Because “risk” is both the consequence of action and lack of action in the world of investment.
“There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs”.
And, of course, a 100% chance of bankruptcy if an oil company never drills for oil. Economics understands that risk must be undertaken, it’s too risky not to.
The risks of implementing climate policy are measurable with some uncertainty. The level of risk of some of them – such as 100% renewables plans – are getting higher the more we understand them. The risk of AGW is also measurable with some uncertainty. This level of risk has been declining the more we understand it (ECS).
What should politicians do for risk management when the cost of action is rising and the cost of inaction is real but declining? Many answers, but the least appealing is the most expensive form of action.
“While climate models continue to be used by climate scientists to increase understanding about how the climate system works, most of the investment in global climate models is motivated by the needs of policy makers.”
needs a citation.
quoting actual dollars would be good
What could be more revealing of, e.g., ‘Total ignorance: the deepest level of uncertainty,’ than the fact that we do not know enough to quantify clouds? Nic Lewis understood that the global warming alarmists’ treatment of clouds, for example, is wholly inaccurate. Lewis essentially says it’s mathematically impossible to quantify the actual effect of clouds. To fit them into the scheme of things, GCMs consider only ghosts of clouds as they edge in and peep over artificial thresholds before hitting imaginary ceilings and dissolving on the outskirts of invented mathematical grids.
The IPCC uses “high confidence” “medium confidence” in their statements on equilibrium climate sensitivity. The section Menton references is a footnote in regards to the temperature increase for a doubling of CO2. It is an improved understanding that only bolsters CO2 as primary forcing.
You don’t throw away the understanding that CO2 is the primary forcing just because the feedbacks are uncertain. If water vapor is a greater feedback than we thought, then we’re in for even more warming. If less, then CO2 is a more powerful forcing. Neither scenario changes the fact that the earth is warming and that CO2 is a greenhouse gas.
The warming has to come from somewhere, and with less solar irradiance there must be quite a scramble for people to find where the heat is coming from if not greenhouse gasses.
“The warming has to come from somewhere.” If one takes all of the warming to date and assumes it is all from CO2, then from Lewis and Curry the ECS is about 1.5 to 1.7. So if as you say “CO2 is the primary forcing”, it is still not very effective.
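For what it's worth, the energy-budget arithmetic behind that kind of estimate can be sketched as follows. The input values here are round illustrative numbers of my own choosing, not the published Lewis and Curry inputs:

```python
# Hedged sketch of an energy-budget ECS estimate of the Lewis-and-Curry style.
# ECS = F_2x * dT / (dF - dQ): the warming per CO2 doubling implied by
# observed warming, forcing change, and planetary heat uptake.

F_2X = 3.7   # W/m^2, canonical forcing for a doubling of CO2
dT = 0.8     # K, illustrative observed warming over the period
dF = 2.3     # W/m^2, illustrative change in total forcing
dQ = 0.6     # W/m^2, illustrative change in planetary heat uptake

ecs = F_2X * dT / (dF - dQ)
print(round(ecs, 2))
```

With round numbers of this order the method lands in the ~1.5-1.8 range quoted in the comment; the result is sensitive to the forcing and heat-uptake inputs, which carry their own uncertainties.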
” If one takes all of the warming to date and assumes it is all from CO2″
So don’t assume that. We know there are feedbacks. I guess if you think the most effective, i.e. largest forcing, is not effective enough for you, then what is? Is there another forcing that exceeds CO2?
Apparently CO2 is very effective.
So far the answer is that CO2 is the primary forcing. All forcings are going to have feedbacks. Since no other forcing, natural or man-made, has been shown to exceed CO2, I wonder what could possibly knock CO2 off its top status?
Scott Koontz: So far the answer is that CO2 is the primary forcing.
The CO2 induced “warming effect” only occurs when the Earth is at the right temperature to radiate heavily in the CO2 absorption/radiation band. Why is this called a “forcing” rather than a “feedback”?
The only forcing is sunlight.
More precisely, the CO2 change is the primary forcing change. At 2 W/m2 and counting it exceeds any 11-year-cycle solar changes by an order of magnitude for example.
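That figure can be checked against the standard simplified forcing expression. The pre-industrial baseline used here (roughly 278 ppm) is an assumption on my part; the 407.7 ppm value is the one quoted earlier in the thread:

```python
import math

# Simplified CO2 forcing expression dF = 5.35 * ln(C / C0) (Myhre et al. 1998).

C0 = 278.0   # ppm, approximate pre-industrial concentration (assumed)
C = 407.7    # ppm, recent concentration quoted earlier in the thread

dF = 5.35 * math.log(C / C0)
print(round(dF, 2))  # W/m^2

# For comparison, the 11-year solar-cycle variation in forcing is roughly
# 0.1-0.2 W/m^2, i.e. about an order of magnitude smaller.
```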
Jim D: the CO2 change is the primary forcing change.
When did CO2 stop being insulation, or was that somebody else?
“CO2 is the primary forcing”
I believe there are a lot of climate scientists that disagree.
You said in another blog comment that “Judith Curry believes” that.
I’d like to get see if she agrees.
Perhaps Pielke has not seen this. Most would call this dominant forcing.
The source is IPCC, if you believe it. It’s just CO2 and epicycles.
Donn, yes, it is the very public IPCC values, which is why it is so surprising that Pielke would be so unaware of it. However, that link he has is from 2009, so he probably thinks different by now if he is paying attention.
Leftist indulgence of what Bob Carter calls “computer gamesmanship” is mute testimony to the existence of government-sponsored science authoritarianism – the Kafkaesque machinations of the climate machine show how far they’re willing to go to enforce their beliefs on the rest of us.
The first issue with uncertainty and risk is with adequate, meaningful data.
We do not have this at this stage. We need data covering at least 60 years* with reliable collections of temperature, air mass and composition; ocean salinity, density and volume, and an understanding of both pressure and current effects at the surface and at all depths; variations in atmospheric density, composition, fires, volcanoes and winds; also volcanic effects, tsunami effects, and anthropogenic CO2 and emission effects; and lastly the dynamics of planetary movement and of solar heat, magnetism and plasma effects.
We have some. We have an idea of what others we need to have. We have a starting point, but we do not have an understanding of what the natural variations can be* and how often they can occur, hence our risk assessment has a high inbuilt level of uncertainty.
People like looking for their lost keys under the streetlight because at least there they can see. Similarly, climate agnostics like to look at recent data and experience because that is what they know and are comfortable with. Secondly, they like familiarity: normal is what one was born into and grew up with. But climate is much more unpredictable than what we have experienced in the last 20 to 70 years. My apologies to those too young (not enough database) and those too old (set in their ways).
Climate is the Grapes of Wrath years in America. In Australia we had our settlers in South Australia who grew wheat and sheep successfully for 40 years, only for the true drought and heat conditions of that part of the world to return and wipe them out. History is replete with famines and feasts [the Bible], the deserts of the ancient Egyptian food bowls, Little Ice Ages, Roman warm periods.
The simplest, most elegant example of complete misunderstanding of the risks has happened before our eyes with hardly a whimper from anyone involved: global sea ice extent. Here we have an example of levels dropping from the 1970s satellite baseline, then a recovery, driven by the Antarctic, going up to and over 2 SD for several years. In fact, roughly 4 years ago we had, for one month, the highest sea ice extent on record. Just 4 years ago.
Then we had a fall in both Arctic and Antarctic extent together in the last 3 years, giving a 5-7 SD drop to the lowest figures recorded, now thankfully improving. I cannot emphasize this enough: a 7 SD difference is immense, mind-boggling, and in terms of risk it is either immensely significant or a sign that something is immensely wrong with our understanding of the real standard deviations. The latter is the reality. We can have climate and weather fluctuations on timescales of years that are far larger than what we currently cater for. This means that people who want to see risk can find it anywhere if they are prepared to lie to themselves. Highest temperature in all of recorded time [3 years] for the hill at the back of our block in Tennessee, for one hour on July 13th? Shame about the bushfire. But the same goes for people who desperately wish to see green shoots of recovery anywhere and pick the opposite, cooling examples.
Things are not made any better by specious, sometimes mendacious commentary on the different measuring systems available when they disagree with one’s own precious views. Or by using anomaly measurements and making backwards adjustments to real temperature measurements.
Disclaimer: CO2 is real. CO2 by itself can make temperatures go up, and it is a small but important balanced component of our atmosphere, hydrosphere and lithosphere. Some people like to argue that it is just atmospheric composition and gravity. Cart before the horse? No, it is both cart and horse. If the CO2 goes up at a certain gravity and pressure, then of course the temperature goes up as well. Both sides would do well to take a chill pill.
But CO2 is only one component of the complicated carbon pathway for the atmosphere, and given the high carbonic content of the sea, the immense heat capacity of the oceans (which both give and take heat), and vegetation growth on land and sea, the responses to CO2 addition are unclear and unknown.
While chaos is an unknown, organisation is a given. The immense ocean buffering, the nature of our rocks and thin water envelope, and photosynthesis at this distance from the sun mean that, for human lifetime experiences, individually and collectively, there is minimal risk of anything other than the gradual, massive, slow changes the earth has had for 2 of its 4-billion-plus years. This does not mean that on a decadal or centennial scale we cannot have changes that seem extreme to us, particularly when we panic.
Those with glasses half full will not agree that “Risk is the probability that some undesirable event will occur.” We also take risks to try for some most desirable events. (My wife, my coming lunch with my secretary, etc.) Geoff
“Knightian risk denotes the calculable and thus controllable part of what is unknowable, implying that [ insert space ] robust probability information is available about future outcomes. Knightian uncertainty addresses what is incalculable and uncontrollable.”
The second sentence contradicts the first. I think you mean something like
Knightian uncertainty addresses fluctuations due to finite sample size.
Otherwise interesting article.
When it’s clear that the earth is warming and CO2 is the primary forcing, why not plan for the future?
“most of the investment in global climate models is motivated by the needs of policy makers.”
Very strange statement. For all of those in a science field, the motivation is fine-tuning the science. Unless, of course, you are paid by the organizations that create the problem, then your motivation is clear.
““most of the investment in global climate models is motivated by the needs of policy makers.”
Very strange statement.”
To the extent you reject that the reason for the models is policy, I suggest that amount of climate scientists leave the field and study origami instead. It would free up the news cycle some.
“…fine-tuning the science.”
The ECS is from 1.5 to 4.5 C with 66% certainty. A little more tuning and we’ll have it.
“To the extent you reject that the reason for the models is policy, I suggest that amount of climate scientists leave the field and study origami instead. It would free up the news cycle some.”
You clearly are not working in a science field. Ask a scientist why they do what they do. Maybe you’ll learn something.
At least you admit the earth is warming, and that CO2 is the primary forcing. That’s most of the battle.
We could read what they write. Because of climate change, since climate change. Who cares if Greenland melts or if we get 3 meters of SLR? Tipping points. Extreme weather. Hurricane Sandy.
Almost every scientific organization mentions climate change front and center. My son is doing low-level work on thin materials in high-heat environments. There’s a big grant from our government supporting this. I have to think there is a fair amount of hope for a practical use. Same with drug research.
Since his work deals with heat, you’re still claiming that the science behind that work and/or the results are based on policy?
“The NSF was established by the National Science Foundation Act of 1950. Its stated mission is “To promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense.” The NSF’s scope has expanded over the years to include many areas that were not in its initial portfolio, including the social and behavioral sciences, engineering, and science and mathematics education. The NSF is the only U.S. federal agency with a mandate to support all non-medical fields of research.”
It seems to be funded by the Federal government.
GCMs are so expensive there are not that many of them. Using the NSF as a model, what do we get for that money?
It is a policy to: “To promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense.”
Thin materials inside a jet engine, a rocket engine or the pipes of a power plant perhaps to measure something, may have value beyond saying we did it. Somehow mixed up in this is the idea of value.
You hint at a point. Why did Karl do what he did? Why does SkS do what it does? James Hansen.
Would like to see this at the beginning of your article as an executive summary. Your writing is always accurate Judith but, depending on the target audience, sometimes a little dense for the average reader.
“Current predictions about future climate change are full of ambiguity and deep uncertainty. The models used to make these predictions have probabilistic limits that are real and strictly limit their accuracy. Individuals and institutions need to guard against using such predictions to promote actions that they desire to see occur, as opposed to actions that are actually required and justified. Policy changes have costs. In every case, the true cost of promoted actions should be compared to the costs of doing nothing.”
“Any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system.” (see Adolf Grünbaum 1962).
Dr Curry’s draft references some consequences of the IPCC AR5 footnote that “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” She describes one consequence as “—we simply do not have grounds for formulating a precise probability distribution. Without precise probability distributions, no expected utility calculation is possible.”
In climate modelling, there are several other critical relationships besides ECS that are missing an accepted mathematical equation or an acceptable error estimate:
1. It is hard to find an equation linking ocean level change to global temperature change. See https://link.springer.com/article/10.1007/s10712-016-9387-x
2. The fundamental balance between incoming and outgoing radiation at the top of the atmosphere has severe measurement problems requiring subjective adjustments; study http://www.geoffstuff.com/toa_problem.jpg
3. Satellite-derived ocean levels w.r.t. earth’s centre are complicated by large measurement errors in the component parts of the final equation, some errors being an order of magnitude greater than those of the final product.
4. With ocean level change, Munk’s Enigma needs more work. https://doi.org/10.1073/pnas.092704599
5. While measurement of ocean properties seems to have been improved by Argo floats, some critical parameters involve very small changes close to the limit of measurement of the devices in controlled conditions, plus some subjective adjustment. https://earthobservatory.nasa.gov/Features/OceanCooling
6. As for 5., ditto for satellite MSU temperatures. Christy, J.R., R.W. Spencer, W.B. Norris, W.D. Braswell and D.E. Parker, 2003: Error Estimates of Version 5.0 of MSU-AMSU Bulk Atmospheric Temperatures. Journal of Atmospheric and Oceanic Technology, 20, pp. 613-629.
7. While models commonly treat volcanic ejecta as a cooling agent for the atmosphere, this is not always the case if both Eschenbach and the common global land temperature indices like BEST are correct. https://wattsupwiththat.com/2018/06/25/stacking-up-volcanoes/
8. There could well be more examples. I have chosen enough examples to make a general point.
Keep in mind the WG1 statement in TAR that “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future exact climate states is not possible”. This alone might not be enough to call for a cessation of more climate modelling. However, some of the above eight one-sentence summaries do have the ability to kill modelling in its present form. Just one would caution care. Eight of them running simultaneously should have rung alarm bells long ago.
Some of the TAR authors seem to have realised some implications of dubious fundamental data and adjustments – see for example http://www.ipcc.ch/ipccreports/tar/vol4/index.php?idp=106
However, instead of starting urgent and effective inquiries to see if remediation was possible, the IPCC adopted the completely unacceptable policy of essentially disregarding these danger signals.
This brings us back to RISK. Can we postulate that the biggest risk in current climate modelling sensu lato is indifference of participants to gross errors and uncertainties?
Question: What type of failure, what demonstration of future difficulties, will be enough to call a pause to the effort of the modellers and their persistent failure to deliver? Is it not time to pause and go back to Square One? Geoff.
Not sure if it fits exactly into this essay, but the general idea of creating probabilities from ensembles is an absurdity in the climate modelling arena, because one of the fundamental sources of error, numerical integration error, is highly correlated, and anything BUT independent, from model to model.
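The correlation point can be illustrated with a toy simulation (my own sketch, not the commenter's; the sample sizes and the two extreme cases are invented purely for illustration): averaging twenty models whose errors are independent shrinks the variance of the ensemble mean by roughly a factor of twenty, while averaging twenty models that share the same error shrinks it not at all.

```python
import random

# Toy illustration (invented numbers): the ensemble mean of twenty model
# errors behaves very differently depending on whether the errors are
# independent or shared (perfectly correlated) across models.
random.seed(0)
N_MODELS, N_TRIALS = 20, 20000

def ensemble_mean_var(shared_fraction):
    """Variance of the mean of N_MODELS unit-variance errors, where each
    error mixes a component shared by all models with an independent one."""
    means = []
    for _ in range(N_TRIALS):
        shared = random.gauss(0, 1)
        errs = [shared_fraction * shared
                + (1 - shared_fraction) * random.gauss(0, 1)
                for _ in range(N_MODELS)]
        means.append(sum(errs) / N_MODELS)
    mu = sum(means) / N_TRIALS
    return sum((x - mu) ** 2 for x in means) / N_TRIALS

var_indep = ensemble_mean_var(0.0)   # independent errors: variance shrinks ~ 1/N
var_shared = ensemble_mean_var(1.0)  # fully shared error: averaging buys nothing
print(var_indep, var_shared)
```

With independent errors the variance of the mean comes out near 1/20; with a fully shared error it stays near 1, which is the commenter's point about correlated integration error.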
Certainty is the measure of the confidence one has in the validity of an outcome. The concept has no mathematical or scientific grounding, since it encompasses the famous known unknowns and the ominous unknown unknowns, which have no dimensions. Lacking experimental comparison and validation, climate science is stuck with such uncertainties.
If a model has a high level of uncertainty it cannot be used to frame the probability of any outcome, or, combined with potential damages, to produce risk assessments.
Despite this, a large part of the climate science community tries to validate the results of uncertain models by a kind of tautology, using model ensembles to address model variability and deriving from this policy-relevant quantities such as ECS. They feel justified because they want to believe they have chosen the right and complete scope.
Policy makers accept such projections wholeheartedly, as they bring grist to their mill.
The scientific community, in particular those advocating a mainstream view (the famous 97%), has to discipline itself and to object to the use of uncertainties to shape policies.
With its pseudo-scientific wording of high or low confidence or extremely unlikely or likely, the IPCC has willfully and faithfully added to the confusion, and contributed to the diversion of science for political purposes.
I liken the rush to serve policymakers to “the tail wagging the dog”. This is one man’s opinion! Keep up the good work, Dr. Curry. When can I expect to see you on Tucker Carlson’s show again? Thanks for the opportunity to respond.
Stephen C. Pennisi PhD DABT
Not so sure about the lead-off in your conclusion “… most of the investment … is motivated by the needs of policy makers.” Seems to me most of the investment is motivated by the needs of green energy advocates. Suggest changing it to “… is motivated by special interest groups.” as a general statement that is universally true.
From a purely mathematics standpoint, the models are not fit for stated purpose as the non-linear partial differential equations cannot provide meaningful solutions, particularly when considering the vast time scales and spaces involved.
As vast sums of money are being spent on essentially esoteric “angels-on-the-head-of-a-pin” exercises, it is clear that the motivational drivers are definitely not the needs of policy makers, but rather more crass motivations involving lining the pockets of special interest groups of one kind or the other.
Haha, Mann was certainly correct that uncertainty is not your friend, but he didn’t seem to consider sign uncertainty. Here’s a fun question: if you had to choose, which of these conditions would you rather see your civilization experience?
You do realize that the two definitions you give are at odds? Have you ever done any practical risk management on a $10 billion project? Frankly the idea of negative and positive risk takes the meaning of risk into new postmodern territory. And you miss the point as usual.
There is a low probability of extreme and adverse Earth system change – temperature, hydrology or biology – whether intrinsic or anthropogenic – that is an extreme negative risk. To use the tautology.
Hmmmm… that was supposed to be a reply to Peter Lang.
Typical alarmist response. Resorts to derision as arguing tactic, as usual.
And, cannot allow the possibility that global warming is beneficial.
Apparently is unaware of the PMBOK and the definition of project risk. The ISO project management standard is based on the PMBOK as is the Project Management Professional registration.
A comparison of ISO21500 and PMBOK says:
That should be pretty clear to most. Project risks can be both positive and negative. It is just as important to manage a project to maximise opportunities for improved outcomes as it is to minimise the probability and consequences of negative outcomes.
And yet his original definition talked about adverse outcomes. Still not the slightest concept of low probability and high consequence events in the Earth system. He talks airily about definitions – but there is no hint of real risk in the climate system.
Where are the graphs showing natural climate change so we can compare them against the claimed warming?
For some reason they are never produced.
That’s telling. If they were, climate alarmists would be claiming that natural change has stopped, and then they would have to explain that. Very hard to do.
Instead they just present temperatures and claim its all anthropogenic.
LB: “Where are the graphs showing natural climate change so we can compare them against the claimed warming?
For some reason they are never produced.”
They are produced all the time.
There are graphs of natural factors, but they are model results. Since natural factors no longer act alone, we can’t measure them directly.
Let’s see how we do this. We adjust the models to mimic the past while controlling the CO2 gain. Then we hold CO2 constant, and what remains is the natural factors.
Since the only way we get the models to work is with CO2 gain, that’s how it works, and you need to come up with something better. And the great thing is, it uses one variable to predict one variable in one place: the surface.
“So how much wiggle room do we have in our models, assuming we have decided what the key processes are? In a model with, conservatively, 20 important and uncertain processes, each associated with a single uncertain parameter, the model outputs can be sampled from 20-dimensional space. This is a hypercube with around a million corners.”
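The arithmetic in the quoted passage is easy to check: a hypercube in d dimensions has 2**d corners, so twenty uncertain parameters, each taken at just its low and high extremes, already give over a million combinations.

```python
from itertools import product

# Checking the quoted arithmetic: d uncertain parameters, each taken at just
# its low and high extremes, define a d-dimensional hypercube with 2**d corners.
d = 20
corners = 2 ** d
print(corners)  # 1048576, i.e. "around a million corners"

# Sanity check by explicit enumeration on a small cube (d = 5).
assert len(list(product((0, 1), repeat=5))) == 2 ** 5
```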
So when we graph both constant CO2 and the CO2 we actually got, I want to see the other variables. Because I don’t care if you only got the GMST right; you need to get most everything right. And when one says climate, does one really mean climate, or just the GMST?
It gets the stratosphere right too, and the physics of GHGs, so that’s a bonus.
I suggest we move from 2 variable presentations of results to 4 variable presentations.
Ragnaar, what are your variables apart from CO2? The IPCC already has the sun, volcanoes, other GHGs, aerosols, and various other smaller ones. What else do you have? Do you think a big forcing term is missing?
I am off on a different path:
Ocean Surface Heat Flux
Sea Surface Salinity
Sea Surface Temperature
One can look at the link at Atmosphere, Land and Oceans. Which is to say getting the GMST or something right is simple compared to getting a majority of the things right.
We’ve heard the GCMs bollix up the Arctic. I want them to get the GMST right and the Arctic right. It is going to collapse and raise sea levels, so getting the GMST right is worth what? Getting the GMST right does not allow one to pass Go or collect $200. If this is too mean, think of what that accomplishment would actually mean.
Model uncertainty evolves from imprecisely known parameters. It is not error as such but an irreducible imprecision emerging from the core equations.
There are other approaches including fine scale ‘process level’ modelling.
Ultimately – initialized decadal scale forecasts may be feasible – requiring vastly more computing power and a much better understanding of intrinsic variability.
But the failure of models does not rationally equate to negligible risk in the climate system.
“Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.” NAS 2002
“Model uncertainty evolves from imprecisely known parameters. It is not error as such but an irreducible imprecision emerging from the core equations.”
Model error results from people who do not understand climate being trusted to model climate. Over and over, they tell us past, present and future climate is and will be chaotic, yet they claim to understand it well enough to build proper models. It is error, as such, and much, much worse.
“The most common ways to evaluate a climate model are to assess how well model results fit observation-based data (empirical accuracy) and how well they agree with other models or model versions (robustness) (e.g. Flato et al. 2013).”
Okay so Hawkins has shown the models are running hot. Should both the upper and lower trend lines be lowered?
“Parker (2011) has argued that robustness does not objectively increase confidence in simulations of future climate change.”
If this is true, then the models are useless for prediction or for policy considerations.
“Baumberger et al. (2017) address the challenge of building confidence in future climate model predictions through a combination of empirical accuracy, robustness and coherence with background knowledge.”
What would it take to get the models rebuilt with empirical accuracy, robustness and coherence? I can’t imagine the IPCC even attempting this, so what we are stuck with is like Parker’s case, except without robustness ever having been attempted.
“Baumberger et al. acknowledge that the role of coherence with background knowledge is limited because of empirical parameterizations and the epistemic opacity of complex models (Winsberg and Lenhard, 2010).”
So even if the IPCC did attempt to rebuild a more robust model, it would still be of limited value.
“The climate science input to IAMs is the probability density function of equilibrium climate sensitivity (ECS). The dilemma is that with regards to ECS, we are in a situation of scenario (Knightian) uncertainty—we simply do not have grounds for formulating a precise probability distribution. Without precise probability distributions, no expected utility calculation is possible.”
Knightian uncertainty prevents climate science from ever being relevant?
Imagine telling your audience that! We’re wasting our time here!
“Subjective or imprecise probabilities may be the best ones available. However, over-precise numerical expressions of risk are misleading to policy makers. Frisch (2013) argues that such applications of IAMs are dangerous, because while they purport to offer precise numbers to use for policy guidance, that precision is illusory and fraught with assumption and value judgments.
Policies optimized for a ‘likely’ future may fail in the face of surprise. At best, policy makers have a range of possible future scenarios to consider. Alternative decision-analytic frameworks that are consistent with conditions of deep uncertainty can make more scientifically defensible use of scenarios of climate futures.”
So if climate science is fraught with value judgements, and its assessments dangerously mislead policy makers … WHERE DO WE TURN?
It looks to me like we are just watching and waiting with our doomsday signs and collection cans.
“Okay so Hawkins has shown the models are running hot. Should both the upper and lower trend lines be lowered?”
Models are running hot. That means model output does not mean anything. Throw the models away. Climate is self-correcting; forget the alarmist, sky-is-falling junk.
“… we know a range of values within which the climate sensitivity is very likely to fall, with values better constrained on the lower end than on the high end.” The AR5 further states: “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.”
Well, you and coauthors have done better. You need to point out that all the credible EXPERIMENTAL/observational studies (that I’ve seen) are at the low end, some lower than the IPCC lower limit. The high-end estimates are ALL computer-model wanking, by politically-active “scientists”.
OK. I realize this isn’t language suitable for a formal report. But it has the BIG advantage of being true, and Reality trumps Theory, every time. You really need to make that clear, right at the start! Remember your Feynman. Quote him. Do you need the quote? The “Nature cannot be fooled” one.
Climate uncertainty & risk
The balance of evidence suggests that warming would be beneficial, cooling seriously damaging. It is cooling we should be concerned about, not warming.
We are currently about 8,000 years after the peak of the current interglacial and on the long cooling trend to the next glacial maximum. It is well known that global cooling would be very damaging for human well-being and the environment – perhaps catastrophic, with millions of deaths.
However, there seems to be little valid evidence to support the contention that global warming this century (i.e. from the current extremely cold temperatures in the context of the past 500 Ma) could be catastrophic or dangerous. On the other hand, there seems to be valid evidence warming would be beneficial.
It would be interesting to see what proportion of climate research funding over the past 30 years has been directed to investigating the beneficial impacts of global warming, versus that spent on investigating the negative impacts.
I second your comments. As to research on the benefits of warming, pretty much nil, sfaik, and all of that self-funded and self-published.
Peter & Pete,
Tried to say much the same earlier – “Can we postulate that the biggest risk in current climate modelling sensu lato is indifference of participants to gross errors and uncertainties?” Geoff
‘The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.’ Wally Broecker
Unfortunately there is a complete absence of any understanding of climate ‘modes’ and the associated risk.
thanks for this link
I assume you’re not aware you are inadvertently hosting this
“During the Medieval Climate Anomaly (MCA), Western North America experienced episodes of intense aridity that persisted for multiple decades or longer. These megadroughts are well documented in many proxy records, but the causal mechanisms are poorly understood.”
Yet another paper mentioning the nonexistent MCA. These authors either need to mend their ways or risk having the thought police at their door. If they need to be convinced of how off course they are, we have many denizens who will be all too happy to set them straight.
Also, more honesty by authors that there are some elements of the climate that are poorly understood. Who’d da thunk it.
Beyond the obvious that much evidence exists of mega droughts in the past, this paper tells me that each region might be unique in how it reacts to changes in global climate and the numerous associated oscillations and should not necessarily be used as a proxy for how other regions would react to the same global dynamics.
I agree….what Mann has done is in fact ridiculous.
The Black Swan. Apologies to Taleb.
“If you know a set of basic parameters concerning the ball at rest, can compute the resistance of the table (quite elementary), and can gauge the strength of the impact, then it is rather easy to predict what would happen at the first hit. The second impact becomes more complicated, but possible; and more precision is called for. The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table (modestly, Berry’s computations use a weight of less than 150 pounds). And to compute the fifty-sixth impact, every single elementary particle in the universe needs to be present in your assumptions! An electron at the edge of the universe, separated from us by 10 billion light-years, must figure in the calculations, since it exerts a meaningful effect on the outcome.”
Risk can be a funny thing.
More like a game of Hamiltonian billiards in which the force on the balls is not constant, the equation is not analytically solvable, and the position, velocity and vector of a ball are a priori unknowable.
The three-body Hamiltonian is solved numerically over small time steps, assuming the force is constant within each step.
Like the compound pendulum:
The science is demonstrated by the double compound pendulum, that is, a second pendulum attached to the bob of the first one. It consists entirely of two simple objects functioning as pendulums, only now each is influenced by the behavior of the other.
Lo and behold, you observe that a double pendulum in motion produces chaotic behavior. In a remarkable achievement, complex equations have been developed that can and do predict the positions of the two bobs over time, so the movements are deterministic rather than random; they are chaotic only in the sense of extreme sensitivity to initial conditions, and with considerable effort they can be computed. The equations and descriptions are at Wikipedia, Double Pendulum.
But here is the kicker, as described in tomomason’s comment:
If you arrive to observe the double pendulum at an arbitrary time after the motion has started from an unknown condition (unknown height, initial force, etc.), you will be very taxed mathematically to predict where in space the pendulum will move next, on a second-to-second basis. Indeed, it would take considerable time and many iterative calculations (preferably on a super-computer) to perform this feat. And all this on a very basic system of known elementary mechanics.
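The sensitivity being described can be shown directly. The following is my own minimal sketch, not part of the comment: it uses the standard equal-mass, equal-length double-pendulum equations of motion with a plain RK4 integrator; the step size and the initial pose are arbitrary choices. Two runs whose initial angles differ by one billionth of a radian end up in completely different places.

```python
from math import sin, cos

# Toy double-pendulum integration illustrating sensitive dependence on
# initial conditions. Equal unit masses and rod lengths; g = 9.81.
G, M, L = 9.81, 1.0, 1.0

def deriv(state):
    """Angular accelerations from the standard double-pendulum equations."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * M + M - M * cos(2 * d)
    a1 = (-G * (2 * M + M) * sin(t1) - M * G * sin(t1 - 2 * t2)
          - 2 * sin(d) * M * (w2 * w2 * L + w1 * w1 * L * cos(d))) / (L * den)
    a2 = (2 * sin(d) * (w1 * w1 * L * (M + M) + G * (M + M) * cos(t1)
          + w2 * w2 * L * M * cos(d))) / (L * den)
    return (w1, a1, w2, a2)

def rk4_step(s, dt):
    """One classical Runge-Kutta step."""
    def add(a, b, f): return tuple(x + f * y for x, y in zip(a, b))
    k1 = deriv(s)
    k2 = deriv(add(s, k1, dt / 2))
    k3 = deriv(add(s, k2, dt / 2))
    k4 = deriv(add(s, k3, dt))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + u)
                 for x, p, q, r, u in zip(s, k1, k2, k3, k4))

def run(theta1, steps=20000, dt=0.001):
    s = (theta1, 0.0, 2.0, 0.0)   # released from a high-energy pose
    for _ in range(steps):
        s = rk4_step(s, dt)
    return s

a = run(2.0)
b = run(2.0 + 1e-9)               # one-billionth-radian perturbation
print(abs(a[0] - b[0]))           # no longer a tiny number after 20 s
```

Deterministic equations, exactly reproducible on a computer, yet any uncertainty in the starting condition is amplified until prediction fails; that is the quoted commenter's point in miniature.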
I like to look at risk as a film wound backwards. Despite all that chaos, the broken cup and spilt milk always end up perfectly whole.
Note all the same factors were in play with the very first guess.
You are addressing the various forms of uncertainties relative to Climate Change.
However, except for aleatory uncertainty (“future expectations lie outside of the regular or quantifiable expectations”), all of the uncertainties can be dispelled simply by recognizing that the control knob for Earth’s climate is the amount of SO2 aerosols in the atmosphere, of either volcanic or anthropogenic origin.
Decrease them, and it warms up. Increase them, and it cools down.
This effect has been confirmed multiple times by large volcanic eruptions and by the effects of anthropogenic activities. For example, the super El Niños of 1997–98 and 2015–16 were both caused by large reductions in SO2 emissions due to Clean Air efforts (approx. 7.7 and 30 megatons, respectively).
The effects of atmospheric SO2 aerosols are superimposed upon Earth’s natural recovery from the Little Ice Age cooling, over which we have no control, and from volcanic eruptions, which are unpredictable and also uncontrollable. However, much can be done by recognizing the role of anthropogenic SO2 aerosol emissions.
As a scientist, you need to also consider the above statements, and not just blindly follow the agenda-driven peer-reviewed literature.
“…much can be done by recognizing the role of anthropogenic SO2 aerosol emissions.”
Yes, it’s interesting that the middle of the 20th century saw arguably the worst of anthropogenic pollution on land, sea, and in air; and, commensurate with cleaning up much of humanity’s aerosol footprint, the planet warmed somewhat post peak aerosol pollution.
“the super El Niños of 1997–98 and 2015–16 were both caused by large reductions in SO2 emissions due to Clean Air efforts”
And the proof for this claim is?
This doesn’t specifically address the El Niño query you ask of Henry, but is that all that relevant, considering that removing aerosols does enhance warming in general? Even China has seen aerosol reductions, albeit they started their clean-up later than western nations:
Primary anthropogenic aerosol emission trends for China, 1990-2005
Abstract: An inventory of anthropogenic primary aerosol emissions in China was developed for 1990-2005 using a technology-based approach. Taking into account changes in the technology penetration within industry sectors and improvements in emission controls driven by stricter emission standards, a dynamic methodology was derived and implemented to estimate inter-annual emission factors. Emission factors of PM2.5 decreased by 7%-69% from 1990 to 2005 in different industry sectors of China, and emission factors of TSP decreased by 18%-80% as well, with the measures of controlling PM emissions implemented. As a result, emissions of PM2.5 and TSP in 2005 were 11.0 Tg and 29.7 Tg, respectively, less than what they would have been without the adoption of these measures. Emissions of PM2.5, PM10 and TSP presented similar trends: they increased in the first six years of 1990s and decreased until 2000, then increased again in the following years. Emissions of TSP peaked (35.5 Tg) in 1996, while the peak of PM10 (18.8 Tg) and PM2.5 (12.7 Tg) emissions occurred in 2005. Although various emission trends were identified across sectors, the cement industry and biofuel combustion in the residential sector were consistently the largest sources of PM2.5 emissions, accounting for 53%-62% of emissions over the study period. The non-metallic mineral product industry, including the cement, lime and brick industries, accounted for 54%-63% of national TSP emissions. There were no significant trends of BC and OC emissions until 2000, but the increase after 2000 brought the peaks of BC (1.51 Tg) and OC (3.19 Tg) emissions in 2005.
Thanks. I do think that removing real air pollution leads to local warming, brightening, less drizzle and fog, but more showers due to increased CAPE.
Couple of obvious edits and a series of suggestions:
implying thatrobust probability information
“Aleatory uncertainty is associated with inherent variability or randomness, and is by definition irreducible. Natural internal variability of the climate system contributes to aleatory uncertainty.”
Only RANDOM natural internal variability contributes to aleatory uncertainty. Other natural variables, ENSO cycles, AMO cycles, the seasons, etc., contribute only to the degree that their timing is unknown or random.
“The location of uncertainty refers to where the uncertainty manifests itself within the model complex:”
The chaotic effects of the inclusion of known nonlinear equations in climate models add another layer of uncertainty: the cumulative numerical effects of coupled, inter-dependent nonlinear equations (even in simplified form), calculated iteratively, compound and magnify those uncertainties.
but also from uncertainty in the equilibrium climate sensitivity to CO2(ECS).
“Baumberger et al. acknowledge that the role of coherence with background knowledge is limited because of empirical parameterizations and the epistemic opacity of complex models (Winsberg and Lenhard, 2010).”
This sentence might need an explanatory phrase added at the end, “such as …..” or “as in the ….”
“Sea level predictions are only indirectly driven by global climate models, since these models do not predict the mass balance of glaciers and ice sheets, land water storage or isostatic adjustments. Hence estimates of the worst-case scenario”
And what — thus are not part of this discussion? Need to be looked at in another context? Will be handled in a separate section further on?
“Of particular concern is how the upper end of the ECS distribution is treated—either by assuming symmetry or fitting a ‘fat tail.’ The end result is that this most important part of the distribution drives the economic costs of carbon using a statistically-manufactured ‘fat tail.’”
Depending on your audience, the Fat Tail Distribution might need an example or explanatory phrase added. (The Wiki page is incomprehensible – don’t link to that.)
“Subjective or imprecise probabilities may be the best ones available. However, over-precise numerical expressions of risk are misleading to policy makers. Frisch (2013) argues that such applications of IAMs are dangerous, because while they purport to offer precise numbers to use for policy guidance, that precision is illusory and fraught with assumption and value judgments.”
This “over-precise numerical expressions” point might be expanded with an example. The same point might have been made in the general section on models: they are said to offer “precise numerical” answers, which are really averages of chaotic outputs and at best represent very broad ranges of possible outcomes.
A robust policyis defined to
“Encouraging overconfidence in the realism of current climate model simulations or intentionally portraying recognized ignorance incorrectly as if it was statistical uncertainty (Knightian risk) can lead to undesirable policy outcomes.”
Very important, and a very widespread practice; it might be good to add at least one more sentence punching this up.
Note — You might make the above two points into a list of “dangers” and then use your two paragraphs explaining.
Kip and Judith,
What is relevant for policy is the impacts of temperature change, not the temperature change itself, and for a policy to be sustainable over the long term it must be economically beneficial. The temperature projections and their uncertainties are therefore just one input to policy analysis and development; others are the impact functions, the costs of the policy, and the cost-benefit analyses. As IPCC AR5 WG3 Chapter 3 says repeatedly, the empirical data to validate the impact functions used in the IAMs are sparse and the uncertainties huge.
Furthermore, nearly all the funding has clearly been directed at finding downside risks of global warming, not benefits. We can conclude that the damage functions are biased to show global warming as damaging rather than beneficial.
I’ve pointed out on other threads that empirical evidence suggests global warming is likely to be beneficial, not damaging, and that there is negligible chance of global warming being dangerous or catastrophic this century, or perhaps for millions of years. However, we know cooling would be very damaging, and perhaps catastrophic.
The consequences and probabilities need to be properly analysed using proper risk management processes.
@ Peter Lang
“I’ve pointed out on other threads that empirical evidence suggests global warming is likely to be beneficial, not damaging, and that there is negligible chance of global warming being dangerous or catastrophic this century, or perhaps for millions of years. However, we know cooling would be very damaging, and perhaps catastrophic.”
I’ve pointed this out too. Our hostess, for reasons that escape me, hasn’t responded, nor has she acknowledged this stuff in her presentations, sfaict.
This is odd, as it clearly needs to be addressed. That it hasn’t been is due to pure power politics from Michael Mann and the other Consensus Manufacturers.
What about it, Dr. Curry?
Climate uncertainty & risk analysis for policy development and implementation
Policy development and implementation are projects. Some information on risk management that is relevant to policy development and implementation may be of interest. Below I refer to Chapter 11 Project Risk Management in the Guide to the Project Management Body of Knowledge (PMBOK Guide), Fourth Edition. https://www.works.gov.bh/English/ourstrategy/Project%20Management/Documents/Other%20PM%20Resources/PMBOKGuideFourthEdition_protected.pdf . The PMBOK Guide is American National Standard ANSI/PMI 99-001-2008. It is also the basis for ISO 21500, and the central text for the Project Management Professional (PMP) certification.
Some definitions, pp. 426–453:
• Project: A temporary endeavour undertaken to create a unique product, service or result.
• Risk: An uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives.
• Project Risk Management includes the processes concerned with conducting risk management planning, identification, analysis, responses, and monitoring and control on a project.
Chapter 11 Project Risk Management, pp. 273–312
Of particular note:
Figure 11-1. An overview of Project Risk Management processes, p.273
Figure 11-4. Example of a Risk Management Breakdown Structure (RBS), p.280
Figure 11-6. Identify Risks, Inputs, Tools & Techniques, and Outputs, p.282
Figure 11-8. Perform Quantitative Risk Analysis, Inputs, Tools & Techniques, and Outputs, p.289
Figure 11-10. Probability and Impact Matrix, p.292. [Note that this deals with both Threats and Opportunities]
Section 11.4 Perform Quantitative Risk Analysis, pp.294–301
Figure 11:15. Decision Tree Diagram, p.299
Figure 11:16. Cost Risk Simulation Results, p.300
Section 11.5 Plan Risk Responses, pp.301–307
Section 11.6 Monitor and Control Risks, pp.307–312
Congratulations on taking on an impossible topic – Climate Uncertainty and Risk!
As a layman, I find that most of your paper addresses the difficulty (impossibility) of adequately predicting climate using any currently accepted understanding of climate cause and effect (an agreed-to model).
It seems that the “risk” associated with this climate-prediction uncertainty is the risk of taking action based on an assumed climate state that would never be reached, and of screwing up the natural climate cycle with inappropriate anthropogenic meddling. Policy makers need this kind of information to avoid making matters worse. Maybe a little more on the risk of making inappropriate policy would be instructive. …Just an observation from a deplorable layman…
Dr Curry: I’ve been invited to write an article on climate uncertainty and risk.
Why not: “Climate Uncertainty: Event Probabilities, Risks and Benefits” ?
Note the perversion (my claim) of language introduced by Markowitz for quantitative portfolio analysis and adopted by economists.
Your suggested title implies risks are only downside risks.
Risk is probability × consequences of an event. The probability distribution is for each potential consequence, not for the event: the event can have many potential consequences, each with its own probability distribution.
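That decomposition can be sketched numerically. All probabilities and impacts below are invented purely for illustration: the risk attached to the event is the probability-weighted sum over its potential consequences, which may be detrimental or beneficial.

```python
# Hypothetical numbers, purely for illustration: one event, several potential
# consequences (detrimental and beneficial), each with its own probability.
consequences = [
    {"name": "severe damage",   "prob": 0.05, "impact": -100.0},
    {"name": "moderate damage", "prob": 0.25, "impact": -10.0},
    {"name": "no net effect",   "prob": 0.40, "impact": 0.0},
    {"name": "net benefit",     "prob": 0.30, "impact": 20.0},
]
assert abs(sum(c["prob"] for c in consequences) - 1.0) < 1e-9

# Risk as probability times consequence, summed over the possible outcomes.
expected = sum(c["prob"] * c["impact"] for c in consequences)
print(expected)  # -1.5 with these made-up numbers
```

Nothing here is climate data; the point is only that an honest tally must carry the positive rows as well as the negative ones.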
Therefore, I’d suggest:
“Climate Change Risks: Potential Consequences (both Detrimental and Beneficial) and their Probabilities”, or
“Climate Change Risks: Potential Consequences and Probabilities”.
Peter Lang: implies risks are only downside risks.
Yes. I think that “upside risk” is a perversion of language. It emerged after Markowitz identified the “risk” of an asset with the variance of its price, instead of with the probability of loss.
I wrote a related comment, but it seems to be in moderation.
Well, it may be a perversion of language, but it is entrenched in the widely accepted and used project risk management standard and has been for over 20 years (perhaps 30 years). So, we need to live with it. Whatever the term, we need to look past that and recognise it is essential to give equal weight to potential positive and negative consequences of an event. If climate policy makers do that, I believe it will become clear that the negative risks (threats) of global warming are numerically substantially less than the positive risks (benefits).
Peter Lang: So, we need to live with it.
In that case, how about “threats” and “benefits”? I think there is a substantial audience who will tune out after the first mention of “positive risks”. It sounds too clever by half. How about “Losses” and “Gains”?
There is a related, possibly small, problem in probability theory in the use of “expected value” for an outcome that nobody expects; as with the “expected value” of the roll of an ordinary 6-sided die being 3.5. You might have an audience, or Dr Curry might have an audience, for whom justifying the name uses up all their time and energy.
In auto insurance, how many positive risks are there? In life and medical insurance? Is the concept of “positive risks” empty in most or all “risk management”?
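The die example above is easy to check numerically: the “expected value” of a fair six-sided die is a probability-weighted average that no single roll can actually produce:

```python
# Expected value of a fair six-sided die: the probability-weighted
# average of the faces. No face shows 3.5, yet 3.5 is the "expected value".
faces = list(range(1, 7))
expected_value = sum(faces) / len(faces)
print(expected_value)  # 3.5
```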
I understand the issue you raise. I agree the terms used must be readily understandable to the general public. So the problem is that of what terminology to use and how to explain the issue that both benefits (positive consequences) and threats (negative consequences) must be included in risk analysis (and decision analysis) for climate change policy development and implementation.
Auto industry accident insurance is not equivalent to project (and policy) risk. Project and policy risk is specific to projects, which are defined as:
Peter Lang: I agree the terms used must be readily understandable to the general public.
The other example I was thinking of was “acceleration and deceleration”. For physicists, the single word “acceleration” suffices, as the constant in the equation can be either + or – in value. For every other audience, it is more meaningful to use the two words for increases and decreases in speed respectively.
So some of this discussion depends on the audience.
Yes. I totally agree. Not yet sure of what words to use though. I think we need to stick with ones that are already widely used for the context of positive and negative consequences of uncertain events, and for the opportunity offered by positive outcomes and the threat of negative outcomes.
Matthew, I’d like to contact you by email to ask you a question on a related matter.
Could you email me at the email address given here: http://www.mdpi.com/1996-1073/10/12/2169
Yes, Yes and Yes
As a meteorologist here in the Austrian Alps I can tell you (and that is nothing surprising) that no regional climate model is able to produce a correct hindcast of the known regional climate; not even decadal T trends are solved. Sometimes the long-term trend is well reproduced, but this says nothing about the RCM`s quality, nothing! So, if all those regional models cannot reproduce the known climate past, there is no reason to trust their future simulations.
On the other hand, you can never simulate the glacial cycles with high-sensitivity parametrisations (look at the Eemian: the fast T increase, the following CO2 increase, and the relatively fast declining T trend from the beginning of the early Eemian; not possible if CO2 generates strong positive feedbacks).
All in all, climate models are “high end science” but still not a useful tool for predicting future climates, not even on the global scale and certainly not regionally.
The paper documents very well the significant uncertainty in climate predictions because of model shortcomings. The paper also documents that because of this uncertainty, care should be taken in actively responding to those uncertain predictions (the science–policy interface problem). What I was hoping to read next was: why? What could possibly go wrong? What is the risk of acting on bad climate predictions? I have been warning my fellow laymen about the risk of taking action to prevent a perceived future climate state that may never occur. To date I have not been able to give any examples. I was hoping to get some, based on your title. (I’m sitting at the end of my pier writing this, so you can already claim this is a Pier Reviewed Paper.)
I’m guessing your pier is not near Florida.
“Over the last month, blue-green blooms have spread across Lake Okeechobee, igniting a crisis as they quickly moved from a shallow, moon-shaped wedge in the southwest corner that scientists call the “fertile crescent” for its ability to produce blooms. By last week, as water circulated counterclockwise, 90 percent of the 370-square-mile lake was filled with foul green slime. In some areas, two toxic subspecies of the algae were detected. “
:-) but honestly, many papers look like “Pier Reviewed” or Pall Reviewed in modern, political driven climate journals today. In front we find nature.
All projections indicate warming, and are within a “very likely” boundary that is tightening each year.
Are we still arguing that we should do nothing because there are error bars? Or is there still a chance that CO2 is not a greenhouse gas and there’s some very basic forcing missed by scientists?
I’m guessing the next decade will be filled with “A ha! It’s warming and it’s primarily CO2, but a small percentage comes from forcing x and we now know more about it.” all the while CO2 continues to lead the pack.
“Are we still arguing that we should do nothing because there are error bars?”
What do you suggest we should do? Waste more time and resources achieving absolutely nothing?
What a completely silly non-answer. Did you think CO2 was not a greenhouse gas?
Or maybe you think shifting arable land, rising sea levels, more acidic oceans, changes in rainfall (plus and minus depending upon the area) and managing migrating pests is a good thing, and the problems are very simple and cheap to mitigate.
What do you suggest we do? Nothing now so we can spend far more time and resources later? Yes, that’s about right.
Why didn’t you answer the question? What do you suggest? What can we do to reduce CO2 emissions?
I think there is nothing to mitigate, but even if there was, we are not doing it and it is naive to think that we can agree to do something. It is also almost impossible to do anything – we would have to switch to nuclear completely.
We can take the IPCC’s banner statement on ECS and go from there. It is their statement of what will happen. We can take 1.5 C and say there’s a 1/6th chance it comes in below 1.5 C, where it doesn’t matter. And there’s another 1/6th chance it hardly matters. And a 90 percent chance that what we do to mitigate emissions will not matter. So let’s adapt, go nuclear power, and give farmers money to soil-bank carbon.
I agree with all except this:
Why would we want to do that given that global warming is probably beneficial?
Furthermore, it would be the start of another massive boondoggle like the subsidies and incentives for renewable energy.
Brief History of the USDA Soil Bank Program
I guess there are a number of reasons of varying strength. If I had to choose between a wind turbine and a square mile of farmland converted to prairie I’d take the latter. No power lines needed, no negative impacts to the grid. There is the wildlife angle, the bee angle and the watershed improvement angle. There is the, we are doing something angle. I do think it is a literal bank. The value of the land for farming increases. I’ll pick a number out of a hat of a 15 year term, meaning it all is reversible as far as production goes, with a higher amount of good carbon in the soil.
The possibilities are numerous. Buffalo on the prairie land or just future cheeseburgers. Prairie around the wind turbines. Pheasants and deer to hunt. Flood mitigation as drainage tile isn’t needed. Rednecks saving the planet.
This one by Lal has over 4000 citations, so they say:
Thank you for this reply. I’ve only just seen it.
Well, I wouldn’t choose either. If they are viable they will be developed and implemented without needing public funding.
A very important concept, that of “Scientism”, a flawed ideology that is certainly rampant in climate circles, and is another underlying pillar of their false-certainty.
“An additional strength of the falsifiability criterion is that it makes possible a clear distinction between science properly speaking and the opinions of scientists on nonscientific subjects. We have seen in recent years a growing tendency to treat as “scientific” anything that scientists say or believe . The debates over stem cell research, for example, have often been described, both within the scientific community and in the mass media, as clashes between science and religion. It is true that many, but by no means all, of the most vocal defenders of embryonic stem cell research were scientists, and that many, but by no means all, of its most vocal opponents were religious. But in fact, there was little science being disputed: the central controversy was between two opposing views on a particular ethical dilemma, neither of which was inherently more scientific than the other . If we confine our definition of the scientific to the falsifiable, we clearly will not conclude that a particular ethical view is dictated by science just because it is the view of a substantial number of scientists.”
“The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt, at least in the long term, from the corrupting influences that affect all other human practices and institutions. Ladyman, Ross, and Spurrett explicitly state that most human institutions, including “governments, political parties, churches, firms, NGOs, ethnic associations, families … are hardly epistemically reliable at all.” However, “our grounding assumption is that the specific institutional processes of science have inductively established peculiar epistemic reliability.” This assumption is at best naïve and at worst dangerous. If any human institution is held to be exempt from the petty, self-serving, and corrupting motivations that plague us all, the result will almost inevitably be the creation of a priestly caste demanding adulation and required to answer to no one but itself .”
“The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt”
The entire climate PR program, the 97% meme, et al, utilizes science as advocacy leverage to initiate pro climate change policies. Science after all is above reproach; if one can’t question it then one must go along with it, a wonderful mechanism. Cue innumerable, deeply funded, peer-review contrivances that collectively form the bible of skeptical science. This methodology in turn fuels the virtual compost of huffpost/wapost, a couple of contemporary leaders of virtual aerosol pollution who warm the hearts and minds of the religious order.
The skepticism has been very ragtag and disorganized, so you need to blame them for not using any science worth publishing.
“…you need to blame them for not using any science worth publishing”
D, that’s so pathetically tired and irrelevant. One knows when the ingredients in a dish taste bad even if they can’t put together the recipe themselves. Just tell the chef to do it again until they get it right, or go somewhere else. Judge the quality.
Being skeptical doesn’t require doing research. Pointing out poor quality work is good enough. One doesn’t necessarily win by working really, really hard; efforts can sometimes, or even always, end in failure. It’s not a competition, and showing up doesn’t make one a winner unless you’re of the Left; I do acknowledge this idea is apparently reserved for grade schoolers who get fifth place, or no place. Lord help us if these grow up to become scientists too.
Deliver convincing work. Today some science is sound, some isn’t; but certainly nothing’s settled.
This is with respect to your earlier post on Chinese SO2 aerosol emissions.
A new national pollution limit took effect in 2014, resulting in a drop in SO2 emissions to 8.4 Megatons in 2016, down from 37.5 Megatons in 2014, as confirmed by monitoring instruments and a NASA satellite.
This massive 29.1 Megaton reduction in Chinese SO2 emissions, along with smaller reductions in The United States and elsewhere, was the cause of the 2015-2016 El Nino.
(Since about 1900,(when reliable data became available), essentially all El Ninos can be associated with reductions in atmospheric SO2 aerosol emissions)
Burl Henry, very interesting, and thank you for sharing. What’s the estimate for total warming attributable to aerosol reductions since the concerted effort began to reduce these pollutants? In the U.S. this began in 1970 with the passage of Clean Air Act legislation.
Maybe you don’t consider Tyndall or Arrhenius sound. How far back does it go for you? These are basic physics ideas that later science has only refined. They explain why it is 33 degrees warmer than it would be without GHGs, and a significant factor is the CO2 level. This is all quantitative science and you need quantitative science to argue against it, at least in the scientific community. They don’t accept guff and handwaving. It’s about numbers.
D, you’re flailing. To be clear, I’ve been explicit that I don’t do science and don’t need to in order to evaluate how solid the science is. I know CO2 is a GHG, so how can I possibly know that as a fact and not be a scientist, you say? I do this in the same way a CEO utilizes a spectrum of tools to develop contextual understanding: aggregating people with diverse expert skill sets that go way beyond my own levels of expertise. I juxtapose ideas and thoughts from many who have differing viewpoints. In the case of climate science this includes determining distinctions between politics (which runs deep) and science, and ascertaining truth apart from various motivational factors that have nothing to do with basic scientific knowledge. The political motivations that climate science is immersed in are unique to this branch of science; unequivocally, politics is a corrupting factor in this science, and that you believe it’s mostly pure is reason enough to be cynical.
Climate scientists must make convincing quantitative presentations to those who aren’t scientists to initiate policy. I judge presentations. It’s true that, like most scientific endeavors, climate science should only be about numbers. But almost uniquely for this scientific discipline it isn’t just about numbers, and describing it that way here is a red herring. Arrogant practitioners wearing white lab coats trying to satiate an agenda utilizing models that are tuned and retuned. Agenda-driven contrivances can be recognized from 30 thousand feet by many people. There are many who aren’t scientists, and many who are, who understand how confirmation bias, fuzzy math and global politics are deeply woven into climate science. It’s a distracting basket of luggage that could fill the rings of Saturn, one we must entertain, engage with, and navigate through; your own hand waving isn’t going to make the sale.
Do you not wonder why the people on the anti-consensus side can’t produce a full energy budget showing how that works when CO2 has little effect? If you’re not a scientist, you would not understand the difference between an energy budget and Tuesday, so why listen?
The answer to your arrogant query is; why can’t “you” convincingly argue your interpretation, and importantly, your conclusions about the energy budget? Start your slam dunk with the denizens here, or perhaps continue your attempt is the apropos phrase. I’m always listening because I care about many things.
Would I be allowed to refer you to Lewis and Curry about how to do an energy budget or is that already too much like doing science?
I’m not arguing about the energy budget, you are. I suggest that you continue to make your case for whatever you believe about that topic, or anything else, frankly. Consensus must convince policy makers they know what they’re talking about, that they have most of the right answers. I hate to break the news to you, but the effort isn’t working. Do more science, I’m a listener.
But I’m interested in many things. There are certainly questions relative to the accounting of heat: what percentage of warming is anthropogenic, and what is natural variability? The lukewarm position suggests more weight should be given to natural variability. You haven’t been shy telling Dr. Curry why she shouldn’t believe her views.
I have an interest in deep sea vulcanism. It’s said that there’s more human knowledge about the surface of the moon than about Earth’s deep oceans. Tamu Massif, among the largest volcanoes known in the solar system, was only discovered 20 years ago, on Earth. It covers an area of 119k square miles. It’s said that science is still scratching the surface of knowledge about deep sea geological forces.
What’s the average output per year of heat and gas expelled by deep sea vulcanism along the rifts, or is it sporadic? What’s the frequency for large, anomalous events that vent huge amounts of heat and gas? Are all the metrics associated with deep sea volcanic activity adequately, accurately represented in climate models?
OK, provide your number for volcanoes. You should have one if you’re so interested. For CO2 it is 2 W/m2. I will bet that for volcanoes it is orders of magnitude smaller. When you see that comparison, you still won’t believe in the dominant effect of CO2, however. Using this number it is five orders of magnitude less than CO2, from what I calculate.
The way that I read this, Mop-Up-Crew is talking about deep sea volcanism, which we do not have a good estimate of. The report Jim D refers to covers the 45 most active known volcanoes, which are ones not under the sea. I looked at this in the past, and from what we know the deep sea heat is pretty small compared to the temperature changes that we have seen. However, Mop-Up-Crew does have a good point that we do not have good studies of deep sea heat, so it could be a much larger factor than we have calculated to date. It is very unlikely the amount of heat is smaller, so this is a number that can only reasonably be revised larger, not smaller. Also, the numbers on volcanism below ice are even less well known than the deep sea numbers. There was a recent paper on that, but my understanding was that the study’s level of confidence in the numbers was fairly low.
Just from the heat budget, the amount from CO2 can account for all the warming we have seen including in the ocean heat content, and most of the ocean heat content change has been towards the surface. No one is looking for another heat source.
“ENSO causes climate extremes across and beyond the Pacific basin; however, evidence of ENSO at high southern latitudes is generally restricted to the South Pacific and West Antarctica. Here, the authors report a statistically significant link between ENSO and sea salt deposition during summer from the Law Dome (LD) ice core in East Antarctica. ENSO-related atmospheric anomalies from the central-western equatorial Pacific (CWEP) propagate to the South Pacific and the circumpolar high latitudes. These anomalies modulate high-latitude zonal winds, with El Niño (La Niña) conditions causing reduced (enhanced) zonal wind speeds and subsequent reduced (enhanced) summer sea salt deposition at LD. Over the last 1010 yr, the LD summer sea salt (LDSSS) record has exhibited two below-average (El Niño–like) epochs, 1000–1260 ad and 1920–2009 ad, and a longer above-average (La Niña–like) epoch from 1260 to 1860 ad. Spectral analysis shows the below-average epochs are associated with enhanced ENSO-like variability around 2–5 yr, while the above-average epoch is associated more with variability around 6–7 yr. The LDSSS record is also significantly correlated with annual rainfall in eastern mainland Australia. While the correlation displays decadal-scale variability similar to changes in the interdecadal Pacific oscillation (IPO), the LDSSS record suggests rainfall in the modern instrumental era (1910–2009 ad) is below the long-term average. In addition, recent rainfall declines in some regions of eastern and southeastern Australia appear to be mirrored by a downward trend in the LDSSS record, suggesting current rainfall regimes are unusual though not unknown over the last millennium.” https://journals.ametsoc.org/doi/10.1175/JCLI-D-12-00003.1
A southern hemisphere summer that is. ENSO is a dynamic response to perturbations in the Earth system flow field. It currently has a 2 to 5 year beat – that changed from 6 to 7 years around 1920 along with an increase in El Nino frequency and intensity in the 20th century. Bringing drought to Australia, Indonesia, Northern Africa and India. But event frequency and intensity is modulated on decadal to millennial scales.
I noted above another scientifically naive attempt to link ENSO events to sunspots in the quasi-decadal solar cycle. There is too much chance and too little ENSO data to be definitive on the scale of the Schwabe cycle. The null hypothesis, that there is no statistical difference between the number of positive and negative ENSO events at either solar max or min, seems to be borne out, with many different statistics yielding very different interpretations.
Nonetheless there are interesting longer term correspondences of solar intensity and ENSO. The mid Holocene transition in panel B below and the similarity of the Law Dome ice core ENSO proxy and the isotope record in C.
Oceans modulate open and closed cell cloud cover with warmer or cooler surfaces respectively (Koren 2017). The eastern and central Pacific are the largest global source of cloud cover and albedo variability. The alternating state of warmer or cooler surface depends on the state of gyre circulation – a globally coupled flow – in all oceans – shifts in atmospheric mass in the circumpolar modes. A solar trigger may be UV/ozone chemistry translating as atmospheric flows to surface pressure at the poles (e.g. Ineson et al 2015) – driving zonal or meridional winds in the circumpolar modes.
Global energy accumulates a Watt at a time – most in the oceans – with changes in insolation and reflected and emitted light. Climate evolves turbulently over moments to ages (Demetris Koutsoyiannis 2013) with aperiodic chaotic oscillations at many scales. Lower solar activity associated with more meridional polar winds and enhanced gyre strength over the upwelling regions of the eastern and central Pacific – cools the planet through natural cloud dynamics.
Global energy content – I meant to say – is precisely like water in a dam – the storage change over time evolves with instantaneous inflow less outflow.
Aleatory uncertainty is associated with inherent variability or randomness, and is by definition irreducible. Natural internal variability of the climate system contributes to aleatory uncertainty.
There is a big difference between the artificial construct of a dice game and the variability involved with the climate system.
One is unbounded by outcome, the other drawn back by the weight of all preceding events.
In a dice throw one can throw 6’s to infinity. In the world each chance is drawn back to an average by the weight of displacements the previous changes caused. Yes we can rarely have a tsunami. No it will never reach the height of the moon.
The system does not of itself have a runaway capacity practically in itself.
Random walk not dice throws, please.
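The distinction this comment draws (unbounded dice sequences versus a system “drawn back to an average by the weight of displacements”) is essentially a pure random walk versus a mean-reverting process. A minimal sketch, with arbitrary parameters chosen only for illustration:

```python
import random

random.seed(42)  # fixed seed so the comparison is reproducible

def random_walk(steps):
    # Pure random walk: each step is independent of position,
    # so excursions from zero are unbounded over time.
    x, path = 0.0, []
    for _ in range(steps):
        x += random.gauss(0, 1)
        path.append(x)
    return path

def mean_reverting(steps, pull=0.2):
    # AR(1)-style process: each step is pulled back toward the mean (0),
    # so excursions stay bounded in practice.
    x, path = 0.0, []
    for _ in range(steps):
        x += -pull * x + random.gauss(0, 1)
        path.append(x)
    return path

rw_max = max(abs(v) for v in random_walk(10_000))
mr_max = max(abs(v) for v in mean_reverting(10_000))
print(rw_max, mr_max)  # the walk's largest excursion dwarfs the reverting one's
```

The same per-step noise drives both processes; only the restoring pull differs, which is the tsunami-versus-moon point in different clothes.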
Re: Your post of July 10, 2018.
“Professor Curry’s forecast is an El Nino is possible late 18/early 19”
You state that “She is almost certainly correct”, and, after an analysis, “So Judith can safely predict the next El Nino and the next La Nina by watching solar activity.”
No, she cannot.
Most La Ninas and El Ninos are unpredictable, since they are caused by changing SO2 aerosol levels in the atmosphere due to VEI4 or larger volcanic eruptions.
There is an initial decrease in average global temperatures as their aerosols circulate around the globe, and if there are enough of them, they will cause a La Nina in roughly 10 to 15 months (depending upon ENSO temperatures at the time of eruption, plume altitude, etc.).
Then, as the aerosols eventually settle out, warming occurs because of the cleansed air, and temperatures recover to their original levels, or higher, if a volcano-induced El Nino forms (roughly 18-24 months after the eruption).
So, she could eventually forecast La Ninas and El Ninos, but not by counting sunspots, unless they somehow affect volcanic activity.
(There are also anthropogenic changes in atmospheric SO2 levels, so volcanoes are not the whole story)
The purpose of computing is insight, not numbers – R.W.Hamming, in his numerical analysis book
You ask: “What’s the estimate for total warming attributable to aerosol reductions since the concerted effort began to reduce these pollutants? In the U.S., this began after the passing of the Clean Air Act legislation in 1970.”
According to the CEDS data, global SO2 aerosol emissions peaked in 1976 at 136 Megatons. In 2014, they had fallen to 111 Megatons, a reduction of 25 Megatons.
GISS lists the 1976 anomalous average global J-D temperature as (-)0.11 deg. C (Met Office: (-)0.166 deg. C), average (-)0.14 deg. C.
In 2014, they were reported as (+)0.73 and (+) 0.58 deg.C, respectively.
(The reported GISS value is for 1200 km fill-in; for 250 km fill-in, it is (+) 0.63 Deg. C, which is more accurate).
So, using 0.60 deg. C for 2014 and (-)0.14 for 1976, the amount of warming is 0.74 deg. C since SO2 aerosol emissions began to decrease.
Up until 1970, Earth was naturally warming up from the Little Ice Age cooling at the rate of 0.05 deg. C/decade, so that 4 decades of natural warming, or 0.2 deg. C, needs to be subtracted to determine the amount of warming attributable to SO2 aerosol reductions, giving 0.54 deg. C.
In an earlier study (Google “Climate Change Deciphered”), I had determined that the climate sensitivity factor (or rule of thumb) for temperature changes due to the removal of SO2 aerosols from the atmosphere was approx. 0.02 deg. C of warming (or cooling) for each net Megaton of change in global SO2 aerosol emissions.
Here, 25 Megatons x 0.02 would give an expected temp. rise of 0.50 deg. C, versus the 0.54 deg. C that has occurred.
To answer your question, the answer is ALL of it.
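Taken purely at face value, the arithmetic in this comment is internally consistent. The sketch below only reproduces the commenter’s own figures and his 0.02 deg. C per Megaton rule of thumb; these are his claims, not established climate-science values:

```python
# Reproducing the arithmetic in the comment above, using the commenter's
# own figures (not established climate-science values).
emissions_1976 = 136.0   # Mt SO2, claimed 1976 global peak (CEDS)
emissions_2014 = 111.0   # Mt SO2, claimed 2014 value (CEDS)
reduction = emissions_1976 - emissions_2014          # 25 Mt

temp_1976 = -0.14        # deg C anomaly (GISS / Met Office average)
temp_2014 = 0.60         # deg C anomaly (commenter's adopted value)
observed_warming = temp_2014 - temp_1976             # ~0.74 deg C

natural = 0.05 * 4       # claimed 0.05 deg C/decade over 4 decades
residual = observed_warming - natural                # ~0.54 deg C

rule_of_thumb = 0.02     # deg C per Mt SO2 removed (commenter's factor)
predicted = reduction * rule_of_thumb                # ~0.50 deg C

print(reduction, round(observed_warming, 2), round(residual, 2), round(predicted, 2))
```

The 0.50 vs 0.54 deg. C agreement is only as good as the inputs; the rule of thumb itself is what the replies below dispute.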
What is the aerosol reduction since 1750, when we have had all this global warming?
The Roman warming period ended due to extensive volcanism, which eventually ended, the air cleared, and temperatures rose to those of the Medieval warming period.
This was followed by the Little Ice Age, also due to extensive volcanism and when it ended, the Earth began to naturally warm up, AS BEFORE.
Circa 1750, SO2 aerosols were being removed, but they were all volcanic aerosols drifting out of the atmosphere. Anthropogenic aerosols in 1750 were very low (reported as only 0.46 Megatons).
The Industrial Revolution began putting dimming anthropogenic SO2 aerosols into the atmosphere, preventing temperatures from rising to the Roman and Medieval warming period levels; by 1850 they had reached 3.0 Megatons, and 6.6 Megatons by 1873, at the start of the Oct. 1873–March 1879 “Long Depression”.
During the depression these aerosols mostly settled out of the atmosphere, cleansing the air, and temperatures soared, reaching an anomalous temperature rise of 0.4 deg. C in Feb. 1878 (per Met Office data), a level not seen again until 1997.
So, until we began cleaning up the air circa 1970, most of the warming was simply due to natural recovery from the Little Ice Age cooling.
You’re saying that we had so many volcanic aerosols left over from Roman times that finally decided to clear out in the last 200 years and that reduction overcame the increasing industrial revolution aerosols between 1750 and 1950 because it already started warming by then. Hmmm. No one thinks that happened, of course.
Global SO2 emissions as the cause make as much sense as the CO2 emissions. If not less. There was no big reduction globally after 1976. Both 136 and 111 Mt is more than double the emissions in 1950 (50 Mt).
It is simply not humans, why do people insist on it? Vanity?
There was a 25 Megaton reduction in SO2 emissions between 1976 and 2014 due to global clean air efforts, with a Climate Sensitivity factor of about 0.02 deg. C of warming for each Megaton removed, resulting in an anomalous temperature increase of about 0.5 deg. C.
This WAS caused by humans. How can you say otherwise?
How can I say otherwise? Well look at the SO2 emissions!
Yes, the emissions flattened out in the 70s and 80s and even decreased a bit in the 90s, but this was basically a very high plateau, much higher than the emissions in the 50s/60s, during the steep upward trend. The warming started (again) around 1975, at the peak/plateau of the SO2 emissions! Maximum emissions (maximum cooling effect) and cooling switched to warming? Does that make sense? Clearly not! SO2 is not the knob, just like CO2.
Burl Henry, I very much appreciate the detail you provide in this analysis and would enjoy seeing further discussion surrounding the data. Why isn’t total warming adjusted to account for this in the formal presentations, considering it reverts to natural variability (eliminating a factor of human causation)?
By formal presentation, do you mean the monthly anomalous temperatures provided by GISS and the Met Office?
Yes, they do ignore any natural warming in their presentations, this would blunt their scare tactics.
Would you happen to know where I can find probability density functions and charts for projected anthropogenic temperature change for this century, for the four RCPs and either ECS =3.0, or better still, for a range of ECS values? (free access).
I would suggest utilizing ResearchGate.
Reblogged this on Climate Collections.
No, I am not saying that there were any aerosols left over from Roman times.
They settled out centuries ago, causing the Medieval Warm Period.
Those that began settling out circa 1750 were from the extensive Little Ice Age eruptions, and the Earth has been naturally recovering from that cooling ever since, with full recovery being slowed by the introduction of anthropogenic SO2 aerosol emissions produced since the Industrial Revolution.
Since 1976, we have accelerated that recovery by reducing the amounts of anthropogenic SO2 aerosols introduced into the atmosphere, due to Clean Air efforts.
Thus, NO evidence of any warming due to greenhouse gases. All warming can be accounted for by natural recovery and the reduction in the amount of atmospheric SO2 aerosol emissions.
So you may then see your problem between 1750 and 1950, where aerosols increased while temperatures did too. Your theory fails for that period unless you are saying volcanic settling out drowned out the aerosol increase from industrialization. You have not said this explicitly before, however, so maybe you are just thinking this up now, and it needs more work because the volcanic data does not support it.
Between 1750 and 1950 (and to the present), the Earth began warming up because the extensive volcanism of the Little Ice Age abated and its SO2 aerosols settled out of the atmosphere, cleansing the air.
Temperatures would have quickly risen to those of the earlier Roman and Medieval warming periods, but this amount of warming was prevented from happening by the introduction of anthropogenic SO2 aerosols into the atmosphere because of the Industrial Revolution.
(We experienced a preview of the temperatures to be expected when most of the anthropogenic SO2 aerosols settled out of the atmosphere during the Long Depression of 1873-79, and temperatures temporarily rose by 0.4 deg.C.)
Since 1976, due to Clean Air efforts, we have steadily reduced our anthropogenic SO2 emissions, so that present-day temperatures, even in the absence of an El Nino, are now equivalent to, or a bit higher than, those of the earlier warming periods (probably because of the energy generated by our present-day population and civilization).
“because volcanic data doesn’t support it”
No, volcanic data clearly supports it. When there is extensive volcanism, the Earth cools down, and when it ends, the Earth warms up.
During the warmer intervals, occasional volcanoes are responsible for temporary increases and decreases in average global temperatures, by causing the formation of La Ninas and El Ninos.
Now you have volcanoes dominating everything including manmade SO2. That wasn’t what you were saying before.
“DMS is undoubtedly part of the system of checks and balances that keeps the climate from taking wild swings,” said Ron Kiene, a professor of marine sciences at the University of South Alabama and one of the world’s leading DMS researchers. Putting sulfur in the atmosphere, as with DMS emissions, is a more efficient way of cooling the atmosphere than removing carbon dioxide. So it might be possible for the natural feedback mechanisms of the biosphere to use DMS to limit global warming, he said. – https://www.whoi.edu/oceanus/feature/dms–the-climate-gas-youve-never-heard-of
Not a new idea – but still an interesting and informed article on DMS – critical in cloud and rain nonlinear dynamics.
“Marine stratocumulus cloud decks are regarded as the reflectors of the climate system, returning back to space a significant part of the incoming solar radiation, thus cooling the atmosphere. Such clouds can exist in two stable modes, open and closed cells, for a wide range of environmental conditions. This emergent behavior of the system, and its sensitivity to aerosol and environmental properties…” https://aip.scitation.org/doi/10.1063/1.4973593
Sulphur compounds are central to the nonlinear rain dynamic – water drops form around aerosols – dropping rain from the center of a closed cloud cell creates an open cell as moisture is depleted. Warm oceans and air – with more sulfur compound and more turbulent mixing – form rain more quickly. It tends to rain less and be more cloudy over a cool ocean than on a warmer ocean surface.
You wrote “Maximum emissions (cooling effect) and cooling switched to warming? Does that make sense? Clearly not! SO2 is not the knob, just like CO2.”
Human anthropogenic SO2 aerosol emissions peaked in 1976, with their cooling effect.
The subsequent warming was caused by continued reduction in the amount of those peak emissions due to global Clean Air efforts, down from 136 Megatons in 1976 to 111 Megatons in 2014. SO2 aerosols have a dimming effect, and fewer of them results in less dimming, hence warming (caused by human actions).
This clearly makes sense, and SO2 IS the control knob.
It clearly doesn’t make any sense (if you think that human SO2 emissions have such a major cooling effect) that at the peak/plateau of the human SO2 emissions cooling changed to warming.
Let me give you an analogy:
If you are outside on a hot summer day, and a cloud passes over you, you will initially experience some cooling, then renewed warming when it leaves.
SO2 aerosols (both volcanic and anthropogenic) have the same climatic effect: cooling when they are present, then renewed warming as they are reduced in quantity or completely removed (as when they settle out after a volcanic eruption).
The NASA fact sheet on SO2 aerosols states “Stratospheric SO2 aerosols reflect sunlight, reducing the amount of energy reaching the lower atmosphere and the Earth’s surface, cooling them… Human-made sulfate aerosols absorb no sunlight, but they reflect it, thereby reducing the amount of sunlight reaching the Earth’s surface.”
You need to accept the fact that warming naturally occurs when intervening dimming aerosols dissipate.
As Dr. C noted, the IPCC AR5 greatly reduced estimates of aerosol effects.
We do know that whatever effects SO2 emissions may have had, it is effectively zero going forward for the US and probably also Europe based on emissions:
At this point, I have to ask you: can you read the graph? At the peak/plateau of the SO2 emissions, cooling changed to warming. Emissions in the 80s/90s were 2x as high as the emissions in the 50s, when according to you they caused cooling.
Some scientists know that the explanation “global dimming/brightening” for the T signals is wrong. Climate also changes without any external forcings, as someone could have forgotten… and the aerosol effects must have a lag in the T signal of years to 1-2 decades.
Yes, I can read the graph.
After 1976, emissions decreased, and warming naturally occurred.
And, yes, I (and NASA) claim that SO2 emissions cause cooling.
You appear to believe that, after 1976, I claim that SO2 emissions no longer caused cooling, but switched instead, to causing warming (which, of course, is nonsense).
No, the warming was simply caused by there being fewer of them in the atmosphere, due to Clean Air efforts.
No, you cannot claim that, because after aerosol emissions and concentrations start to decline, the T signal lags by some years. The cooling peak must come some years after the turning point in those concentrations, just as the NH is not hottest on 21 June, but about 2 months later…
You stated earlier that “Climate also changes without any external forcings”. I think not, but I could be wrong.
Could you give me an example?
It is well known in climatology, that the highly dynamical climate system acts like every dynamical system: you need no external forcings to create changes. On regional and decadal time scales most of the climate variations are dynamical, stochastic and they can influence the global mean too.
” Formal uncertainty quantification of computer models is less relevant to science than an assessment of whether the model helps us learn about how the system works.”
I believe that the above statement is patently false. The model is a collection of code that represents what the coder and his team have decided is relevant for describing a system. If the underlying science is not there, as per climate change science, as you have admitted yourself when you looked for it and couldn’t find it, how do you expect that the simulation of the computer code will find the science? Only when the code is run during a simulation do we get any results. For those results to have any realistic meaning, the underlying science must have been accurately coded and accurately represented in the code. This is especially true when we know that all the models have been biased to show more warming with each addition of CO2. Expecting a human to learn something from a climate model computer simulation requires the belief that the model knows how the climate system works. An individual simulation result may accidentally portray some realistic result, but that is only because of the law of large numbers, whereby a team of monkeys given enough time could recreate Shakespeare. We know that it is impossible for the computer simulations to accurately portray the actual earth system for many reasons, with spatial resolution being only one of the many reasons. E.g., the smallest spatial resolution is 1.5 km.
In view of the above, any conclusion that a human could draw from a computer climate simulation is just as likely to be false as to be true. So it is no better than throwing darts at a dartboard.
You said “We do know that whatever effects SO2 may have had, it is effectively zero going forward for the US and probably also Europe based on emissions”
Regardless of what happens with anthropogenic SO2, we will always have effects whenever there are SO2 emissions from a large volcanic eruption.
And with respect to global anthropogenic SO2 emissions, they are far from zero, totaling 111 Megatons in 2014 (the U.S. was 4.25 Megatons).
Should total global SO2 emissions somehow be reduced to zero, we could expect temperatures to rise by 2.0-2.5 deg. C.
(China reduced its emissions by 29 Megatons between 2014 and 2016, and we had the 2015-2016 El Nino, where anomalous temperatures rose from 0.73 to 0.99 deg. C.)
So SO2 effects going forward, at this time, are far from zero.
Judith, if I take your segmentation of uncertainty into nature, location and level, then I am missing one major point.
There are two kinds of models/theories:
– deterministic theories. These theories yield a unique result/prediction which can be computed with arbitrary accuracy. Assuming that numerical constants and parameters are known, there is no uncertainty in these models. Most physical theories are of this kind, and the uncertainty can always be reduced by measuring the constants and parameters entering the theory with greater accuracy.
– probabilistic theories. These are theories where only probabilities of a result can be computed; the number of possible results is infinite, which makes it impossible to predict a unique result (or a set of results).
There are two fields of physics where only this kind of theory can be used: quantum mechanics and chaotic systems.
These theories, like deterministic theories, can also compute the subject of their study (a probability of a result) with arbitrary accuracy, provided that the constants and parameters can also be known with arbitrary accuracy.
By using your categories, these domains don’t belong to either of the two natures you defined. They obviously do not contain epistemic uncertainty, and they do not contain aleatory uncertainty either (there is nothing random in QM or in deterministic chaos). If I had to define the nature of these models/theories, I would say “intrinsically probabilistic”, i.e. no unique final state (or set of final states) can be predicted; only probabilities may be known.
As for the uncertainty location, the classical one related to knowledge of the values of constants and parameters applies.
For the level, here clearly the “statistical uncertainty” applies, but should be slightly modified by saying “outcomes can never be known, but precise, decision-relevant probability statements can be provided.” This is exactly what quantum mechanics does.
Now the point I was missing was that climate models are a strange hybrid which has so far never existed in science.
They are deterministic models computing unique outcomes for systems where no unique outcomes may be predicted and only probabilities of the possible outcomes may be known .
It is like constructing a deterministic model of quantum mechanics which computes energy levels in a molecule. Even if every energy level predicted with such a model belonged to the set of possible energy levels, the probability that any of the predicted outcomes would occur would be exactly 0. This is simply due to the fact that the measure of any set of single states is 0.
The same conclusion applies, of course, to chaotic systems: any finite set of computed outcomes is of measure 0 (i.e. the probability to observe any one of them is 0).
This is not a detail; it is a fundamental difficulty, because it is impossible to construct empirically a continuous probability distribution in an infinite-dimensional phase space using sets which are all of measure 0.
There is a theoretical answer, which would be to construct the Frobenius-Perron operator for the climate dynamics, assuming (or proving) that a unique invariant “natural” SRB measure exists for such dynamics over a long enough period of time. But such an undertaking is totally out of reach of anything we know today, and probably for many decades in the future.
So how, then, to qualify the “uncertainty” in climate models when they are applied to intrinsically probabilistic systems, yet no probability distribution is known or computable with current knowledge?
They are certainly impacted by most of the uncertainty locations you defined, but aren’t these uncertainty sources all negligible in the face of the fact that no invariant probability distribution of the system’s dynamics is known, because the models use the wrong deterministic paradigm instead of the true probabilistic one?
The Lorenz model is a simple model of the same kind. It is not deterministic, but has states limited by an attractor, and the attractor space is deterministic for a given set of parameter values in the Lorenz model. If you change a parameter, for example the Rayleigh number that defines a forcing, you get another attractor, but that shape is also deterministically known because the available energy states have changed by a known amount.
“Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751
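The sensitive dependence described in that quote is easy to reproduce numerically. A minimal sketch, using a simple forward-Euler integration of the Lorenz 1963 equations with the standard parameters (the step size, run length and perturbation size here are illustrative choices, not values from the thread):

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz 1963 equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, n_steps):
    states = [state]
    for _ in range(n_steps):
        state = lorenz_step(state)
        states.append(state)
    return states

def dist(p, q):
    # Euclidean distance between two points in the (x, y, z) phase space.
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(p, q)))

# Two runs differing only by 1e-10 in the initial x value.
a = trajectory((1.0, 1.0, 1.0), 3000)
b = trajectory((1.0 + 1e-10, 1.0, 1.0), 3000)

early = dist(a[100], b[100])    # early on: still microscopic
late = dist(a[3000], b[3000])   # later: comparable to the attractor's size
```

Early in the run the two trajectories are indistinguishable; by the end the separation has grown by many orders of magnitude, which is the "no longer a single, deterministic solution beyond a certain lead time" point in the quote.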
One could go a lot further only to be met by more completely invented, uncomprehending narratives from Jimmy.
Tomas wants to define deterministic in a different way from Lorenz. Lorenz would not say his model is deterministic, because its predictability has a limit, and that was the whole point.
Koutsoyiannis may redefine random and deterministic as unpredictable or predictable.
The trajectory of solutions of Lorenz’s simple equations are completely deterministic but still unpredictable beyond small lead times. Collectively in climate models there are 1000’s of plausible solutions starting from conditions within the limit of precision of measurement of input data. This samples the solution space and may – one day – allow for probabilities to be assessed.
Thus there is the evolution of uncertainty into irreducible imprecision in forecasts. “Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709
Climate evolves as the system is pushed by greenhouse gas changes and warming – as well as solar intensity and Earth orbital eccentricities – past a threshold at which stage the components start to interact chaotically in multiple and changing negative and positive feedbacks – as tremendous energies cascade through powerful subsystems. Some of these changes have a regularity within broad limits and the planet responds with a broad regularity in changes of ice, cloud, Atlantic thermohaline circulation and ocean and atmospheric circulation.
Dynamic climate sensitivity implies the potential for a small push to initiate a large shift. Climate in this theory of abrupt change is an emergent property of the shift in global energies as the system settles down into a new climate state. The traditional definition of climate sensitivity as a temperature response to changes in CO2 makes sense only in periods between climate shifts – as climate changes at shifts are internally generated. Climate evolution is discontinuous at the scale of decades and longer.
This is a very different way of thinking about fundamental modes of operation. One at odds with a Procrustean urge to make climate fit the meme.
Climate models only differ from the Lorenz model in the number of variables. The attractor is defined by the energy in the system, and that is determined by the forcing in both cases. In that sense the climate state is deterministic if defined by the attractor in phase space. The Lorenz attractor is also deterministic.
You assume – inter alia – that probability precludes determinism. In models there is no unique deterministic solution – hence solutions are probabilistic.
Exactly???? Model results are at best probabilistic – and individual trajectories exponentially divergent. It is not as if the experiment has not been done. Nor can we determine an emergent climate state on the basis of a few poorly constrained parameters.
The large ensemble experiment has been done, and there is a spread by natural internal variability, but also a constraint on the mean change by the forcing change which is because of energy conservation.
Divergence is the result of sensitive dependence on initial conditions. Jimmy lacks even the most basic of the ideas of chaos.
Forcing may bias models to warmer conditions – but that may not hold in the real climate system where there is epistemic uncertainty.
This was the result of perturbing just the initial conditions and keeping the coupled model the same. Note how the common forcing dominates but there is natural internal variability like we see in the temperature record too.
The point is that initial conditions influence the trajectories of solutions. This is the LENS Earth community model you trot out. The starting points there are different by 0.000000000000001 degree Kelvin and diverge by around 0.4K in 2100 around a model trajectory. But realistic imprecision in inputs is many orders of magnitude greater than this – thus there are very many 1000’s of plausible trajectories of solutions possible in any model.
So there are many divergent trajectories – and you can have as many 0.4K solutions about each of those as you can pay for.
Climate evolves on its own trajectory, and the difference can be 10 or 12 degrees K, I read somewhere recently.
The internal variability quickly grew to about 0.4 C and saturated at that because that is the internal variability in this coupled system. It, like the climate, is tightly constrained by the forcing which governs the total energy available to the internal modes. To deviate beyond internal variability you have to add or subtract energy somehow, which is what changing the forcing does.
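The "spread grows and then saturates" behavior under discussion can be sketched with a toy ensemble: the same model for every member (here the Lorenz equations as a stand-in, with purely illustrative settings), initial conditions perturbed far below measurement precision, and spread measured as the standard deviation across members:

```python
import statistics

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz 1963 equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Eight ensemble members whose initial x values differ by multiples of 1e-9.
members = [(1.0 + i * 1e-9, 1.0, 1.0) for i in range(8)]

early_spread = None
late_spreads = []
for step in range(6000):
    members = [lorenz_step(m) for m in members]
    spread = statistics.pstdev(m[0] for m in members)  # ensemble std dev of x
    if step == 100:
        early_spread = spread
    if step >= 4000:
        late_spreads.append(spread)

# Average spread over the late part of the run: the saturation level.
late_spread = sum(late_spreads) / len(late_spreads)
```

The early spread is still at round-off scale; the late spread has saturated at a level set by the system's own internal variability, no matter how tiny the initial perturbations were.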
Absolutely – growing on a foundation of sand. And there is that other component of a large natural variation.
You have natural internal variations larger than ENSO in mind? Not discovered yet.
You call Milankovitch cycles “internal variations”? Interesting.
The insolation changes are modest and are only one factor – the large response is from internal variability.
aka positive feedback.
What does he imagine large internal variability is?
Internal variability is random, while positive feedback is in the same sense as a forcing change, such as orbital forcing.
Internal variability is not random – it is deterministically chaotic. But call it internal variability from multiple negative and positive feedbacks.
Thomas Milanovic: there is nothing random in QM
Outcomes can not be predicted in advance nor reproduce exactly. How is there “nothing random” in QM?
For the level, here clearly the “statistical uncertainty” applies, but should be slightly modified by saying “outcomes can never be known, but precise, decision-relevant probability statements can be provided.” This is exactly what quantum mechanics does.
Outcomes can never be known? or Outcomes can never be predicted in advance?
This is not a detail; it is a fundamental difficulty, because it is impossible to construct empirically a continuous probability distribution in an infinite-dimensional phase space using sets which are all of measure 0.
That is no more a problem than is modeling human height or increments in Brownian motion with normal distributions. Measurements are only precise down to intervals, and the probability laws give the probabilities of intervals.
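The interval point can be made concrete: for a continuous distribution every single outcome has probability exactly 0, yet every interval of positive length has a positive, computable probability. A minimal sketch with a standard normal (the choice of distribution is only illustrative):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Cumulative distribution function of N(mu, sigma^2) via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def interval_prob(a, b, mu=0.0, sigma=1.0):
    # P(a <= X <= b) for X ~ N(mu, sigma^2).
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

p_point = interval_prob(1.0, 1.0)       # a single point: exactly 0
p_interval = interval_prob(-1.0, 1.0)   # within one sigma of the mean: about 0.683
```

Measurements are only precise down to intervals, and the probability law assigns those intervals nonzero mass, so the measure-zero status of individual points never enters in practice.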
The Lorenz model has nothing to do with what I said, and it is absolutely not of the same “kind”. Btw, the Lorenz model IS strictly deterministic!
a) Lorenz phase space is low finite dimensional
b) Lorenz is ergodic
c) A natural unique SRB measure exists and is known
d) The corresponding probability density is known
e) Lorenz has continuous dependence on initial conditions
Nothing of the above is true or known for climate models, so it is irrelevant what the Lorenz system does or does not do.
What does a “shape” mean in an infinite-dimensional Hilbert space anyway?
If you call the Lorenz model deterministic, so is a climate model driven by a CO2 level. It has a phase space of possible states constrained by the energy available. Change the CO2 level, and you change that phase space. The states don’t overlap because the energy in the system is different, same as when you change the Rayleigh number in the Lorenz system.
Models have a probabilistic solution space constrained by time and solution divergence. The experiment has been done.
Over any finite time Δt solution trajectories starting from feasible initial conditions diverge. The solution space just keeps getting bigger over time. Reducing irreducible imprecision depends on more precise inputs, smaller grids and reduced simulation periods.
I am not starting this one again. What you have there is different solutions diverging due to different model parameters, Different models of course have different sensitivities. It would be more remarkable if they didn’t diverge. The climate on the other hand only has one set of parameters, so this experiment does not represent that.
I don’t have a clue what this free range driveling means. We are not yet talking about different models. That would bring in the science shattering sham that is opportunistic ensembles. Couldn’t do that. Alarmists don’t have a freakin’ clue.
What I was talking about is a single model starting from within the range of imprecision of parameters – just as with the Rowland et al model – evolving into many non-unique and diverging solutions. Climate evolves – as I said somewhere very recently – from small changes – including greenhouse gases – that initiate large internal responses.
Anthropogenic CO2 is likely a blip – peaking within decades and with large-scale soils and habitat sequestration not only feasible but necessary for other reasons. CO2 is likely more variable than in the ice core records – so a blip may not matter much. But blips can cascade through powerful Earth sub-systems bringing ice, snow, wind and cold. Here’s a great site for cold.
The Quaternary saw big changes in ENSO and AMOC. And then there was the mid-Pleistocene transition, some 1,250,000 to 700,000 years ago, to quasi-100ky ice ages. CO2 here follows biology and natural emissions.
When you change the model parameters, you change the model’s sensitivity. The results will diverge because of that even for the same identical forcing and initial conditions. You just see the different sensitivities spreading the results because nothing changed except model parameters.
Other starting points in the model are scientifically plausible. What chaotic trajectory is right?
Some sets of parameters are better than others. Methods such as emergent constraints help sort out the better ones using past and current behavior, and those are the ones to believe more. Some others would be out to lunch when evaluated this way, and are best ignored. These trajectories can be thinned down.
There are large changes in TOA radiant flux due to changes in ocean and atmospheric circulation. This is epistemic uncertainty. But this is still different from the initial condition issue. If you start from points within the limits of input precision, trajectories will diverge from intrinsic nonlinearity. Jimmy is still talking about individual runs of different models in CMIP ensembles.
I won’t reply, because what you said makes no sense.
Won’t reply? I should be so lucky.
You apparently don’t know what my comment was about; it was addressed to Judith, who does.
What part of “Deterministic does not imply predictable” do you not understand?
And for that matter, what part of “the Lorenz system is irrelevant for the climate dynamics (i.e. a), b), c), d), e))” is not clear enough?
Frankly, you should stop these embarrassing comments, which make no sense whatsoever.
“Shapes” in a Hilbert space indeed :)
Let’s try it this way.
– Maybe you don’t understand that attractors have shapes even in multidimensional spaces.
– Maybe you don’t see that the Lorenz attractor does change its shape when you change its Rayleigh number parameter.
– Maybe you don’t see that climate models are like the Lorenz model, just with millions of times more parameters, but just as deterministic in the sense of being repeatable with the same initial state, and also with an attractor that depends on the imposed forcing.
As with the Lorenz model, the forcing governs the energy in the system which limits the space occupied by the attractor.
If you see a big difference between the Lorenz model and climate models or their attractors, you have to spell it out because mathematically they would be the same except for the number of variables/dimensions.
Lorenz had a convection model – in which fluid transport was a factor.
“The Rayleigh number is defined as the product of the Grashof number, which describes the relationship between buoyancy and viscosity within a fluid, and the Prandtl number, which describes the relationship between momentum diffusivity and thermal diffusivity.”
The set of simple, nonlinear equations associated with Lorenz’s butterfly has a simple state space that can be numerically mapped in time. Run it again from slightly imprecise starting points and there is a divergent trajectory in solution space, with uncertainty saturating at a level intrinsic to the set of equations.
Changing the temperature difference, which is a forcing, changes the Rayleigh number. This changes the energy in the system much like changing the forcing changes the energy in the climate system. If you change this parameter in the Lorenz system, it will modify the shape and position of the “butterfly” attractor.
The simple set of the nonlinear Lorenz equations was developed to illustrate an underlying principle. It has as much to say about climate change as Tim Palmer’s magnetic pendulum. At least there we have a wedge.
It has a forcing term that is also analogous to the one in climate. Changing the forcing changes the energy states available to it and gives a new attractor. Climate change is like that too because there is now more energy in the system whether you like it or not.
It has a relatively small change in initial conditions – that’s the fundamental point. Different attractors in this very simple system are unpredictable – let alone the high dimensional state space of climate. Whether you like it or not.
The part that is predictable is that the energy increases when you add GHGs. It’s insulation.
If you don’t like the Lorenz model analogy take a different model like a gas in a box where there are 6N degrees of freedom for N atoms. This phase space has an attractor for a given temperature. Changing the temperature changes the attractor. This is equivalent to changing the forcing in a climate model. It gives a different mean temperature and a different attractor, just as when you change the forcing in other models. The Lorenz model can’t emulate climate change unless you are allowed to change one of its parameters (rho) because climate change is a shift in the attractor position.
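The claim that changing the forcing parameter shifts the attractor can be checked directly: integrate the Lorenz equations for two values of rho and compare the long-run time average of z, discarding an initial transient so only attractor states are counted. All numerical settings below are illustrative choices, not values from the thread:

```python
def lorenz_mean_z(rho, dt=0.005, n_steps=40000, transient=4000):
    # Long-run time average of z for the Lorenz equations at a given rho,
    # using forward-Euler integration and skipping the initial transient.
    sigma, beta = 10.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 1.0
    total, count = 0.0, 0
    for step in range(n_steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        if step >= transient:
            total += z
            count += 1
    return total / count

mean_z_28 = lorenz_mean_z(28.0)  # classic chaotic regime
mean_z_35 = lorenz_mean_z(35.0)  # stronger "forcing": attractor sits higher in z
```

Raising rho raises the mean of z, i.e. the same equations now wander over a different, displaced region of phase space, which is the attractor-shift analogy being argued here.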
Proceeding from a false attribution to an imaginary equivalence. It is pure narrative intended as a post hoc justification for AGW in a nonlinear world.
Re your post of July 15, 1:32 pm.
I have to stick my oar in again:
You stated “It is extremely common in science that the correct theory supported by evidence is rejected by the consensus”.
I see nothing in your comments with respect to atmospheric SO2 aerosols, both volcanic and anthropogenic, being the control knob for Earth’s temperatures, even though there is MUCH empirical evidence that this is indeed the cause of Climate Change.
Why do you reject this explanation without even exploring it?
Pingback: Weekly Climate and Energy News Roundup #321 | Watts Up With That?
All this uncertainty discussion and the only certainty is that the underlying physics are most certainly incorrect.
I don’t have a clue what this free range driveling means. We are not yet talking about different models.
Could not say it better, Robert; I am exactly in the same case. I don’t post here often enough to know everybody, but this JimD is definitely an enigma for me. Either he is a troll or really limited in the understanding department.
Otherwise I agree with what you say about the cascade; there is actually an interesting paper by Tim Palmer conjecturing that there exists an upper bound for the Navier-Stokes predictability barrier (independent of the accuracy of initial conditions), and it indeed uses the cascade concept, which he calls “the real Butterfly effect”.
JCH = JimD, same user, 2nd account. Nothing to worry about… trolls are always and everywhere :-)
Fabulous. I’ll look it up. Bye.
Cascade, to my knowledge, came from a Quaternary Science journal article many years ago; I forget even who.
It might be useful to distinguish (a) uncertainty as to the correct value of ECS, if there is such a thing, from (b) uncertainty as to what, if anything, ECS has to do with what the actual climate will do in the future. ECS is not a prediction, so these are two very different uncertainties.
Is there even a literature on (b)? It may be the more important of the two.
but ECS is handled like a 3 day weather prediction….!?
I agree with you.
Yes, ECS is often reported and discussed as though it were a prediction, which is a grand blunder that needs to be corrected. ECS may be interesting and complex, hence the debate, but it is an abstraction that has little to do with what is likely to happen. Treating it as a prediction tacitly assumes that CO2 is the temperature control knob, which we know is false. (Even the IPCC lists a dozen other forcings.)
In this respect ECS is analogous to describing what will happen to a feather dropped in a vacuum. In contrast, future temperature in the actual climate is more like a feather dropped in a windstorm. The gravity based behavior is highly unlikely and the feather may well go up rather than down. Thus increasing CO2 in no way precludes major global cooling. (I have yet to see a model that allows for such cooling, which means they are probably falsely constrained.)
In short, whether ECS is 1.2 or 4.2 probably tells us nothing about future climate, but that is not how the issue is treated. It is treated as though the future were being debated, which is a great mistake indeed. So the real question (hence uncertainty) is what, if anything, is ECS good for?
Here is my latest for CFACT: Maladaptation to bogus climate change
People seem to think that adaptation to climate change is a free good. Some people tout it as a “no regrets” alternative to questionable mitigation efforts such as decarbonization. It is certainly not free and if the projected changes fail to occur it can easily be very bad.
Here are just a few examples, among many possibilities:
1. A billion dollar irrigation system where none is presently needed, based on a projected 50% drop in annual rainfall, which does not happen.
2. A billion dollar hydro power dam based on a projected 100% increase in annual rainfall, which does not happen.
3. A billion dollar dike system based on accelerating sea level rise, which does not happen.
Adaptation to projected climate change is actually a hugely expensive gamble. The climate change scare is by necessity based on horrendous projected adverse impacts. Adapting to these projections, as opposed to reality, will therefore be horrendously expensive. If these monster changes do not occur, which is extremely likely, then these monster adaptation efforts will be wasted money, effort and hardship.
Note that simply preparing for extreme events like droughts and floods is not climate change adaptation. These things occur naturally. They are not climate change. In reality this nice sounding idea of adaptation is a yawning money pit, driven by deliberately scary computer projections. Far from being benign, adaptation is potentially disastrous. It would be better called maladaptation.
This is especially true in poorer developing countries, which desperately need real developments like clean water and electricity. Diverting funding from these real needs, in the bogus name of climate change adaptation, would be incredibly cruel. This fallacy of maladaptation is rampant in the UN, but unfortunately it is to be found lurking big time in the Trump Administration as well. Its highest form is in something called Executive Order 13677. This is the one big Obama climate scare Executive Order that President Trump has not rescinded.
EO 13677 is titled “Climate-Resilient International Development.” This Order basically instructs all Federal Agencies to incorporate climate change activism into all foreign activities, especially in the roughly $40 billion a year spent on foreign aid. While restriction of fossil fuel use is included, the thrust of the Order is adaptation, which is also called resilience.
There is more.
Rescind EO 13677!
I agree that your examples would be undesirable outcomes.
Generally, adaptation can be defined rather open-endedly depending on the lens one uses: either as an implementation narrow in scope, or as something very broad. By definition, adaptation simply means "lowering the risks posed by the consequences of climatic changes." But your description follows the generic CAGW line; the media also push the broad, all-encompassing definition you describe. I believe it is too big to quantify adequately, and riskier than the problem it tries to solve. A massive federally funded beast would be disastrous on many levels.
I would prefer a limited adaptive approach that is already occurring with little formality, though I probably shouldn't describe it using the same vernacular, to avoid confusion. My thinking is closer to a wait-and-see approach: adapt and evolve to circumstances and exploit technology. Use common-sense risk management, open-eyed, not reactionary; a do-no-harm position. Capitalism by default naturally gravitates toward eco-friendly solutions as technology evolves. Energy Star, SEER ratings, etc. are all designed to lower energy costs; products designed to exploit these ratings reduce costs, invite further innovation, and thus reduce the CO2 footprint incrementally and on an ongoing basis.
A minimal adaptive policy should simply encompass the ideas behind good stewardship: a farmer's choice to change crops in order to exploit circumstances as they arise, moving with any change. Use all common-sense means of alleviating risks, and use technology to move incrementally toward an ever-smaller CO2 footprint. I would include Exxon's development of biofuels as another example of naturally evolving adaptation with potentially large, paradigm-shifting repercussions. These things are real, not some fuzzy math equation juxtaposed to a mile-high stack of scaremongering propaganda. The tech curve is bigger and faster; it will overcome the warming curve. But maybe not bigger than the "one world fascist cult" curve, which, with little exaggeration and mostly in unspoken terms, is what free-thinking, non-cultist individuals are really fighting.
Mop: Most of what you are talking about sounds like mitigation, not adaptation. The standard language here is that there are three primary responses to the threat of climate change: 1. Mitigation, mostly cutting emissions, 2. Adaptation to reduce damage. 3. Plus compensation of those still damaged, by those who caused the climate to change (who should also pay for 1 and 2).
Really? Yes, I understand mitigation. I was previously going to say your #2 was more mitigation-centric, but felt it distracted from the more salient overarching theme you are trying to express. Yes, Exxon's work, for example, could definitely be seen as mitigation, especially if its sole purpose was to defeat evil big oil. A quite constructive, gracious exchange. Thanks.
– Maybe you don’t understand that attractors have shapes even in multidimensional spaces.
– Maybe you don’t see that the Lorenz attractor does change its shape when you change its Rayleigh number parameter.
– Maybe you don’t see that climate models are like the Lorenz model, just with millions of times more parameters, but just as deterministic in the sense of being repeatable from the same initial state, and also with an attractor that depends on the imposed forcing.
As with the Lorenz model, the forcing governs the energy in the system which limits the space occupied by the attractor.
If you see a big difference between the Lorenz model and climate models or their attractors, you have to spell it out because mathematically they would be the same except for the number of variables/dimensions.
God, you are still not getting it, and you continue writing nonsense even after you have been shown that it is nonsense.
– Define "shape" in an infinite-dimensional function space. You still didn't. Hint: closed bounded sets need not be compact in infinite-dimensional function spaces, while they always are in any space isomorphic to R^n.
– Prove that 1,000,000 = infinity. This one should not be hard to refute.
– I have forgotten more about Lorenz systems than you will ever know, but this is off topic anyway.
– I already spelled out why Lorenz is irrelevant for climate dynamics (reasons a) through e) in the post above). Note that there is a difference between the climate and climate models. I am not speaking about the latter.
– Don't insult mathematics. What you are doing is anything but mathematics. If for you the topology of finite-dimensional spaces equals the topology of infinite-dimensional function spaces, then I am afraid there is no hope of you ever learning anything about nonlinear dynamics.
If you are comparing the Lorenz model with the full climate dynamics and not with climate models, you have missed the way people use the Lorenz model. Climate models have millions of variables (not an infinite number) and an attractor that depends on the forcing in the same way a Lorenz model's attractor depends on its own forcing (the Rayleigh number). The similarity is between the Lorenz model and climate models. The mathematics is similar because both use deterministic prognostic equations for all their variables in terms of the others. Both have attractors that are subsets of a multidimensional space. In the Lorenz model, the "shape" in question is the well-known butterfly one; there are multidimensional analogues that are subsets of their spaces. You don't like to say an attractor has any shape, and that's fine, but that is just semantics and irrelevant to the discussion. The point is that an attractor in these systems is defined by the energy available. A changing climate changes the energy available, and climate models go to previously unavailable states as the forcing is increased.
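Since both commenters appeal to the Lorenz 1963 system, a minimal sketch may make the disputed properties concrete: the system is deterministic (repeatable from the same initial state), and the region its trajectory occupies grows as the forcing parameter r (the Rayleigh number) is increased. The step size, run length, and the crude "extent" metric below are illustrative choices of mine, not anything from the thread:

```python
import numpy as np

def lorenz_step(state, r, dt=0.005, sigma=10.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (r - z) - y
    dzdt = x * y - b * z
    return state + dt * np.array([dxdt, dydt, dzdt])

def attractor_extent(r, n_steps=40000, transient=4000):
    """Crude size measure: the largest z visited after an initial transient."""
    state = np.array([1.0, 1.0, 1.0])
    z_max = 0.0
    for i in range(n_steps):
        state = lorenz_step(state, r)
        if i >= transient:
            z_max = max(z_max, state[2])
    return z_max

# Same initial state and parameters -> identical trajectory (deterministic);
# a larger forcing parameter r yields a larger occupied region.
print(attractor_extent(28.0))
print(attractor_extent(45.0))
```

Whether this finite-dimensional picture transfers to the real climate, as opposed to climate models, is exactly what is disputed above; the sketch only illustrates the Lorenz-model side of the analogy.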
“I would rather have questions that can’t be answered than answers that can’t be questioned.”
― Richard Feynman
For the greenhouse theory to operate as advertised, a GHG up/down/"back" LWIR energy loop is required to "trap" energy and "warm" the earth and atmosphere.
For that up/down/"back" radiation energy loop to operate as advertised, the surface must radiate as an ideal black body, emissivity 1.0, with LWIR of 396 W/m^2. (K-T diagram)
The surface cannot do that, because a contiguous participating medium, i.e. atmospheric molecules, moves over 50% ((17 + 80)/160) of the surface heat through non-radiative processes, i.e. conduction, convection, and latent evaporation/condensation. (K-T diagram) Because of the contiguous turbulent non-radiative processes at the air interface, the oceans cannot have an emissivity of 0.97.
No GHG energy loop & no greenhouse effect means no CO2/man caused climate change and no Gorebal warming.
Nick Schroeder, BSME, PE
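The fraction asserted in the comment above follows directly from the Kiehl-Trenberth surface numbers it uses (assumed here: 17 W/m2 sensible heat, 80 W/m2 latent heat, against 160 W/m2 of solar absorbed at the surface), a quick check:

```python
# Surface energy terms from the K-T diagram as quoted in the comment (W/m2).
sensible, latent, absorbed_solar = 17.0, 80.0, 160.0

# Share of the absorbed surface energy carried by non-radiative processes.
non_radiative_fraction = (sensible + latent) / absorbed_solar
print(f"non-radiative share of surface heat: {non_radiative_fraction:.1%}")
```

This is just over 60%, i.e. the "over 50%" the comment claims; the arithmetic says nothing about the physical conclusions drawn from it.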
Well, but the "free IR emission" to space increases with increased CO2 density, i.e. the upper levels of the troposphere must become somewhat warmer, and because of the nearly stable lapse rate you'll find this warming throughout the troposphere. You do not need the back-radiation claims to call it a GHE, but you do need a vertically well-mixed troposphere, and in the global mean, it is.
”How climate science handles uncertainty matters.”
The current climate uncertainty, together with the risks connected to it, is the main problem to be solved. You need only understand either a) that anthropogenic CO2 emissions to the atmosphere do not dominate the increase of CO2 content in the atmosphere, or b) that according to observations the trends of CO2 content in the atmosphere follow the trends of climate temperature, and not vice versa. Either one alone proves that human CO2 emissions are not any threat of climate warming; see e.g. my link https://judithcurry.com/2017/05/02/nyes-quadrant/#comment-848558 :
As anthropogenic CO2 emissions do not dominate the CO2 content of the atmosphere, and as even the total CO2 content of the atmosphere does not dominate the climate temperature, the influence of CO2 from fossil fuels, and even from other anthropogenic CO2 sources, is so minimal that it cannot be distinguished from zero. This means that cutting anthropogenic CO2 emissions from any source is unnecessary and causes only losses.
An uncertainty stated at the 60, 95, or 99.7% confidence level tends to give an impression of risk that is higher than the real risk.
For a normal distribution, there actually is a mathematical relationship between risk and uncertainty at a stated confidence level.
Risk = Uncertainty at 95 % confidence level * 0.2
The relationship was derived within the field of measurement of oil and gas where it solved a long-standing problem with cost-benefit evaluations. The relationship is now used to quantify risk for loss due to measurement (also prediction) uncertainty and perform cost-benefit evaluations of alternative designs.
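A minimal sketch of the stated rule of thumb, taking the 0.2 factor and the normal-distribution assumption at face value from the comment (the factor is not re-derived here); the helper names `u95_from_sigma` and `risk_from_u95` are my own, not from the cited paper:

```python
# Coverage factor k ~ 1.96 gives the expanded uncertainty at the 95%
# confidence level for a normally distributed error.
K_95 = 1.96

def u95_from_sigma(sigma):
    """Expanded uncertainty at the 95% confidence level."""
    return K_95 * sigma

def risk_from_u95(u95):
    """The commenter's stated relationship: Risk = U95 * 0.2."""
    return 0.2 * u95

# Example: a standard uncertainty of 1.0 (in any unit) expands to 1.96 at
# 95% confidence, giving a stated risk of roughly 0.39 in the same unit.
print(u95_from_sigma(1.0), risk_from_u95(u95_from_sigma(1.0)))
```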
See my post on this with a link to the original paper (open source):
There is an enormous range in energy fluxes in the CMIP5 models (CMIP5 = Coupled Model Intercomparison Project, Phase 5) used by the IPCC in its Fifth Assessment Report.
The energy balance over land and oceans: an assessment based on direct observations and CMIP5 climate models – Wild et al 2014
Here are some examples of the range of energy fluxes spanned by the models (see Table 2: Simulated energy balance components averaged over land, oceans and the entire globe from 43 CMIP5/IPCC AR5 models at the TOA, atmosphere, and surface):
Surface (all units: W/m2; averages taken over the period 2000-2004):
Solar down: 18.6
Solar up: 10.5
Solar net: 17.2
Thermal down: 18.5
Thermal up: 11.8
Thermal net: 15.7
Net radiation: 17.2
Latent heat: 13.9
Sensible heat: 13.1
On the other hand, the current energy accumulation on earth is estimated from observations of ocean warming to be 0.6 W/m2 (ref.: "considering a global heat storage of 0.6 W m-2", IPCC AR5 WGI, page 181, Section 2.3.1, Global Mean Radiation Budget).
The range in energy fluxes between the models is more than tenfold the observed energy imbalance.
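The comparison can be made explicit with the numbers already quoted: the inter-model ranges from Wild et al. (2014, Table 2) against the 0.6 W/m2 observed imbalance cited from AR5. A minimal sketch:

```python
# Inter-model ranges of surface energy fluxes (W/m2), as quoted above from
# Wild et al. 2014, Table 2 (43 CMIP5/AR5 models, averaged 2000-2004).
model_range_wm2 = {
    "solar down": 18.6, "solar up": 10.5, "solar net": 17.2,
    "thermal down": 18.5, "thermal up": 11.8, "thermal net": 15.7,
    "net radiation": 17.2, "latent heat": 13.9, "sensible heat": 13.1,
}
OBSERVED_IMBALANCE = 0.6  # W/m2, IPCC AR5 WGI Sec. 2.3.1

for name, spread in model_range_wm2.items():
    ratio = spread / OBSERVED_IMBALANCE
    print(f"{name}: inter-model range is {ratio:.0f}x the observed imbalance")
```

Every quoted flux range is well over ten times the observed imbalance, which is the point of the comparison.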
I think it is fair to assume that the models would have been all over the place if not constrained by tuning to fit various observations. But a model that has been tuned to match history doesn't necessarily have any predictive skill. The model may seem right, but for the wrong reasons.
More than that, the range in energy fluxes between the models demonstrates that many models must be wrong or inaccurate. The models cannot all possibly be right. A model may seem right for the wrong reasons, but that doesn't make it accurate or reliable.
I think it is fundamentally wrong to base policy on models that are wrong, or on models that may seem right for the wrong reasons.
The question that remains is: are there any models that have demonstrated predictive skill without significant systematic bias? I think the answer must be no. On the contrary, it has been demonstrated that the model predictions of tropospheric warming are systematically biased compared to observations.