by Judith Curry
I have been asked to write an Expert Report on climate models.
UPDATE: A final version of the report is attached [climate-models]. Thanks a ton to all who provided comments in the blog post and via email, I have incorporated many of these suggestions.
No, I can’t tell you the context for this request (at this time, anyways). But the audience is lawyers.
Here are the specific questions I have been asked to respond to:
- What is a Global Climate Model (GCM)?
- What is the reliability of climate models?
- What are the failings of climate models?
- Are GCMs a reliable tool for predicting climate change?
I’ve appended my draft Report below. I tried to avoid giving a ‘science lesson’, and instead focus on what climate models can and can’t do, particularly in policy-relevant applications. I’ve tried to write an essay that would be approved by most climate modelers; at the same time, it has to be understandable by lawyers. I would greatly appreciate your feedback on:
- whether you think lawyers will understand this
- whether the arguments I’ve made are the appropriate ones
- whether I’m missing anything
- anything that could be left out (it’s a bit long).
What is a Global Climate Model (GCM)?
Global climate models (GCMs) simulate the Earth’s climate system, with modules that simulate the atmosphere, ocean, land surface, sea ice and glaciers. The atmospheric module simulates evolution of the winds, temperature, humidity and atmospheric pressure using complex mathematical equations that can only be solved using computers. These equations are based on fundamental physical principles, such as Newton’s Laws of Motion and the First Law of Thermodynamics.
GCMs also include mathematical equations describing the three-dimensional oceanic circulation, how it transports heat, and how the ocean exchanges heat and moisture with the atmosphere. Climate models include a land surface model that describes how vegetation, soil, and snow or ice cover exchange energy and moisture with the atmosphere. GCMs also include models of sea ice and glacier ice.
To solve these equations on a computer, GCMs divide the atmosphere, oceans, and land into a 3-dimensional grid system (see Figure 1). The equations are then solved for each cell in the grid, repeatedly for successive time steps that march forward throughout the simulation period.
Figure 1. Schematic of a global climate model. https://upload.wikimedia.org/wikipedia/commons/thumb/7/73/AtmosphericModelSchematic.png/350px-AtmosphericModelSchematic.png
The number of cells in the grid system determines the model ‘resolution.’ Common resolutions for a GCM include a horizontal resolution of about 100-200 km, a vertical resolution of about 1 km, and a time stepping resolution that is typically about 30 minutes. While GCMs represent processes more realistically at higher resolution, the computing time required to do the calculations increases substantially at higher resolutions. The coarseness of the model resolution is driven by the available computer resources, and tradeoffs between model resolution, model complexity and the length and number of simulations to be conducted.
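The grid-and-time-step procedure described above can be sketched in a few lines of code. The example below is a toy illustration only – a one-dimensional grid with a simple heat-diffusion rule standing in for the full three-dimensional physics of a real GCM – but it shows the basic ‘update every cell, then advance the clock’ loop that a climate model executes over and over throughout a simulation.

```python
# Toy illustration only: a 1-D grid with a simple diffusion rule standing
# in for a GCM's 3-D grid and physics. Each time step, every cell is
# updated from its neighbors, then the clock advances.

def step(temps, diffusivity=0.1):
    """Advance the whole grid by one time step (periodic boundaries)."""
    n = len(temps)
    new = temps[:]
    for i in range(n):
        left = temps[(i - 1) % n]
        right = temps[(i + 1) % n]
        new[i] = temps[i] + diffusivity * (left + right - 2 * temps[i])
    return new

def simulate(temps, n_steps):
    """March forward in time for n_steps successive time steps."""
    for _ in range(n_steps):
        temps = step(temps)
    return temps

# A warm anomaly placed in one cell spreads across the grid over time,
# while the total heat in the system is conserved.
grid = [0.0] * 10
grid[5] = 1.0
final = simulate(grid, 100)
```

Doubling the number of cells in a sketch like this roughly doubles the work per time step; in three dimensions, halving the grid spacing multiplies the work many times over, which is why model resolution is so tightly constrained by computing resources.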
Because of the relatively coarse spatial and temporal resolutions of the models, there are many important processes that occur on scales that are smaller than the model resolution (such as clouds and rainfall; see inset in Figure 1). These subgrid-scale processes are represented using ‘parameterizations.’ Parameterizations of subgrid-scale processes are simple formulas based on observations or derivations from more detailed process models. These parameterizations are ‘calibrated’ or ‘tuned’ so that the climate models perform adequately when compared with historical observations.
The actual equations used in the GCM computer codes are only approximations of the physical processes that occur in the climate system. While some of these approximations are highly accurate, others are unavoidably crude. This is because the real processes they represent are either poorly understood or too complex to include in the model given the constraints of the computer system. Of the processes that are most important for climate change, parameterizations related to clouds and precipitation remain the most challenging, and are the greatest source of disagreement among different GCMs.
GCMs are used for the following purposes:
- Simulation of present and past climate states to understand planetary energetics and other complex interactions
- Numerical experiments to understand how the climate system works. Sensitivity experiments are used to turn off, constrain or enhance certain physical processes or external forcings (e.g. CO2, volcanoes, solar output) to see how the system responds.
- Understanding the causes of past climate variability and change (e.g. how much of the change can be attributed to human causes such as CO2, versus natural causes such as solar variations, volcanic eruptions, and slow circulations in the ocean).
- Simulation of future climate states, from decades to centuries, e.g. simulations of future climate states under different emissions scenarios.
- Prediction and attribution of the statistics of extreme weather events (e.g. heat waves, droughts, hurricanes)
- Projections of future regional climate variations to support decision making related to adaptation to climate change
- Guidance for emissions reduction policies
- Projections of future risks of black swan events (e.g. climate surprises)
The specific objectives of a GCM simulation vary with its purpose. Generally, when simulating the past climate using a GCM, the objective is to correctly simulate the spatial variation of climate conditions in some average sense. When predicting future climate, the aim is not to simulate conditions in the climate system on any particular day, but to simulate conditions over a longer period—typically decades or more—in such a way that the statistics of the simulated climate will match the statistics of the actual future climate.
There are more than 20 climate modeling groups internationally that contribute climate model simulations to the IPCC Assessment Reports. Further, many of the individual climate modeling groups contribute simulations from multiple different models. Why are there so many different climate models? Is it possible to pick a ‘best’ climate model?
There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the submodels, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests, e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus computational resources on a particular aspect of simulating the climate system, at the expense of others.
Is it possible to select a ‘best’ model? Well, several models generally show poorer performance overall when compared with observations. However, the best model depends on how you define ‘best’, and no single model is the best at everything. The more germane issue is to assess a model’s ‘fitness for purpose’, which is addressed in Sections 2-4.
The reliability of climate models
Because of the complexity of GCMs, the notion of a correct or incorrect model is not well defined. The relevant issue is how well the model reproduces reality and whether the model is fit for its intended purpose.
Statistician George Box famously stated: “All models are wrong but some are useful.” All models are imperfect; we don’t need a perfect model, just one that serves its purpose. Airplanes are designed using models that are inadequate in their ability to simulate turbulent flow. Financial models based upon crude assumptions about human behavior have been used for decades to manage risk. In the decision making process, models are used more or less depending on a variety of factors, one of which is the credibility of the model.
Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations. GCMs were originally designed as a tool to help understand how the climate system works. GCMs are used by researchers to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a complex system of equations that would otherwise be impenetrable, and explore the system to identify unexpected outcomes. As such, GCMs are an important element of climate research.
Why do scientists have confidence in climate models?
Scientists that evaluate climate models, develop physical process parameterizations, and utilize climate model results are convinced (at least to some degree) of the usefulness of climate models for their research. They are convinced because of the model’s relation to theory and physical understanding of the processes involved, consistency of the simulated responses among different models and different model versions, and the ability of the model and model components to simulate historical observations.
The culture of establishing confidence in climate models is illuminated by considering numerical weather prediction models. Roger Pielke Jr. provides an interesting perspective on this in The Climate Fix:
Decision makers, including most of us as individuals, have enough experience with weather forecasts to be able to reliably characterize their uncertainty and make decisions in the context of that uncertainty. In the U.S., the National Weather Service issues millions of forecasts every year. This provides an extremely valuable body of experience for calibrating forecasts in the context of decisions that depend on them. The remarkable reduction in loss of life from weather events over the past century is due in part to improved predictive capabilities, but just as important has been our ability to use predictions effectively despite their uncertainties.
This same general strategy for developing confidence is being extended to seasonal climate prediction models, which are based on coupled atmosphere/ocean models. However, on seasonal timescales, skill is assessed in terms of monthly- or seasonally-averaged values. Since the same general formulations for the atmosphere and ocean are used for models across the range of time scales, confidence from the weather and seasonal climate forecast models is transferred to the climate models used in century-scale simulations. Caution is needed, however, in this transfer of confidence, since factors that matter little in weather models become significant on the longer timescales of climate simulations.
User confidence in a forecast model depends critically on the evaluation of the forecasts, both using historical data (hindcasts) and actual forecasts. Evaluation of forecasts is feasible for short time horizons (e.g. weather forecasts). Capturing the phenomena in hindcasts and previous forecasts is a necessary, but not sufficient, condition for the model to capture the phenomena in the future.
Why are some scientists concerned about the reliability of climate models?
Uncertainties in GCMs arise from uncertainty in model structure, model parameters and parameterizations, and initial conditions. Uncertainties in parameter values include uncertain constants and other parameters, subgrid-scale parameterizations (e.g. clouds), and ad hoc modeling to compensate for the absence of neglected factors. Calibration is necessary to address parameters that are unknown or inapplicable at the model resolution, and also in the linking of submodels. As the complexity of a model grows, model calibration becomes unavoidable and an increasingly important issue. A calibration required in one model may not be required in another model that has greater structural adequacy or higher resolution. Continual ad hoc adjustments of the model (calibration) can mask underlying deficiencies in model structural form. However, it should be noted that in a climate model with millions of degrees of freedom (i.e. different variables, grid cells), it is impossible to tune the model to provide a correct 4D solution of many variables at the same time.
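The calibration process can be illustrated with a deliberately simplified sketch. Everything here is hypothetical – a toy ‘model’ with one uncertain parameter, and made-up ‘observations’ – but it captures the essential move: the parameter value is chosen because it makes the model match the observed record, not because it was measured independently.

```python
# Hypothetical sketch of model "tuning": a toy model with one uncertain
# parameter is adjusted until its output best matches a (made-up)
# observed record. No real climate data or model is involved.

def toy_model(forcing_series, sensitivity):
    """Toy climate: temperature response is the forcing scaled by an uncertain parameter."""
    return [sensitivity * f for f in forcing_series]

def mismatch(simulated, observed):
    """Sum of squared differences between simulation and observations."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def tune(forcing_series, observed, candidates):
    """Pick the parameter value whose simulation best matches the observations."""
    return min(candidates, key=lambda s: mismatch(toy_model(forcing_series, s), observed))

forcing = [0.0, 0.5, 1.0, 1.5, 2.0]          # hypothetical forcing history
observed = [0.05, 0.42, 0.81, 1.18, 1.62]    # hypothetical "observations"
best = tune(forcing, observed, [0.2 * k for k in range(1, 21)])
```

The tuned model now matches the record well, but that agreement by itself says nothing about whether the model’s structure is right: many different models, with different parameter values compensating for different structural errors, could achieve the same fit.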
Concerns about evaluating climate models have been raised in the context of model calibration/tuning practices; see in particular this recent paper by IPCC coauthors Mauritsen et al. on climate model tuning:
“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.”
A remarkable article was recently published in Science: “Climate scientists open up their black boxes to scrutiny.”
“Indeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held, a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modeling center, in Princeton, New Jersey.”
We are now in a situation whereby matching the 20th century historic temperatures is no longer a good metric for determining which models are good or bad. The implication is that models that match 20th century data as a result of model calibration/tuning are of dubious use for determining the causes of 20th century climate variability.
Agreement between model forecasts/hindcasts and data does not imply that the model gets the correct answer for the right reasons. For example, all of the coupled climate models used in the IPCC Fourth Assessment Report reproduce the time series for the 20th century of globally averaged surface temperature anomalies; yet they have different feedbacks and sensitivities and produce markedly different simulations of the 21st century climate. Success in reproducing past states provides only a limited kind of confidence in simulation of future states.
Broader concerns about climate models have been raised by scientists, engineers and modelers from other fields, outside of climate science. These concerns have been raised in guest posts and comments made at my blog Climate Etc. (judithcurry.com) and also the blog Climate Audit (climateaudit.org). These concerns include:
- GCM predictions of the impact of increasing CO2 on climate cannot be rigorously evaluated for a century or more, while climate model development cycles are only a few years long, with new model versions emerging every few years.
- Insufficient exploration of model & simulation uncertainty
- Impenetrability of the model and formulation process; extremely large number of modeler degrees of freedom in terms of selecting parameters and parameterizations
- Lack of formal model verification & validation that is the norm for engineering and regulatory science
- Circularity in arguments validating climate models against observations, owing to model tuning/calibration.
- Concerns about a fundamental lack of predictability in a complex nonlinear system characterized by spatio-temporal chaos with changing boundary conditions.
What are the failings of climate models?
As they have matured, GCMs are being increasingly used to provide information to policy makers. Climate model simulations are being used as the basis for international climate and energy policy, so it is important to assess the adequacy of climate models for this purpose. In particular, GCM fitness needs to be assessed for:
- Understanding the causes of 20th century climate change
- Simulation of climate states in the 21st century under different emissions scenarios.
An assessment of the fitness of GCMs for these two purposes will be provided at the end of this section and also in section 4. The focus of this section is on two general topics where GCM simulations are inadequate:
- determination of climate sensitivity to increasing CO2, including the fast thermodynamic feedbacks related to clouds and water vapor that amplify the model sensitivity
- the chaotic nature of the climate system and internal climate variability
Climate sensitivity to CO2
Human-caused warming depends not only on the amount of increase in greenhouse gases but also on how ‘sensitive’ the climate is to these increases. Climate sensitivity is defined as the global surface warming that occurs when the concentration of carbon dioxide in the atmosphere doubles. If climate sensitivity is high, then we can expect substantial warming in the coming century as emissions continue to increase. If climate sensitivity is low, then future warming will be substantially lower.
The equilibrium climate sensitivity (ECS) is defined as the change in global mean surface temperature at equilibrium that is caused by a doubling of the atmospheric CO2 concentration. The IPCC AR4 (2007) conclusion on climate sensitivity is stated as:
“The equilibrium climate sensitivity . . . is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C and is very unlikely to be less than 1.5°C. Values higher than 4.5°C cannot be excluded.”
The IPCC AR5 (2013) conclusion on climate sensitivity is stated as:
“Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence)”
This likely range of ECS values varies by a factor of 3. Whether human-caused global warming is dangerous depends critically on whether the ECS value is closer to 1.5°C or 4.5°C. Research over the past 3 decades has not narrowed this range of ECS – the 1979 National Academy of Sciences study – the so-called Charney Report – cited a likely range for ECS of 1.5 to 4.5°C.
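To see how the factor-of-3 spread in ECS translates into projected warming, one can use the standard logarithmic relationship between CO2 concentration and warming, under which equilibrium warming scales as ECS × log2(C/C0). The concentrations in the sketch below are round illustrative numbers.

```python
import math

# Illustrative use of the standard logarithmic forcing relationship:
# equilibrium warming for CO2 concentration c, relative to baseline c0,
# scales as ECS * log2(c / c0). Concentrations are round example numbers.

def equilibrium_warming(ecs, c, c0=280.0):
    """Equilibrium warming (deg C) implied by an ECS value at CO2 level c (ppm)."""
    return ecs * math.log(c / c0) / math.log(2.0)

# At a doubling of CO2 (560 ppm vs a 280 ppm baseline), the low and high
# ends of the IPCC 'likely' ECS range give warming estimates that differ
# threefold -- the entire factor-of-3 spread carries straight through.
low = equilibrium_warming(1.5, 560.0)
high = equilibrium_warming(4.5, 560.0)
```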
In fact, it seems that uncertainty about values of ECS has been increasing. The bottom of the ‘likely’ range has been lowered from 2°C to 1.5°C in the AR5, whereas the AR4 stated that ECS is very unlikely to be less than 1.5°C. It is also significant that the AR5 does not cite a best estimate, whereas the AR4 cites a best estimate of 3°C. The stated reason for not citing a best estimate in the AR5 is the substantial discrepancy between observation-based estimates of ECS (lower) versus estimates from climate models (higher).
Table 1 compares the values of ECS determined by the IPCC AR4 (2007), the IPCC AR5 (2013), the CMIP5 climate models cited in the IPCC AR5 (2013), the observational analysis of Lewis and Curry (2014), and the update by Lewis (2015) with lower aerosol forcing.
Table 1: Values of equilibrium climate sensitivity (ECS) (°C)
Lewis and Curry (2014) found values of ECS approximately half that determined from the CMIP5 climate models. Using an observation-based energy balance approach, Lewis and Curry’s calculations used the same data (including uncertainties) for changes in greenhouse gases, aerosols and other drivers of climate change given by the IPCC AR5. Lewis and Curry’s range for ECS is much narrower, with far lower upper limits, than reported by the IPCC AR5. Other recent papers also find comparably low values of ECS.
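The energy balance approach can be summarized in a single formula: ECS = F2x × ΔT / (ΔF − ΔQ), where F2x is the forcing from a CO2 doubling, ΔT the observed surface warming between a base and a final period, ΔF the change in radiative forcing, and ΔQ the change in the rate of heat uptake by the climate system (mainly the ocean). The sketch below uses illustrative input values, not the actual figures from Lewis and Curry (2014).

```python
# Sketch of the observation-based energy balance approach:
#   ECS = F_2x * dT / (dF - dQ)
# where F_2x is the forcing from a CO2 doubling, dT the observed warming
# between two periods, dF the change in forcing, and dQ the change in
# system (mainly ocean) heat uptake. Input values below are illustrative,
# not the actual figures from Lewis and Curry (2014).

F_2X = 3.71  # approximate radiative forcing from a CO2 doubling (W/m^2)

def energy_balance_ecs(d_temp, d_forcing, d_heat_uptake):
    """Equilibrium climate sensitivity implied by observed changes between two periods."""
    return F_2X * d_temp / (d_forcing - d_heat_uptake)

ecs = energy_balance_ecs(d_temp=0.75, d_forcing=2.3, d_heat_uptake=0.6)

# Weaker aerosol cooling implies a larger net change in forcing (dF),
# which mechanically lowers the estimated ECS.
ecs_weaker_aerosol = energy_balance_ecs(d_temp=0.75, d_forcing=2.6, d_heat_uptake=0.6)
```

The second calculation shows, in miniature, why revised (smaller) aerosol cooling estimates pull observation-based ECS values downward.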
The latest research suggests even lower values of the equilibrium climate sensitivity. The greatest uncertainty in ECS estimates is accounting for the effects of small aerosol particles in the atmosphere, which have a cooling effect on the climate (partially counteracting the greenhouse warming). A new paper by Stevens constrains the impact of aerosols on climate to be significantly smaller than assumed in the IPCC AR5. Nicholas Lewis has re-run the calculations used in Lewis and Curry (2014) using aerosol impact estimates in line with Stevens’ paper. Most significantly, the upper bound (95th percentile) is lowered to 2.38°C (Table 1).
At the recent international Workshop on Earth’s Climate Sensitivity, concerns were raised about the upper end of the Lewis and Curry sensitivity being too low, owing to uncertainties in ocean heat uptake. Many of the climate model simulations used for the AR5 (CMIP5) used values of aerosol forcing that are now known to be far too high. Climate model simulations that are re-assessed and re-calibrated to account for smaller values of aerosol forcing can be used to clarify the upper bound of ECS. In a presentation at the Workshop, IPCC lead author Bjorn Stevens argued for an upper bound to ECS of 3.5°C based on analyses of climate models. Research continues to assess the methods used to estimate climate sensitivity. However, the reduced estimates of aerosol cooling lead inescapably to reductions in the estimated upper bound of climate sensitivity.
What is the source of the discrepancies in ECS among different climate models, and between climate models and observations? In a paper entitled “What are Climate Models Missing?” Stevens and Bony argue that:
“There is now ample evidence that an inadequate representation of clouds and moist convection, or more generally the coupling between atmospheric water and circulation, is the main limitation in current representations of the climate system.”
What are the implications of these discrepancies in the values of ECS? If the ECS is less than 2°C, versus more than 4°C, then the conclusions regarding the causes of 20th century warming and the amount of 21st century warming are substantially different.
Further, the discrepancy between observational and climate model-based estimates of climate sensitivity is substantial and of significant importance to policymakers. Equilibrium climate sensitivity, and the level of uncertainty in its value, is a key input into the economic models that drive cost-benefit analyses and estimates of the social cost of carbon. In spite of the IPCC AR5 assessment (where a ‘best value’ was not given) and this recent research on climate sensitivity, economists calculating the social cost of carbon and the impacts of emissions reductions on climate continue to use the ‘best value’ of ECS = 3°C determined by the 2007 IPCC AR4 Report.
Chaos and natural internal climate variability
Variations in climate can be caused by external forcing (e.g. solar variations, volcanic eruptions or changes in atmospheric composition such as an increase in CO2). Climate can also change owing to internal processes within the climate system (internal variability). The best-known example of internal climate variability is El Nino/La Nina. Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘flywheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate. As a result of the time lags and storage of heat in the ocean, the climate system is never in equilibrium.
With regard to multi-decadal natural internal variability, previous IPCC reports consider this issue primarily in the context of detecting a human-caused warming signal above the background ‘noise’ of natural variability. However, other interpretations of the climate system argue that natural internal variability constitutes the intrinsic climate signal.
Many processes in the atmosphere and oceans are nonlinear, which means that there is no simple proportional relation between cause and effect. The nonlinear dynamics of the atmosphere and oceans are described by the Navier-Stokes equations, based on Newton’s Laws of Motion, which form the basis of predicting winds and circulation in the atmosphere and oceans. The solution of the Navier-Stokes equations is one of the most challenging problems in all of mathematics: the Clay Mathematics Institute has declared it one of its seven Millennium Prize Problems and is offering a $1M prize for its solution.
Arguably the most fundamental challenge with climate models lies in the coupling of two chaotic fluids – the ocean and the atmosphere. Weather has been characterized as being in a state of deterministic chaos, owing to the sensitivity of weather forecast models to small perturbations in the initial conditions of the atmosphere. The source of the chaos is nonlinearities in the Navier-Stokes equations. A consequence of sensitivity to initial conditions is that beyond a certain time the system is no longer predictable; for weather this predictability time scale is a matter of weeks. Climate model simulations are also sensitive to initial conditions (even in an average sense). Coupling a nonlinear, chaotic atmospheric model to a nonlinear, chaotic ocean model gives rise to something much more complex than the deterministic chaos of the weather model, particularly under conditions of transient forcing (such as increasing concentrations of CO2). Coupled atmosphere/ocean modes of internal variability arise on timescales of weeks, years, decades, centuries and millennia. These coupled modes give rise to bifurcation, instability and chaos. Characterizing such phenomena arising from transient forcing of the coupled atmosphere/ocean system defies classification by current theories of nonlinear dynamical systems, particularly in situations involving transient changes of parameter values. Stainforth et al. (2007) refer to this situation as “pandemonium.”
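Sensitivity to initial conditions is easy to demonstrate. The sketch below integrates the classic Lorenz (1963) system – a three-variable caricature of atmospheric convection, not a climate model – and shows that two trajectories differing initially by one part in a hundred million end up bearing no resemblance to each other.

```python
# The Lorenz (1963) system: a three-variable caricature of atmospheric
# convection, not a climate model, used here only to demonstrate
# sensitivity to initial conditions.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def integrate(state, n_steps):
    """March the system forward n_steps time steps."""
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

# Two runs whose starting points differ by one part in 10^8.
a = integrate((1.0, 1.0, 1.0), 3000)
b = integrate((1.0 + 1e-8, 1.0, 1.0), 3000)

# After 3000 steps the tiny initial difference has grown enormously;
# the two trajectories are effectively unrelated.
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
```

This is the mechanism behind the weeks-long predictability limit for weather; the coupled atmosphere/ocean problem discussed above is harder still.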
Fitness for purpose: attribution of 20th century warming
So, what does this mean for the fitness for purpose of climate models, to determine the causes of the recent warming?
The combination of uncertainty in the climate sensitivity and the uncertainties in the magnitude and phasing of the major modes in natural internal variability preclude an unambiguous separation of externally forced climate variations from natural internal climate variability. If the climate sensitivity is on the low end of the range of estimates, and natural internal variability is on the strong side of the distribution of climate models, different conclusions are drawn about the relative importance of human causes to the 20th century warming.
The key conclusion of the 2013 IPCC AR5 Report is that it is extremely likely that more than half of the warming since 1950 has been caused by humans, and climate model simulations indicate that all of this warming has been caused by humans.
Global surface temperature anomalies since 1850 are shown below.
Figure 2: Global surface temperature anomalies from the UK HadCRUT4 dataset http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4.pdf
If the warming since 1950 was caused by humans, what caused the warming during the period 1910-1945? The period 1910-1945 comprises about 40% of the warming since 1900, but is associated with only 10% of the carbon dioxide increase since 1900. Clearly, human emissions of greenhouse gases played little role in causing this early warming. The mid-century period of slight cooling from 1945 to 1975 – referred to as the ‘grand hiatus’ – also has not been satisfactorily explained.
Apart from these unexplained variations in 20th century temperatures, there is evidence that the global climate has been warming overall for the past 200 years, or even longer. While historical data becomes increasingly sparse in the 19th century, the Berkeley Earth Surface Temperature Project has assembled the available temperature data over land, back to 1750:
Figure 3: Global land surface temperature anomalies since 1750, smoothed with a 10 year filter.
The Berkeley Earth analysis shows a warming trend back to 1800, with considerable variability around the turn of the 19th century. Some of this variability can be attributed to large volcanic eruptions; this was also the time of the Dalton solar activity minimum (1791-1825). Paleoclimate reconstructions of Northern Hemisphere climate – such as from tree rings and boreholes – indicate that overall warming may have occurred for the past 300-400 years. Humans contributed little if anything to this early global warming. What could be the cause of a 200-400 year period of secular warming? The obvious places to look are the sun and the ocean. Ocean circulation patterns also influence climate on century to millennial time scales. Sun-climate connections are receiving renewed interest, as evidenced by the National Academies Workshop Report “The Effects of Solar Variability on Earth’s Climate”. Understanding and explaining the climate variability over the past 400 years, prior to 1950, has received far too little attention. Without this understanding, we should place little confidence in the IPCC’s explanations of warming since 1950 – it is too easy to get the ‘right’ answer for the wrong reasons.
Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but whose potential magnitude is highly uncertain. What does the preceding analysis imply for IPCC’s ‘extremely likely’ attribution of anthropogenically caused warming since 1950? Climate models infer that all of the warming since 1950 can be attributed to humans. However, there have been large magnitude variations in global/hemispheric climate on timescales of 30 years, which are the same duration as the late 20th century warming. The IPCC does not have convincing explanations for previous 30 year periods in the 20th century, notably the warming 1910-1945 and the grand hiatus 1945-1975. Further, there is a secular warming trend at least since 1800 (and possibly as long as 400 years) that cannot be explained by CO2, and is only partly explained by volcanic eruptions.
Given the uncertainties in equilibrium climate sensitivity and the magnitude and phasing of natural internal variability on decadal to century timescales, combined with the failure of climate models to explain the early 20th century warming and the mid-century cooling, I conclude that the climate models are not fit for the purpose of identifying with high confidence the proportional amount of natural versus human causes to the 20th century warming.
Are GCMs a reliable tool for predicting climate change?
The IPCC has made dire predictions that we can expect 4°C or more of warming by the end of the 21st century if carbon dioxide emissions are not reduced. How well have climate models done in simulating the early 21st century climate variability?
Projections of warming for the early 21st century
Chapter 11 of the IPCC AR5 Report focused on near-term climate change, through 2035. Figure 4 compares climate model projections with recent observations of global surface temperature anomalies.
Figure 4. Comparison of CMIP5 climate model simulations of global surface temperature anomalies with observations through 2014 (HadCRUT4). Figure 11.25 of the IPCC AR5
The observed global temperatures for the past decade are at the bottom bound of the 5-95% envelope of the CMIP5 climate model simulations. Overall, the trend in the climate model simulations is substantially larger than the observed trend over the past 15 years.
Regarding projections for the period 2015-2035, the 5-95% range for the trend of the CMIP5 climate model simulations is 0.11°C–0.41°C per decade. The IPCC then cites ‘expert judgment’ as the rationale for lowering the projections (indicated by the red hatching in Figure 4):
“However, the implied rates of warming over the period from 1986–2005 to 2016–2035 are lower as a result of the hiatus: 0.10°C–0.23°C per decade, suggesting the AR4 assessment was near the upper end of current expectations for this specific time interval.”
This lowering of the projections relative to the results from the raw CMIP5 model simulations was done based on expert judgment that some models are too sensitive to anthropogenic (CO2 and aerosol) forcing.
IPCC author Ed Hawkins, who originally created the above figure, has updated it with surface temperature observations through 2015: Figure 5. Comparison of CMIP5 climate model simulations of global surface temperature anomalies with observations through 2015 (HadCRUT4). Updated from Figure 11.25 of the IPCC AR5, to include observations through 2015. http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/
The spike in global temperatures from the 2015 El Nino helps improve the agreement between models and observations, but not very much. The 2015 temperature spike does not even reach the midpoint of the climate models, whereas the 1998 El Nino temperature spike was at the top of the envelope of temperature predictions. The bottom line conclusion is that so far in the 21st century, the global climate models are warming, on average, about a factor of 2 faster than the observed temperature increase.
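The ‘factor of 2’ statement is just a ratio of least-squares linear trends. A minimal sketch of the arithmetic, using invented numbers chosen only to illustrate the comparison (these are not real HadCRUT4 or CMIP5 values):

```python
import numpy as np

# Illustrative (synthetic) annual global-mean temperature anomalies, 2000-2014.
# The "observed" series warms at 0.1 C/decade and the "model-mean" series at
# 0.2 C/decade; the numbers are made up purely to show the trend arithmetic.
years = np.arange(2000, 2015)
observed = 0.40 + 0.010 * (years - 2000)    # 0.010 C/yr = 0.10 C/decade
model_mean = 0.40 + 0.020 * (years - 2000)  # 0.020 C/yr = 0.20 C/decade

# A least-squares linear fit gives the trend in C per year; x10 for per decade.
obs_trend = np.polyfit(years, observed, 1)[0] * 10
mod_trend = np.polyfit(years, model_mean, 1)[0] * 10

print(f"observed trend:    {obs_trend:.2f} C/decade")
print(f"model-mean trend:  {mod_trend:.2f} C/decade")
print(f"ratio (model/obs): {mod_trend / obs_trend:.1f}")
```

With real data the observed series is noisy (El Nino spikes, volcanic dips), so the fitted trend depends on the start and end years chosen; that sensitivity is part of why such comparisons are contested.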
The discrepancy between observations and model simulations in the early 21st century appears to be caused by a combination of inadequate simulations of natural internal variability and oversensitivity of the models to increasing CO2 (ECS). Multi-decadal ocean oscillations (natural internal variability) play a dominant role in determining climate on decadal timescales. The Atlantic Multidecadal Oscillation (AMO) is currently in its warm phase, with a shift to the cool phase expected to occur sometime in the 2020s. Climate models, even when initialized with ocean data, have a difficult time simulating the amplitude and phasing of the ocean oscillations. In a paper that I coauthored, we found that most of the CMIP5 climate models, when initialized with ocean data, show some skill out to 10 years in simulating the AMO. Tung and Zhou argue that not taking the AMO into account in predictions of future warming under various forcing scenarios may run the risk of over-estimating the warming for the next two to three decades, when the AMO is likely to be in its cool phase.
Projections for the year 2100
Climate model projections of global temperature change at the end of the 21st century are driving international negotiations on CO2 emissions reductions, under the auspices of the UN Framework Convention on Climate Change (UNFCCC). Figure 6 shows climate model projections of 21st century warming. RCP8.5 reflects an extreme scenario of increasing emissions of greenhouse gases, whereas RCP2.6 is a scenario where emissions peak around 2015 and are rapidly reduced thereafter. Figure 6: Figure SPM.7 of the IPCC AR5 WG1. CMIP5 multi-model simulated time series from 1950 to 2100 for change in global annual mean surface temperature relative to 1986–2005. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. The mean and associated uncertainties averaged over 2081−2100 are given for all RCP scenarios as colored vertical bars.
Under the RCP8.5 scenario, the CMIP5 climate models project continued warming through the 21st century that is expected to surpass the ‘dangerous’ threshold of 2°C warming as early as 2040. It is important to note that the CMIP5 simulations only consider scenarios of future greenhouse gas emissions – they do not include consideration of scenarios of future volcanic eruptions, solar variability or long-term oscillations in the ocean. Russian scientists argue that we can expect a Grand Solar Minimum (contributing to cooling) peaking in the mid-21st century.
While the near-term temperature projections were lowered relative to the CMIP5 simulations (Figure 4), the IPCC AR5 SPM states with regard to extended-range warming:
“The likely ranges for 2046−2065 do not take into account the possible influence of factors that lead to the assessed range for near-term (2016−2035) global mean surface temperature change that is lower than the 5−95% model range, because the influence of these factors on longer term projections has not been quantified due to insufficient scientific understanding.”
There is a troubling internal inconsistency in the IPCC AR5 WG1 Report: the AR5 acknowledges substantial uncertainty in climate sensitivity and substantially lowered its projections for 2016-2035 relative to the climate model projections, yet its projections out to 2100 rely on climate models that are clearly running too hot. Even more troubling is that the IPCC WG3 report – Mitigation of Climate Change – conducted its entire analysis assuming a ‘best estimate’ of equilibrium climate sensitivity of 3.0°C.
The IPCC AR5 declined to select a ‘best estimate’ for equilibrium climate sensitivity, owing to discrepancies between climate model estimates and observational estimates (that are about half the magnitude of the climate model estimates). Hence the CMIP5 models produce warming that is nominally twice as large as the lower values of climate sensitivity would produce. No account is made in these projections of 21st century climate change for the substantial uncertainty in climate sensitivity that is acknowledged by the IPCC.
The IPCC’s projections of 21st century climate change explicitly assume that CO2 is the control knob on global climate. Climate model projections of the 21st century climate are not convincing because of:
- Failure to predict the warming slowdown in the early 21st century
- Inability to simulate the patterns and timing of multidecadal ocean oscillations
- Lack of account for future solar variations and solar indirect effects on climate
- Neglect of the possibility of volcanic eruptions that are more active than the relatively quiet 20th century
- Apparent oversensitivity to increases in greenhouse gases
There is growing evidence that climate models are warming too much and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC. Nevertheless, these lower values of climate sensitivity are not accounted for in IPCC climate model projections of temperature at the end of the 21st century or in estimates of the impact on temperatures of reducing CO2 emissions.
The IPCC climate model projections focus on the response of the climate to different scenarios of emissions. The 21st century climate model projections do not include:
- a range of scenarios for volcanic eruptions (the models assume that volcanic activity will be comparable to the 20th century, which had much lower volcanic activity than the 19th century)
- a possible scenario of solar cooling, analogous to the solar minimum being predicted by Russian scientists
- the possibility that climate sensitivity is a factor of two lower than that simulated by most climate models
- realistic simulations of the phasing and amplitude of decadal to century scale natural internal variability.
The climate modeling community has been focused on the response of the climate to increased human caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of the above or their potential impacts on the evolution of the 21st century climate.
1). Consider getting this reviewed by a couple of lawyers to see if they understand it. As a geoscientist, I understand it, but you perhaps need to remove as much jargon and technical terminology as is credible and use more plain English to get your points across to them. (If you like, I can forward it to a lawyer friend for feedback.)
2). Consider including something about diurnal temperature changes because everyone experiences it. In most places, it is greater than the increased temps that IPCC and policy makers are concerned about.
George Devries Klein, PhD, PG, FGSA
A few lawyers are tweeting this to get feedback, hopefully some show up here
For lawyers, brevity is the soul of wit unless you are getting paid by the folio. If you were my expert witness, this report, which is excellent from a scientific viewpoint, would be greatly condensed, and a summary would be necessary.
I was a retired lawyer, and I understand this summary without any problem. But I’m also a retired academic (DPhil) who has been casually following the climate debate for 25 years or more, so I might not be representative…
One thought: the most important climate models for advising policy are the IAMs. These should be included in the analysis. The damage functions are critically important and need to be rigorously assessed and validated.
agreed, but the focus of this is on GCM’s
If the request is specifically for climate models then the answer has to be limited (As a consultant I’ve found myself in the same “expert in a cage” circumstances).
Dr Curry: maybe you should add a paragraph explaining why the models are judged using the T anomaly, and not the actual temperature (which has tremendous variability within the AR5 ensemble). I would also add a paragraph explaining that the models require emissions scenarios, which are outside your scope but are critically important.
I’d add an adjective before “computational”, e.g.:
…Just to remind your audience just how limited they are. Unlike people who regularly work with computer systems, they may not have internalized your earlier reference to the limitation.
its a technical term. GIYF
I know what “skill” is Steven. I was writing from the point of a lawyer reading the document for the first time.
Perhaps the words ‘dynamic / interactive’ in the opening
sentence describing our complex climate system? The bits
in isolation do not make up the case.
And lawyers focus on the evidence. This from Climate Audit, on the AR4 discrepancy between model projections and observations.
I’m sticking with surface temperature rather than the satellite data, since it is more easily understood
I think this is a mistake. Surface data have so many local influences, as you and others have pointed out, that their reliability is not great. The lawyers who can understand the value of satellite data will get it, the others won’t be reading your piece in full in any case.
Depending on what these lawyers intend to use your article for, I would likely make it much shorter, or at least include a shorter summary. If these are IP lawyers they can read and understand a great deal of what you have written. If PI (personal injury) lawyers or similar, their eyes will glaze over after only a few paragraphs. I deal with dozens of lawyers and while many are quite capable of understanding the arguments, most are not.
When making the point that the models are tuned to 20th century observations you might make mention that the record has been diverging from the satellite record since its start in Dec. 1978.
Ron yes, just about the time satellite teams switched sensors…
I think you can get lemonade out of the surface data, I’m just not sure this is it.
Should some mention of the comparison(s) involved in “evaluation” be included right here? (I probably would, depending on my perception of my audience.)
you are raising an important issue, but not clear how to explain this concisely
I know. That’s part of why it’s complicated.
Perhaps add a sentence such as:
That does lead to further questions, but that’s unavoidable. At least you’ve given your audience a hint of the direction.
The models vary over 1°C in absolute temperature.
If you increase the initial conditions of a low-temperature model by 1°C to match the initial conditions of a “high temperature” model, the model, to my understanding, simply goes wild.
There is a fundamental problem with permitting models to match the delta T without matching the absolute temperature.
Matching the delta T without matching the absolute temperature is curve fitting not modeling.
If an analog circuit simulator only produced an “accurate” simulation with a +1 Volt DC bias, and somebody claimed it was accurately modeling the circuit, they would be greeted with howls of laughter.
Similar failures in Climate modeling are treated as acceptable. There is a fundamental problem with Climate Scientists and their inability to distinguish between success and failure.
The models need to be evaluated by an outside agency with more common sense and better judgment.
Including a discussion of this issue would be helpful.
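The anomaly-versus-absolute-temperature point above can be illustrated in a few lines. The numbers below are invented purely to show why a 1°C disagreement in absolute temperature vanishes entirely in anomaly space:

```python
import numpy as np

# Hypothetical illustration: two models whose absolute global-mean temperatures
# differ by a constant 1 C can still produce IDENTICAL anomaly series, because
# each anomaly is taken relative to that model's own baseline mean.
# All numbers here are invented for illustration.
years = np.arange(1986, 2006)
warming = 0.02 * (years - years[0])  # same 0.2 C/decade trend in both models

model_cold = 13.5 + warming          # absolute global-mean temperature, C
model_warm = 14.5 + warming          # 1 C warmer in absolute terms

# Anomalies relative to each model's OWN 1986-2005 mean.
anom_cold = model_cold - model_cold.mean()
anom_warm = model_warm - model_warm.mean()

print(np.allclose(anom_cold, anom_warm))            # True: identical anomalies
print(round(model_warm.mean() - model_cold.mean(), 6))  # the hidden 1 C offset
```

This is the crux of the commenter's objection: anomaly-based evaluation cannot, even in principle, detect a constant absolute-temperature bias, yet temperature-dependent processes (ice melt, evaporation) respond to the absolute value.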
just make up more random stuff for Judith to include…
Steven Mosher: “just make up more random stuff for Judith to include…
Perhaps you could give her some pointers, Mosh.
After all, that’s pretty much your stock-in-trade, isn’t it?
I think section  and figure 4 in the following paper are quite interesting:
Tuning the climate of a global model
“If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.”
Quite interesting that many models seem to stabilise quite well at a temperature in a 50-year control run and at the same time are having quite huge Top Of the Atmosphere energy imbalance.
The other issue is evaporation at equatorial temperatures is grossly nonlinear. If the absolute temperature is too high the humidity has to be too high or the evaporation rate too high.
If the absolute value of the property (temperature) being simulated is incorrect the model parameterization must be skewed to force the model to match the known anomaly trend. It is curve fitting not modeling.
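The evaporation nonlinearity mentioned above can be made concrete with the Clausius–Clapeyron relation: saturation vapour pressure grows roughly exponentially with temperature (about 6–7% per degree near typical surface temperatures). The sketch below uses the standard Magnus approximation; the 28°C/29°C temperatures are illustrative of warm equatorial conditions, not taken from any model:

```python
import math

# Magnus approximation to saturation vapour pressure over water (in hPa).
# This is a standard empirical formula (Bolton-style constants), used here only
# to illustrate the steepness of the temperature dependence.
def saturation_vapour_pressure_hpa(t_celsius):
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (28.0, 29.0):
    print(f"{t} C -> {saturation_vapour_pressure_hpa(t):.1f} hPa")

ratio = saturation_vapour_pressure_hpa(29.0) / saturation_vapour_pressure_hpa(28.0)
print(f"a 1 C warm bias inflates saturation vapour pressure by ~{100*(ratio-1):.0f}%")
```

This is why a model with a warm absolute bias in the tropics must compensate somewhere – in humidity, evaporation rate, or tuned parameters – to match observed anomaly trends.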
Too long, too far into the weeds. This would put most people to sleep. This should be offered as an appendix to a more executive summary brief, eg:
Explain that model results are really a cloud of multiple model results. The change in overall global temperature is within the cloud of results.
Talk about GCMs’ failure to simulate regional effects, increased drought, floods, super storms, etc.
Talk about GCMs’ failure to include land use changes, biochemical effects, poorly understood particulates, major oceanic cycles, volcanic eruptions.
Identify the major policy issues, then write up how GCMs could be useful, somewhat useful (because uncertain), and not useful for each issue.
I agree, it is very long and drier than a James Bond martini. It comes alive with the charts and graphs, as the words are then turned into pictures.
As we don’t know the context its difficult to give too much constructive comment.
i know it is pretty dry and too long. First priority is to make it clear and understandable
Channel Trump the master persuader. Speak using visual words, keep it brief and simple.
The hardest thing to do is to be a technical weedeater, then have to obey professional standards while explaining it clearly and accurately to a lay audience.
I agree with the comment lower down about treating it as one of your blog posts which are invariably clear, to the point and laid out in a narrative fashion that makes them interesting.
your audience is specific, so presumably are used to the logic that a lot of information in climate science is circumstantial rather than necessarily evidence based. Where it is evidence based it is sometimes based on flimsy evidence that would not stand up in a court of law.
So write for your audience and as others have said use summary points.
Over the last 40 years we evolved these documents to a fairly long text (say 20 to 30 pages with further reading appendix). These are usually read by some in the target audience, and they serve as a reference for their staff. We include a 3-5 page executive summary, and a PowerPoint presentation intended to run for 45 minutes with additional back up slides in the back. In some cases I’ve used two slide packs, and ran the presentation using two projectors.
I was introduced to this type of high-level document writing when I was in my 20’s, and had to work as a peon for more senior personnel crafting presentations for audiences which ranged from management boards to bankers to ministers and in some cases country presidents.
What I found is that the lengthier document did get a lot of reading and exposure with selected individuals down the organization ladder, who in turn had to brief and serve as “working memories” for higher level types. For example, a few years ago I was hired to write a relatively short white paper and make a presentation for an audience which included government types. Over the subsequent weeks I received phone calls from staffers who had been in the room, didn’t have very high profiles, but were evidently digging very deep into the subject.
I’m the least technical person here.
I like dry.
Dry lacks BS.
Weeds are good because weedy shows how it ain’t settled
Trump won ’cause the other side talked down to people.
I learned a lot from this.
Lawyers are accustomed to perusing documents that are both verbose and prolix. That is to all intents and purposes their stock-in-trade.
If they don’t get plenty of turgidity they probably won’t consider they’re getting their money’s worth.
Compared to your average legal document, the above is a triumph of compactness and comprehensibility, not to mention a comprehensive, even-handed summary of the subject.
I am reminded of some advice I once received on selling technical ideas (which is what Judith is doing here): “assert, don’t explain; be prepared to explain”. Also a nit…climate models don’t “infer”, they “imply”. Lawyers will appreciate the difference.
In this case, they don’t “imply” either. Neither word really applies. Perhaps “assume”? Or “are trained to assume”?
“Imply” is correct.
“Indicate” is better.
Trust Willard for coming up with the most appropriate weasel words. :^)
Yeah, how about “wild guess” instead?
Dr. C., no one as yet has seemed to comment on just how wonderful this piece is! (WOWZA!!!) Can’t speak for everyone else here, but i’m bookmarking this baby for sure.
(thank you very much)…
Oops… “wild guess” should read “wildly guess” (don’t want the grammar police on my back side… ☺)
This is for Lawyers, “Allege is better”
Allege definition, to assert without proof
What are “feedbacks and sensitivities”
I need to get rid of that phrase here, agreed
I know you are done with a final draft, but I think feedbacks and sensitivities here was fine. A reader will have a general idea of what this means and this will alert them to look for those terms later for explanation.
Or a parenthetical “(see below)”. I thought about mentioning that, then remembered that Prof. Curry has created many documents, and probably would have considered all options once it was pointed out.
That’s usually how I do red-pencil jobs like this: draw something to attention and let the creator judge how/whether to deal with it.
I had to reread the section from “Lewis and Curry (2014)….” through to “2007 IPCC AR4 Report.” several times to get what you meant. (Assuming I actually got it!)
May I respectfully suggest it be reworded to be more like: “If we compare the ECS results given by five sources – IPCC AR4, IPCC AR5, CMIP5, Lewis and Curry (2014), and Lewis (2016) – we find estimates of ECS ranging between 1.45 (in which case reducing anthropogenic CO2 emissions would have relatively little effect on climate) and 3.22 (in which case reducing CO2 emissions would have a major impact on climate). The differences are due to differences in the way the models account for the effects of small aerosol particles in the atmosphere, which have a cooling effect on the climate.”
And then go to “A new paper by Stevens constrains the impact of aerosols on climate to be significantly smaller than assumed in the IPCC AR5. Nicholas Lewis has re-run the calculations used in Lewis and Curry (2014) using aerosol impact estimates in line with Stevens’ paper. Most significantly, the upper bound (95th percentile) is lowered from 3.22 to 2.38°C (Table 1).”
I feel I was able to read and understand the rest of the paper.
One other very minor note is that nonscientists have a hard time with the way we define something once and then use abbreviation without defining it again in a very long paper. I would occasionally write out the abbreviations for key concepts like ECS and GCM instead of abbreviating, perhaps as they come up at the beginning of each major section. I personally found I had to scroll back to look up what ECS is because there is so much content here that I forgot between sections.
Otherwise this seemed to me to be a really excellent and accessible report that is very well written. Congratulations!
Oh and assuming you have not already done so, prepare a single paragraph summary and a bullet point list with a one sentence reminder of each of the major point of each section so people sitting at a table discussing it can quickly move around the sections while discussing it.
‘What are “feedbacks and sensitivities.”
Some systems are clocks and some are clouds,
I know, I’m always saying they’re myths. But if you use the words to me, I understand.
Would it be the same for an audience of lawyers?
Great draft. From a quick scan, note that lawyers need clear explicit definitions of ALL technical terms on which to make logical distinctions. I strongly recommend stating the scientific definition of “climate change” per IPCC WG1 versus its political redefinition by the UNFCCC. Note that “climate change” is often used as an equivocation by implicitly appealing to the political definition, and then switching to accuse skeptics of being “anti-scientific” when they address evidence relative to the scientific definition. e.g. the UNFCCC Fact sheet: Climate change science – the status of climate change science today (and your quote.)
UNITED NATIONS FRAMEWORK CONVENTION ON CLIMATE CHANGE (UNFCCC) 1992
More after Monday.
Does your lawyer audience need a precise definition of “degrees of freedom”?
Is it possible the term implies other things to a lawyer than to a mathematical physicist? (I don’t know the answer to this, since I’ve understood it in these terms since high school. Perhaps ask a lawyer with less scientific training?)
“Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘fly wheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate.”
The AMO is distinctly modal, and seems to effectively function as an amplified negative feedback to solar wind strength variability and its effects on the NAO/AO.
If I were writing it, I would say “Research over the past 3 decades has failed to narrow […]”. Use of the word “failed” makes the point more strongly that it was expected to, and much of the policy response was planned based on that expectation.
Of course, it is somewhat tendentious. I’d still do it, myself.
“The best-known example of internal climate variability is El Nino/La Nina.”
I don’t believe it is internal variability. Slow solar wind periods consistently result in El Nino conditions and episodes.
“I don’t believe it is internal variability.”
It’s suspiciously close to a circular argument…
In fact internal variability is the circular argument, in the physical sense.
Not really. Internal variability is a ubiquitous feature of such hyper-complex non-linear systems. Including GCM’s. The idea that it’s also a feature of the real-world climate system is reasoning by analogy, but all science is, at its base, reasoning by analogy.
It makes sense to consider the presence of internal variability to be the default assumption till it’s proven not to be present in this particular case. Sort of a hard thing to prove when the “climate models” that are supposed to represent the real thing also exhibit it.
AK: ” Internal variability is a ubiquitous feature of such hyper-complex non-linear systems.”
That’s sounds entirely specious.
“The idea that it’s also a feature of the real-world climate system is reasoning by analogy”
“It makes sense to consider the presence of internal variability to be the default assumption till it’s proven not to be present in this particular case.”
No it does not, there is zero indication of internal forcing of the AMO.
“Sort of a hard thing to prove when the “climate models” that are supposed to represent the real thing also exhibit it.”
It cannot be proven to be internal, and the models do not exhibit it.
ulric lyons: “That’s sounds entirely specious.”
There’s a reason for that…
Does your lawyer audience understand the precise scientific definition of “constrains”? I notice the other two times the word is used it has a different definition.
I would add “the state of”, thus:
This might avoid all sorts of sniping.
You are not raising the value of water vapor in the atmosphere. I am in the ground level of understanding and fail to see any connection with what we people live with every day. Climate change and climate refers to archived temperature readings. If you fail to recognise the impact of all greenhouse gases in each of those readings, you will end up in a mess. To separate out CO2 as a culprit is not a good solution, rather a convenience. Basic modelling is fraudulent in that it does not give us the weather picture with daily temperatures, precipitation, tornado warnings, etc.
We do not need climate change as a policy object and a recipe for increased cost on the global population that cannot afford the extravagant ideas resident with elitist western ideas.
Simple def. : A bifurcation is a point – value of a parameter – where the phase portrait of a dynamical system changes from a sink to a source or neither, no particular order implied in this definition.
Justin, I don’t think that’s applicable in this use; it’s a point where the system (equation) resolves into two different points or answers for the same input stimulus.
Here are some bifurcations in the population equation. When the “forcing” reaches a certain threshold, the system can take on two different values.
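The ‘population equation’ bifurcation the comment describes can be shown in a few lines with the logistic map, x(n+1) = r·x·(1−x). Below r = 3 the iterates settle to a single fixed point; above it they flip between two values (a period-doubling bifurcation). The parameter values 2.8 and 3.2 are chosen purely to sit on either side of that threshold:

```python
# Minimal sketch of a period-doubling bifurcation in the logistic map.
def settled_values(r, x0=0.5, transient=1000, keep=4):
    """Iterate past the transient, then return the distinct values the
    trajectory cycles among (rounded so tiny float noise doesn't matter)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return sorted(set(tail))

print(settled_values(2.8))  # one value: a single stable equilibrium
print(settled_values(3.2))  # two values: the system alternates between them
```

The qualitative behaviour changes abruptly as r crosses 3, even though the equation itself changes smoothly – which is the essence of a bifurcation.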
Different from what?
If natural variability is high and climate sensitivity to greenhouse gasses is low, then the natural variability overwhelms the response to greenhouse gasses.
That’s how I read it…could be wrong…
How will this be presented? In person, on-line? Understand you can’t say, but visuals are appealing and a hard copy to follow would help for those not versed in ‘lingo’.
Acronyms are one area (specifically RCP’s in Projections for 2100 segment) which might need expanding (or supplement?).
IPCC, AR’s, WG’s, and the like.
Maybe an appendix with resources?
Well written, not hard to follow.
“Maybe an appendix with resources?”
Including James Gleick’s excellent work, “Chaos” that makes an excellent job of explaining an extremely complex(!) subject in terms that an averagely literate individual can understand.
And this w/r/t uncertainties: https://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf
Maybe even earth’s energy budget and Carbon cycle, but only as references. You know the setting so may be excessive.
All the para past: “Statistician George Box famously stated: “All models are wrong but some are useful.” can be removed to shorten.
ECS history can be summarized: State of art 1970’s and now, to shorten, since very little has changed in that time. Add reference to appendix for those needing more.
Fitness for purpose relates solely to temperature. Is that intended when the climate discussion is so much more broad?
Heck, if the budget allows it she could have it leather bound with engraved letters. Remember the Northern Gateway oil quality bank report? I kept it on my office bookshelf for years just because it looked so gorgeous.
Add a reference (back) to degrees of freedom and tuning?
I too think you need to go in at a higher level.
I think I’d make some general observations about modelling to start with: modelling is purposeful and utility is the ultimate test, the need for abstraction and simplification, the need to estimate key parameters, and the need for verification. I’d then get to a high-level description in these terms of what is being done in GCMs, namely: (1) attempting to model the way the dynamics of the oceans and atmosphere will evolve over time under assumptions about various changes in the perceived drivers of climate; (2) the abstractions and simplifications that are used (use of voxels; everyone understands the way pixels work and the problems that emerge when using them to model a continuous space); (3) how parameterisation is done; and (4) how verification is handled.
I think you have then set the stage for some commentary on whether the general circulation approach being adopted is the best way to think about the problem in hand; what the problems are with the approach to simplification and parameterisation; and the limitations in verification. In each case these issues need to be linked back to the purpose.
There are a limited number of key points I’d make under each heading. However a couple of points not made that I think should be are the absolute temperature problem and that the ECS issue should be characterised as the models failing to reproduce empirical estimates of it.
Finally, putting all the above aside, there is the general comment about the fact that these models get used well outside their stated limitations and design purpose.
So I think start with the high-level structure and illustrate the issues, rather than deal with all the detail. Think of commonplace analogies for good and bad models, and models that do one job well but break down elsewhere.
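The voxel point above can be made concrete with a rough count of how coarsely a GCM carves up the atmosphere. The figures below are order-of-magnitude illustrations (a generic ~1° grid with ~30 vertical levels), not any specific model's configuration:

```python
import math

# Rough illustration of GCM "voxel" discretisation: at ~1 degree horizontal
# resolution with ~30 vertical levels, the atmosphere is split into a couple
# of million boxes, each ~100 km across at the equator -- far larger than an
# individual cloud, which is why sub-grid processes such as convection must
# be parameterised rather than simulated directly.
EARTH_RADIUS_KM = 6371.0

def grid_summary(deg_resolution=1.0, levels=30):
    n_lat = int(180 / deg_resolution)
    n_lon = int(360 / deg_resolution)
    cells = n_lat * n_lon * levels
    cell_km = 2 * math.pi * EARTH_RADIUS_KM / n_lon  # box width at the equator
    return cells, cell_km

cells, cell_km = grid_summary()
print(f"{cells:,} grid boxes, each ~{cell_km:.0f} km wide at the equator")
```

Halving the grid spacing multiplies the cell count by four (and, with the shorter time step it forces, the compute cost by roughly an order of magnitude), which is why resolution improves so slowly between model generations.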
Seems to me need to lead off with a simple summation:
The climate is too complex for mathematical models to predict the distant future. The uncertainties are too great, the non-linearities too complex, the variables too unsettled, and the fundamental physics too uncertain. Predictions are prophecy, not science.
That is something a lawyer can understand.
How about “Climate predictions are more like prophecy than a science such as physics.”
Says almost the same thing, but not as blatantly tendentious. (I’d expect a good lawyer to get the same message.)
In any case, need to lead-off with very simple summation of some type (executive summary), otherwise audience will drift away.
Maybe it’s just me, but long scientific papers are not particularly helpful. Get to the point at the front, then provide the details making the case.
Wonderful summary. Answers 3 of the 4 questions without one technical statement! She should just answer the first question and insert your summary at the end of that!
This might be tough sledding for the average member of your audience.
Could you say: “Latest science points to the conclusion that ‘a’ is between ‘c’ and ‘d.’ The implication of this is ‘e.’ The detailed justification for this conclusion is in section ‘x’ of the available handout.”
Depending on the circumstances you might even be able to circulate the supplementary material in advance.
My comments above cover all I found. I notice there’s no mention of their execrable regional skill, or inability to replicate such important features as the ITCZ and monsoon effect.
I’m guessing the focus here is almost entirely on predicting global average temperatures (and other features?), I wonder if some sort of disclaimer might be appropriate.
Hope I’ve helped.
Regarding tropical precipitation and the (double) ITCZ,
I don’t know if this has been raised here before or not, but I recently found the paper by Zhang et al.:
I find this paper remarkable in that it indicates:
* why and how the GCMs are in error projecting a HotSpot and
* CMIP5 has actually gotten worse than CMIP3!
Perhaps these two points (that the GCMs are in error, and that the errors are getting worse because they fail to produce accurate precipitation) are worth including.
As for the paper, perhaps it’s worth keeping a version as above, with technical details referred to online for anyone with more specific knowledge, while making a briefer version that excludes any references unlikely to be known by the lay reader.
I think I read both papers when I looked into the subject a while back, but thanks for the references.
It’s certainly indicative of something that models aren’t improving their ability to model these important global phenomena. AFAIK they also aren’t improving their ability to simulate such unique regional phenomena as the Tropical Easterly Jet Stream (TEJ), at least as an emergent phenomenon.
I found a paper on the TEJ that I don’t think I’ve seen before:
Weakening Trend of the Tropical Easterly Jet Stream of the Boreal Summer Monsoon Season 1950–2009 by B. Abish, P. V. Joseph, and Ola M. Johannessen (2013) J Clim 26:9408–9414. doi:10.1175/JCLI-D-13-00440.1
Some of the papers that cite it are also very interesting.
A book written by climate modelers for the layman is this one.
It is rather longer, but the introductory sections are good, and the rest is there for more information for the interested reader.
This is terrible! Simple statements are better. Nobody cares how much you know; they’ve already come to you to get an answer. Prepare two- or three-paragraph answers per question. Put the technobabble in an appendix. Clearly state the problems with the models – not all that needs to be known for a complete model is known. Assumptions or guesses need to be made. Etc. You hedge everything and don’t clearly answer the questions.
I agree it is too long. From a lawyer’s perspective (or a lay person’s), they want to know if the model has been validated in the real world. Has it? Has it made a risky prediction that turned out to be true (think Popper)? What is the direct effect of CO2, about 1 degree per 2xCO2? How much feedback is pumped into the model? How does that compare with observations (Lindzen, your work, Nic Lewis)? What about previous warming periods (for example 1910-1944)? Did the model capture them or just slide through the middle of the warming? Why did temperatures go down from 1944 to 1970? Did the models capture that? Why not? What natural factors are included? What is left out? The burden of proof is on the modeler. He or she needs to prove the model works; has that been done? Why or why not? I’d focus on the arguments and leave out all or most of the discussion. Just an intro to the science, the absolute minimum to teach the vocabulary, then get right after the arguments.
GCMs are models that use equations to predict the complex behavior of turbulent flows, like mixing cream in a cup of coffee and then stirring. How the coffee will look and taste depends on the coffee staying in a cup. Pouring cream into an ocean, at least for GCMs, means trying to predict how the cream will disperse with or without stirring.
Climate models use the Navier-Stokes equations to predict turbulence: the violent movement of air and water, and of the air-water interface. Either the equations are imperfect or the solutions are as yet too difficult.
GCMs do not predict abrupt changes in the climate, changes that can and have happened over years or decades, going from colder to warmer and back again.
GCMs have not incorporated the influence of ENSO, the El Niño and La Niña conditions occurring in the tropical Pacific that in turn influence weather in the Northern and Southern Hemispheres over decades, centuries or longer still.
GCMs have been built upon complex weather models, extending their forecast range to centuries and millennia. Weather can be predicted out only to 3 or so days.
Any particular GCM is a prisoner of the assumptions made about how the climate works. Each computer run of a GCM depends upon what assumptions are made at the beginning.
GCMs use parameterizations, which are shorthand assumptions about a particular item (such as rain or clouds or wind) substituted into the model when that item can’t be pinned down in an equation.
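To make this point concrete: a parameterization is essentially an empirical stand-in formula with tunable constants. The toy Python function below is purely hypothetical (it is not any real GCM scheme, and the threshold value is an invented assumption); it diagnoses cloud fraction from grid-cell humidity the way a simple parameterization might:

```python
def cloud_fraction(relative_humidity, rh_critical=0.8):
    """Toy diagnostic cloud parameterization (illustrative only).

    Real GCMs cannot resolve individual clouds, so formulas like this
    stand in for them: below an assumed critical humidity, no cloud;
    above it, cloud fraction rises linearly to 1. The constant
    rh_critical is a tunable 'knob', not a law of physics.
    """
    if relative_humidity <= rh_critical:
        return 0.0
    return min(1.0, (relative_humidity - rh_critical) / (1.0 - rh_critical))

print(cloud_fraction(0.7))  # dry grid cell: no cloud
print(cloud_fraction(0.9))  # partly cloudy, about 0.5
print(cloud_fraction(1.0))  # saturated, about 1.0
```

The point of the sketch is that changing `rh_critical` changes the simulated climate even though no physics has changed, which is exactly what “tuning” means.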
Lawyers always like a section at the top called “Definitions”.
I think you’re going to rapidly lose them, jargon is creeping in, it will be good to get comments from non-expert lawyers.
Try labelling your arguments and conclusions as to what style they are:
For example: argument “ad absurdum” and the like.
The circular logic of tuning GCMs by assuming (building in) that CO2 is the cause of observed warming and then projecting to the future:
argument: “circulus in probando”.
I think legal people know these terms.
Consider a graph of temps for a longer period, like 10,000 years or 2000 years.
Illustrate that we may know the average temperature for a period of years in the distant past, but it is a fallacy to compare it to a single year in the present. It should be easy for a valid GCM to predict thirty-year averages, but do they?
Tough duty, Judith, but a good effort. I was once pitching a CEO whose favor was important. His response was “You R&D guys are all the same, trying to impress me with how good your science is. I assume that. That’s why we pay you. What I want to know is how you are going to help me get to where I want the company to go.” Is this perhaps relevant here?
Without some context it is difficult to provide useful suggestions.
For example, are the lawyers bureaucrats, or providing advice to bureaucrats? And so on.
Your report seems a little (maybe a lot) too long. For example, the first six paragraphs could be summed up by the first sentence. What additional meaning is conveyed by mention of Newton’s Laws of Motion, or the First Law of Thermodynamics?
If the answers to the questions you have been asked cannot be adequately explained in half a page or so each, you have probably included too much padding.
It may sound like heresy, but pointing out that increasing the temperature of a planet by raising the number of CO2 molecules in the atmosphere from 4 in 10,000 to 5 in 10,000 has no experimental support at all, might be of use.
Maybe pointing out that no climatologist can even give a scientific definition of the climate of California more meaningful than any of your audience, might indicate the present state of climate science in general. Predicting the climate is difficult, if you can’t even say what it is at the moment.
KISS might be appropriate. Have fun, in any case!
A figure that is missing is the attribution one. It shows we can’t explain the warming without the CO2 increase which is an important result from models to date.
On the other hand, another explanation, that increased heat production causes higher temperatures, is supported by the increased levels of CO2 which result from burning lots of hydrocarbons.
Normal physics backs this up as well.
A figure that is missing is the attribution one. It shows we can’t explain the warming without the CO2 increase which is an important result from models to date.
We can’t explain why Earth warmed into the Roman and Medieval warm periods without manmade CO2. This lack of attribution shows we just don’t really understand climate change.
Work to understand what has happened before or give up saying you understand what is happening now. Models don’t explain anything to do with natural climate changes. Models do the hockey stick thing, they say nothing changed before now, that is clearly wrong.
Perhaps if we had as many measurements back then we could, so this is beside the point.
At some point it might be worth mentioning that the 2C increase projected as a target has no basis or meaning in science (reality). This, of course, also reflects that no one has defined what “the perfect climate” should be.
What on earth do you have against the Minoan/Bronze Age warming that was at least the equal of the Roman and MWP?
Personally I always like to see the current warming set against previous warm periods.
We have plenty of good proxy records plus instrumental records for the latter warming period that would enable us to provide sorely needed context.
That statement amounts to ‘diagnosis by exclusion’ and ASSUMES that all relevant variables are known and quantified. The fact that so many models cannot agree in their ‘projections’ indicates that such is not the case.
The fact that GCMs ‘project’ a warming at least twice the rate subsequently observed indicates that the ECS assumed is too high in virtually all cases.
The image indicates that the models can span the observed warming rate only when GHGs are accounted for.
Really? Is it wise to write that “it has to be CO2 because we don’t know what else it could be”?
That doesn’t make sense at all given that there are a lot variables we know we don’t know.
The drunk guy looking for his keys under the street light analogy comes to mind…
It has to be CO2 because the physics makes you expect it to be so, would be a better way to put it.
It has to not be CO2 because the warming in the atmosphere that the physics says has to happen does not show up in the measurements in those places. Clearly they have put something in the models that is wrong or they have left something important out, most likely both. They don’t take into consideration the robust natural cycles that stay in the same bounds even when large disturbances try to change things.
“It has to be CO2 because the physics makes you expect it”
And there you have it. The circularity of the models on parade.
It might be illuminating for the lawyers if you include how much extra area of cloud (or albedo change?) would be needed to offset the (hypothesized) increase in temperature given a doubling of CO2.
Even though it might make the report even longer, an “executive summary” of answers to each question might be helpful. Then if they want to dig deeper, keep the more in-depth explanation.
Judith, see my GCM models post at WUWT, based on the essay Models All the Way Down. It got very good feedback, since it is irrefutable and uses lots of your plus NCAR stuff. Perhaps an additional slide or two.
Reblogged this on Climate Collections.
> whether you think lawyers will understand this
Ask Lukas Berkamp.
> whether the arguments I’ve made are the appropriate ones
I don’t think so, as I think the whole lukewarm playbook incoherent at best.
> whether I’m missing anything
> anything that could be left out (its a bit long).
Every extraneous paragraph under a subtitle, and then most if not all adjectives, adverbs, weasel wording, and shameless plugs.
The presentation should hold with only the bullet points.
I’m an aircraft mechanic and found myself only occasionally re-reading certain sections to distill their essence. Assuming the reader is motivated even a smidgen, I don’t believe that this presentation is too long or overly complicated.
I agree that an executive summary might be useful as it would capture that segment of lazy persons who aren’t inclined to put the effort into slogging through the full text.
Speaking as a linguist :-), definitions of a few terms might help: ‘aerosol’ (which might otherwise conjure up images of spray-cans) and ‘parameter’ (which at least at one point seems to be used differently from ‘parameterization’) were two terms I noticed.
The terms ‘climate sensitivity’ and ‘equilibrium climate sensitivity’ are both defined (in successive paragraphs), and more or less identically; is it necessary to define both? If so, what is the difference?
BTW, it might be timely to mention another use of modeling: election predictions. The election of Trump is only one of several recent elections where an ensemble of models failed to make the correct prediction.
Your introduction implies certainty and accuracy. “These equations are based on fundamental physical principles, such as Newton’s Laws of Motion and the First Law of Thermodynamics.” Those laws are commonly cited by climate scientists to imply high certainty to the predictions.
Only later do you note: “The nonlinear dynamics of the atmosphere and oceans are described by the Navier-Stokes equations, based on Newton’s Laws of Motion, which form the basis for predicting winds and circulation in the atmosphere and oceans. The solution of the Navier-Stokes equations is one of the most challenging problems in all of mathematics: the Clay Mathematics Institute has declared this to be one of the top 7 problems in all of mathematics and is offering a $1M prize for its solution (Millennium Prize Problems).”
I recommend clearly noting the chaotic nature of climate up front. e.g., for your second sentence I suggest:
“These models approximate chaotic atmospheric and oceanic flows by the Navier-Stokes equations, Newton’s Laws of Motion and the Laws of Thermodynamics.”
Hagen ==> Absolutely — essential. I would add an asterisk or footnote # to the word chaotic — like chaotic¹, with a footnote defining chaotic in light of Chaos Theory — not just wildly random.
Well, superscripts don’t seem to be working…..
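For readers unfamiliar with what “chaotic, not just wildly random” means in practice, a minimal sketch can show it. The toy below uses the classic Lorenz equations (a textbook caricature of atmospheric convection, nothing like a real GCM, with step size and run length chosen only for illustration): two runs whose starting points differ by one part in a billion end up bearing no resemblance to each other, which is the sensitivity to initial conditions that limits deterministic forecasts.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) convection equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, steps=3000):
    """Integrate from (x0, 1, 1) and record the x-coordinate history."""
    x, y, z = x0, 1.0, 1.0
    xs = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        xs.append(x)
    return xs

a = trajectory(1.0)
b = trajectory(1.0 + 1e-9)  # perturb the start by one part in a billion
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the two runs diverge to a large separation despite the tiny perturbation
```

Note the system stays bounded (the “attractor”), so statistics can still be meaningful even when individual trajectories are unpredictable; that distinction is worth a footnote in the report.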
The election forecast models were wildly wrong and climate prediction models share one thing in common with them: even when they may be right their creators will not want to believe them if the predicted results do not correspond to the politically correct preconceived notions about how they should be…
The time has come for both the FDA and the EPA to be declared DOA and be seriously downsized, along with the government-education complex and the federal bureaucracy. It’s tighten, lighten, and brighten time. The preservation of individual liberty demands it.
It is clear to me but lengthy for business people.
They may want to know if the computer climate models solve a mathematical climate model or not. I was under the impression that only the mathematical portion of the energy load is incorrect. Your summary indicates that the fluid-dynamic portion of the model is not solved either. This makes the available climate models robots dependent on “tuning” and adjustments.
In court cases to date, the Peabody case in particular, the lawyers for the alarmist side used expert witnesses that said that dangerous warming would be greater than the two degree C limit that has been set. The lawyers for the skeptic side used expert witnesses that testified that the dangerous warming would be less than the two degree C limit that has been set. The court decided to go with what was thought to be the safe side.
If an expert witness presented the real data that shows temperature is well inside the natural bounds of the past ten thousand years and testified that climate changes in natural cycles and man does not cause them, it would be a better case with a better chance.
If flawed climate model output is presented against real data that does not agree, the flawed climate model output should be rejected.
When flawed model output is compared with other flawed model output that is a little different, the court will go with what is viewed as the safe side.
The only way to win the CO2 consensus alarmist court cases is with understanding that natural climate cycles are proceeding just as they have for ten thousand years.
“If an expert witness presented the real data that shows temperature is well inside the natural bounds of the past ten thousand years and testified that climate changes in natural cycles and man does not cause them, it would be a better case with a better chance.”
Your request is difficult to meet, but it cannot constitute a basis for the contemplated regulations. These require hard numbers based on a mathematical model accepted by the public, which are unavailable at this time. The mathematical model is the science, and the available computer models are not. They do not simulate the mathematical model.
Unless this presentation is targeted at attorneys with engineering/science backgrounds, I would suggest a re-write to make it more accessible. A J.D. is a doctorate-level degree, but it is a discipline that focuses more on thought processes than on data and analysis.
I would suggest writing as though addressing the general public. I think you will lose most attorneys with such a presentation. I have not followed the blog much recently, but your normal posts are quite readable, even by those not versed in the technical aspects of climate.
While no scientist, I have followed the climate debate for some time, and am familiar with the terminology, and most of the arguments regarding GCMs, etc. I would just advise not overestimating your audience’s capacity to follow a series of complex explanations in a condensed period of time.
For one example, I understand your discussion of “model resolutions” at the beginning, after Figure 1, but only because I have read so much on climate models here and elsewhere. Your description is clear, concise, and I think will go right over the head of most lawyers with no scientific background.
My best advice would be to act as though you are writing a blog post, which you do with a clarity that is both approachable and interesting, to your fellow scientists as well as the uninitiated like myself.
I may be too close to it all, but I thought the text was fine and not too long. A set of definitions at the beginning (already suggested) would be a good idea. Legal documents usually have them.
Your early subhead ‘Why do scientists have confidence in climate models’ worried me, as any use of ‘scientists’ like that implies that all scientists, or nearly all of them, are of the same view. The media do this routinely.
Why not, ‘Why do some scientists…’ or, ‘Confidence in climate models’ and no more.
And, as others have said, good luck! Lawyers like clarity in language, so sending the draft out is an excellent plan. Whatever people don’t understand needs attention so that they eventually do.
Keep close to everyday terms. When listing phenomena too small for the grid cell, mention thunderstorms and tornadoes.
Maybe even open with the iconic George Box quote: ” All models are wrong, but some are useful.”
Under reliability is another place I would emphasize that models have no parameterization for centennial and millennial influences. This includes variations in frequency of major volcanic events, deep ocean current oscillations, long-term solar activity and even cloud seeding cosmic rays. Paleo temperature proxies indicate significant climate fluctuations pre-dating the industrial age.
I have been arguing these points for decades and I offer the following supporting information in hope that it will be helpful to you.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.)
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the past century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on:
* the assumed degree of forcings resulting from human activity that produce warming, and
* the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
Nearly two decades ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which were greater than observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
His paper is online at
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s Figure 2 is in his paper that can be read at the link I have provided. Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
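For readers who want to check the arithmetic, the quoted factors follow directly from the ranges read off Kiehl’s Figure 2 as given above:

```python
# Ranges quoted above from Kiehl's (2007) Figure 2, in W/m^2
total_min, total_max = 0.80, 2.02       # total anthropogenic forcing
aero_strong, aero_weak = -1.42, -0.60   # aerosol forcing

print(total_max / total_min)    # roughly 2.5: spread in total anthropogenic forcing
print(aero_strong / aero_weak)  # roughly 2.4: spread in the aerosol 'fix'
```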
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Putting that another way,
The models are fitted to temperature observations so are forced to agree with those observations but, therefore, they CANNOT be tested and verified against contemporary [temperature] data. And, very importantly, forcing a fit with temperature data makes the models disagree about other climate parameters (notably precipitation).
I cite the 2008 US CCSP report, and Ron Miller and Gavin Schmidt.
The GCMs represent temperature reasonably well but fail to indicate precipitation accurately.
This failure probably results from the imperfect representation of cloud effects in the models. And it leads to different models indicating different local climate changes. For example, the 2008 CCSP report provided indications of precipitation over the continental U.S. as ‘projected’ by a Canadian GCM and a British GCM. Where one GCM ‘projected’ greatly increased precipitation (with probable increase to flooding) the other GCM ‘projected’ greatly reduced precipitation (with probable increase to droughts), and vice versa.
It is difficult to see how provision of such different ‘projections’ “can help inform the public and private decision making at all levels” (which the CCSP report says was its “goal”).
Ron Miller and Gavin Schmidt, both of NASA GISS, provided an evaluation of the leading US GCM. They are U.S. climate modelers who use the NASA GISS GCM and they strongly promote the AGW hypothesis. Their paper titled ‘Ocean & Climate Modeling: Evaluating the NASA GISS GCM’ was updated on 2005-01-10 and is available at
They have published nothing subsequently to replace it.
Its abstract says:
“This preliminary investigation evaluated the performance of three versions of the NASA Goddard Institute for Space Studies’ recently updated General Circulation Model E (GCM). This effort became necessary when certain Fortran code was rewritten to speed up processing and to better represent some of the interactions (feedbacks) of climate variables in the model. For example, the representation of clouds in the model was made to agree more with the satellite observational data thus affecting the albedo feedback mechanism. The versions of the GCM studied vary in their treatments of the ocean. In the first version, the Fixed-SST, the sea surface temperatures are prescribed from the observed seasonal cycle and the atmospheric response is calculated by the model. The second, the Q-Flux model, computes the SST and its response to atmospheric changes, but assumes the transport of heat by ocean currents is constant. The third treatment, called a coupled GCM (CGCM) is a version where an ocean model is used to simulate the entire ocean state including SST and ocean currents, and their interaction with the atmosphere. Various datasets were obtained from satellite, ground-based and sea observations. Observed and simulated climatologies of surface air temperature, sea level pressure (SLP), total cloud cover (TCC), precipitation (mm/day), and others were produced. These were analyzed for general global patterns and for regional discrepancies when compared to each other. In addition, difference maps of observed climatologies compared to simulated climatologies (model minus observed) and for different versions of the model (model version minus other model version) were prepared to better focus on discrepant areas and regions. T-tests were utilized to reveal significant differences found between the different treatments of the model. It was found that the model represented global patterns well (e.g. ITCZ, mid-latitude storm tracks, and seasonal monsoons). 
Divergence in the model from observations increased with the introduction of more feedbacks (fewer prescribed variables) progressing from the Fixed–SST, to the coupled model. The model had problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief. It was hypothesized that these problems arose from the way the model calculates the effects of vegetation, sea ice and cloud cover. The problem with relief stems from the model’s coarse resolution. These results have implications for modeling climate change based on global warming scenarios. The model will lead to better understanding of climate change and the further development of predictive capability. As a direct result of this research, the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation.”
This abstract was written by strong proponents of AGW but admits that the NASA GISS GCM has “problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief.” These are severe problems. For example, clouds reflect solar heat and a mere 2% increase to cloud cover would more than compensate for the maximum possible predicted warming due to a doubling of carbon dioxide in the air.
Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid-1980s. But it appears that cloudiness decreased markedly between the mid-1980s and late 1990s.
(ref. Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854.)
Over that period, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 W/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 W/sq metre). So, the fact that the NASA GISS GCM has problems representing clouds must call into question the entire performance of the GCM.
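The “two and four times” comparison in the paragraph above is simple arithmetic, using the figures exactly as quoted (5 to 10 W/sq metre for the cloud effect, 2.4 W/sq metre for the IPCC GHG estimate):

```python
ghg_forcing = 2.4                  # W/m^2, the quoted IPCC figure since industrialization
cloud_low, cloud_high = 5.0, 10.0  # W/m^2, the quoted surface warming from reduced cloudiness

print(cloud_low / ghg_forcing)   # roughly 2.1 times the GHG forcing
print(cloud_high / ghg_forcing)  # roughly 4.2 times the GHG forcing
```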
The quoted abstract says the representation of cloud cover “has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation”, but this adjustment is a ‘fiddle factor’ because both the radiance AND the saturation must be correct if the effect of the clouds is to be correct. There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs.
In summation, there are good reasons to suppose the climate models provide wrong indications of effects of changes to atmospheric GHG concentration and different models provide very different indications of precipitation resulting from changes to atmospheric GHG concentration because the models are of different climate systems so – at most – only one of them represents the climate system of the real Earth.
Lawyers are interested in money because it provides a means of comparison of many types of values. Science does not have a similar ease of intercomparison. (It should have been confidence limits, but they have been treated horribly).
Lawyers of the technical type often pride themselves on their logical analysis. In a detailed lawyer argument, more time will be on matters in dispute than matters resolved beforehand. The points of logic already recognised should be stressed and put before them for their convenience.
Now, you have presented a beautiful essay that covers a great deal of relevant ground. Some of it is opinion or trending that way. Not much is settled. Working back to my opening paras, opinion will be seen as in dispute and hence of low value for reaching an agreed judgement. Lawyers will be seeking killer points, but you have not specified any. Are you describing the mixed salad and dressing choice instead of the quarter pound steak?
To me, you have to come back to the request and address its dot points in turn, leaving your essay as supplementary information. The request was –
1. What is a Global Climate Model (GCM)?
2. What is the reliability of climate models?
3. What are the failings of climate models?
4. Are GCMs a reliable tool for predicting climate change?
You need to pull parts from your essay to address the dot points.
Point 1. What is a model? You have addressed this beautifully for the purpose. Give your summary.
Point 2. Reliability? Choose your main case history. I’d select reliability for forecasting, talk around your Figure 4. (Comparison of CMIP5 climate model simulations of global surface temperature anomalies with observations through 2014 (HadCRUT4). Figure 11.25 of the IPCC AR5)
Point 3. Failings? Choose your main failing. I’d choose the ECS case history, others might choose clouds, others chaos. If one failing is close to a killer, stress that one. You might not have noted strongly enough that the inherent failing is that a pseudo Earth cannot be replicated materially, only as a thought exercise i.e. a model.
Point 4. Prediction of climate change. Easy. I do not know of a GCM that has been agreed by experts to have done a useful, verifiable or verified prediction of a climate change.
I agree with Geoff: summarize the “request and address its dot points in turn” as an executive summary up front and leave the rest as supplementary information. The critical point is that many will only read the summary so make sure you address the dot points well. I agree with his suggestions for the dot points.
On the overall “understanding the causes of 20th century climate change”, I’d suggest being a bit more detailed on the ‘known unknown’ represented by paleoclimatology’s rough, century-scale, proxy-based sun-climate association. This is not incorporated into GCMs, largely because the atmospheric and space-climate communities have not yet worked out the processes involved.
Engels and Van Geel came around this in 2012: http://www.swsc-journal.org/articles/swsc/pdf/2012/01/swsc120022.pdf
The word “simulate” is ambiguous. The phrase “calculate the flow of energy” may be clearer in many cases: “Global climate models (GCMs) calculate the flow of energy [or heat] from the sun through the Earth’s climate system and back out to space.” When less energy leaves than enters – for example when rising GHGs slow down the rate at which energy escapes to space – then the planet must warm. To do these calculations …
“Conservation of Energy” would be more meaningful to attorneys than the “First Law of Thermodynamics”
It might be worth reminding lawyers that unlike with Law and politics, consensus doesn’t mean a thing; it’s all about how well a hypothesis accounts for what’s observed.
“it’s all about how well a hypothesis accounts for what’s observed.”
It is more than that. I would say:
It is all about proving a capability to reliably predict observable events.
Table 1 is a nice cherry pick of studies to present.
I recommend following your own strictures on responsible advocacy.
Judy, VTG makes a good observation but fumbles on the communication. I agree that Table 1 lacks other observational estimates of CS that would round out the range. It would help illustrate how there is uncertainty in the observational data as well.
As far as I understand, we do not have a reliable observational record of amounts of aerosols in the atmosphere, and certainly not from before the satellite era. And neither are we able to quantify in a reliable manner – traceable to observations and measurements – the influence of aerosols on reflection and absorption of electromagnetic energy or condensation of H2O in the atmosphere. Aerosol seems to me to be more a tuning parameter than anything else.
“Figure 4. Comparison of CMIP5 climate model simulations of global surface temperature anomalies with observations through 2014 (HadCRUT4). Figure 11.25 of the IPCC AR5
The observed global temperatures for the past decade are at the bottom bound of the 5-95% envelope of the CMIP5 climate model simulations. Overall, the trend in the climate model simulations is substantially larger than the observed trend over the past 15 years.”
Why would you choose to put up a graph that is two years old?
How about an up-to-date one….
Or even two ….
BTW: The dashed lines are for the realised forcings.
” are at the bottom bound of the 5-95% envelope of the CMIP5 climate model simulations. ”
Tony, What RCP is plotted here?
-whether you think lawyers will understand this
There are too many scientific terms that could be expressed in plainer English or have a plain-English explanation attached. [This is for the non-scientific lawyers; some lawyers have a high level of science knowledge.]
-“Impenetrability of the model and formulation process; extremely large number of modeler degrees of freedom in terms of selecting parameters and parameterizations
-Concerns about a fundamental lack of predictability in a complex nonlinear system characterized by spatio-temporal chaos with changing boundary conditions.”
Try: “The models have millions of degrees of freedom, so that no one but the programmer knows which choices have been made, and every program rapidly accumulates millions of different instructions which cannot be shared with other models. The complexity of these millions of instructions branches out unpredictably in a very short amount of time and space.”
– whether the arguments I’ve made are the appropriate ones
Computers cannot really do “Projections of future risks of black swan events (e.g. climate surprises)”
– whether I’m missing anything
richardscourtney’s idea above, which is far too long but is relevant.
– anything that could be left out (its a bit long).
Yes, it could be shorter in places. But this is the first draft, isn’t it?
IPCC coauthors Mauritsen et al. on climate model tuning:
“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure.”
“The models are fitted to temperature observations so are forced to agree with those observations but, therefore, they CANNOT be tested and verified against contemporary [temperature] data. And, very importantly, forcing a fit with temperature data makes the models disagree about other climate parameters (notably precipitation).”
The essence of his long argument is this.
All models are fitted to recent real-world observations, hence all model exactly the same scenario over a period of the recent past.
All models have different parameters.
Hence it is impossible for them to have a period of time with the exact same fit, absolutely impossible.
We all know that.
The models therefore have to have a variable inserted to ensure that they fit the observations during the recorded period.
Richard suggests it is aerosol forcing
” b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.”
But it is worse than that. Aerosol forcing at a specified value would still result in massively different recent world observations.
What could be used would have to be a variable, eg Aerosol forcing or other, which changes daily as needed, to bring the observations back into the correct range.
Factor x which is different for each model.
This factor would then be dropped at the start of the real run.
The proof is simple. Any model run backwards from the end of the recorded period must end up with a different past history to all other models. If they end up the same they must each have applied their own factor X to the model to make it agree with the past history.
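The “factor X” logic here can be sketched as a toy calculation (purely illustrative: two made-up zero-dimensional “models” with different sensitivities, each given its own tuning offset; no resemblance to an actual GCM is claimed, and all numbers are invented):

```python
# Toy illustration of the "factor X" tuning argument (not a real GCM):
# two models with different climate sensitivities each get a unique
# offset chosen so that both reproduce the same observed warming.
OBSERVED_WARMING = 0.8   # deg C over the calibration period (made up)
GHG_FORCING = 2.0        # W/m^2 over the same period (made up)

models = {"Model A": 0.5, "Model B": 0.8}  # sensitivity, deg C per (W/m^2)

# Each model's unique factor X forces agreement with the record.
factor_x = {name: OBSERVED_WARMING / s - GHG_FORCING
            for name, s in models.items()}

for name, s in models.items():
    hindcast = s * (GHG_FORCING + factor_x[name])  # tuned: matches observations
    forecast = s * GHG_FORCING                     # offset dropped for the "real run"
    print(f"{name}: hindcast {hindcast:.2f} C, forecast {forecast:.2f} C")
```

Both hindcasts come out identical by construction, while the untuned forecasts diverge, which is exactly the point being argued above.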
Thank you for your comments on my post. I much appreciate them.
For clarity, I repeat that the “essence” of my argument is this:
“In summation, there are good reasons to suppose the climate models provide wrong indications of effects of changes to atmospheric GHG concentration and different models provide very different indications of precipitation resulting from changes to atmospheric GHG concentration because the models are of different climate systems so – at most – only one of them represents the climate system of the real Earth.”
You say of that argument,
“Richard suggests it is aerosol forcing
” b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.”
But it is worse than that. Aerosol forcing at a specified value would still result in massively different recent world observations.
What could be used would have to be a variable, eg Aerosol forcing or other, which changes daily as needed, to bring the observations back into the correct range.”
No and yes.
No, because the models examined by Kiehl and me do each use a unique value of aerosol cooling to compensate for the degree that each model is assessed to ‘run hot’. I again provide the link to Kiehl’s paper because his explanation is so clear.
Yes, because those adjustments of the models do not provide a complete match with useful accuracy to variations in historic data over time. So, temperature data are altered to make them provide a better fit with what the models say historic temperatures were! These alterations to past temperature data are a clear violation of the scientific principle that theory must agree with observations (n.b. not vice versa).
sorry if I was discourteous.
Your argument needs to be listened to, time will tell you are right.
I agree with catweazle666 and Nicholas Darby above. Lawyers are pretty sharp people, very used to reading long complex documents. You should not worry about length or try to make it more accessible to the layman. They are not the typical layman. As Nicholas Darby points out, they are only interested in your providing what they need. It is very clear that they need to know the value of legal arguments based on GCMs, whether to use them or combat them, within a litigation most likely based on GHGs, including the possible responsibility of one of the parties due to having ignored scientific knowledge and/or advice.
They have probably come to you because they know you will not provide a one-sided defense of climate models that would be useless to their needs.
You have done a wonderful job with the document above. Please do not shorten it or simplify it at the expense of precision and fairness. I agree with most comments that you should include definitions of every scientific term starting with climate change and every acronym. Sharp people get frustrated about information that is not provided, not about excess of information.
I also agree with Jim D that you need to better address the role of GHGs in the light of GCMs. This is where the meat is for a lawyer. This is where money is going to come their way. Do GCMs strongly support the case for GHG responsibility? If so, how reliable is that attribution? Do not forget methane. A lot of methane is released accidentally or through normal procedures by oil and gas companies.
And this is a tough one to recommend and I hope you forgive me for doing so. I think you need to downplay your contribution to the climate sensitivity debate. There are no proper names in your report except for that part, and there shouldn’t be proper names in that part either. Every one of your readers is going to see a personal interest in that and they are going to automatically discount everything you say about it. The climate sensitivity part should be as neutral as the rest because otherwise it is of no value to them.
So in answering your request:
– Lawyers will understand anything you say provided the definitions to the terms used are there.
– The arguments are appropriate except in the climate sensitivity part, that sticks out from the rest.
– In my opinion you are missing a more thorough consideration of the GHG attribution problem.
– I don’t think you should worry about length at all. Complex matters cannot be dealt with in a concise manner without over-simplification. That is not what they want.
Judith, I would be very grateful if you would provide your final version. Your draft is already the best explanation I have read about climate models and it could be a landmark article to direct interested people so they understand what climate models are and what they do, why they are useful and how much should they be trusted to project a credible future climate.
Natural variability needs to be compared with CO2 forcing because the IPCC state they are only looking at human-caused warming. A recent paper discussed the sunspot number (SSN) as being able to account for the temperature record over the period 1800 to date. I know it is controversial and Willis won’t agree, but natural cycles drove climate before the IPCC CO2 hypothesis, so it is unscientific not to get a handle on it for comparison to GCMs.
A lawyer is bound to ask how much of the warming is natural.
The GCM that is closest to the observed temperature is Russian, I think? Don’t they rely heavily on natural cycles and the sun (dismissed by the IPCC)? Again, lawyers will ask which model best fits reality.
I suggest this statement is an unsupported assumption:
It is entirely dependent on the damage function. And the damage function is not validated, is highly uncertain, and very little study has been done to determine it.
Relevant to my comment, I see Javier said:
I suggest that not highlighting that the GCMs do not tell the economic impact of GHG emissions would be to “ignore [the lack of] scientific knowledge” that could be critical for their cases.
It’s clear and well organized.
I’m seeing some criticism that it is long and dry. No, not compared to what lawyers do.
Lawyers can track this.
An issue is that presenting to “lawyers” is vague. What kind? Tailoring to the issues they handle can be useful for comparisons, but there is the trouble: how would you know those things? (PI lawyers routinely model or deal with models. Future economic costs, such as lost income and medical expenses, are often dealt with via a model based on assumptions projected forward from known start conditions, as is the range of a jury verdict or settlement value.)
Some types of lawyers will be keenly aware of the underlying principles. “This is a lot like determining future value of a brand name!” Others not so much (criminal lawyers focusing on what happened already, unworried about future, for example)
Typo, Judith. Two “are” in the heading “Are GCMs are a reliable tool for predicting climate change?” Scratch the second.
I like it a lot. :)
“Global climate models (GCMs) simulate the Earth’s climate system, with modules that simulate the atmosphere, ocean, land surface, sea ice and glaciers.”
That may be the intention with the Global climate models, whether they achieve that is questionable.
I think you should have a look at your concluding paragraph. The statement in the second sentence seems to be somewhat disconnected from the first sentence:
“The climate modeling community has been focused on the response of the climate to increased human caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of the above or their potential impacts on the evolution of the 21st century climate.”
Not knowing the context of Judith’s missive, it’s a bit difficult to see if what Judith has drafted will serve its purpose.
As far as I can tell, lawyers are no different from the rest of us as far as climate models are concerned and there seems no specific requirement for Judith’s paper apart from simply being understandable.
The same comment applies to judges and doctors and engineers as well. If writing for a lay person, it is important to keep the message short and simple and provide references for any interested reader to follow through if they so desire.
Problem with the very first row:
“Global climate models (GCMs) simulate the Earth’s climate system”
Should be “try to simulate”.
These are lawyers, after all :-)
Judith: You’ve focused on global temperatures, but there are other metrics that clearly show that climate models are not simulating Earth’s climate. These include:
– Sea ice (models underestimate sea ice loss in the Arctic Ocean, and they simulate losses in the Southern Ocean while data indicate sea ice gains there)
– Precipitation (on an absolute basis, models show too much global precipitation compared to satellite and rain gauge based observations, suggesting the water vapor and its greenhouse effect are too high in models)
– Polar amplification (models underestimate polar amplification of high latitude Northern Hemisphere temperatures since the mid-1970s, they fail to simulate the polar amplified cooling at NH high latitudes from the mid-1940s to the mid-1970s and they fail to simulate the polar amplified warming from the 1910s to the mid-1940s.)
I’ve got graphs to support those. You’ve got my email address if you’d like them.
I could follow this pretty well. As noted by several folks, lawyers are used to dry documents and this is not likely an issue. I am used to explaining technical concepts (from the software world) to legal folks, and their instinct seems always to be to distill the concepts down to an absolutely clear algorithm which serves their purpose. Hence to some extent, one needs to know their purpose. And if any point within the algorithm seems woolly or uncertain, they will spend as much time as is necessary to chase that point down to its absolute basics, until the woolliness is removed. Hence the important thing is having clarity on all aspects, or at least the subset that happen to be important for their purpose. If for instance their purpose is to determine whether basing major policy and spend upon climate model outputs is legally responsible or irresponsible, then quite how uncertainties arise is not so much the issue, but a close characterization of those uncertainties, i.e. possibilistic or probabilistic, plus what these terms actually mean and what bounds or features they possess, and to what level of dispute each is subject. Agree with others that a terms definition section would be useful.
nitpick: below Fig 3 you use ‘the turn of the 19th century’ twice. This is an ambiguous phrase, but you seem to mean the beginning of that century, whereas it is normally understood as the end, i.e. the turn into the 20th. So it should be ‘turn of the 18th’, or, less ambiguously, just say ‘around the beginning of the 19th century’.
What is the climate system? Over 90% of global warming goes into the oceans. And what can a GCM say about this? There has clearly been an increase of ocean heat content during the 20th century. But how much, and with what temperature profile, is highly unclear. GCMs have been unable to get it. There are huge differences in the estimates when they have tried, and they have not even tried to present an ensemble mean. So more than 90% of the warming in the 20th century has not been accounted for in GCMs.
There is nothing clear about ocean heat content. The existing crude estimates are statistically worthless.
The essay is very good as a statement of present thinking. There is, however, a deep confusion in this thinking that might be worth mentioning. Specifically, if the system is never in equilibrium then ECS never occurs. That is, natural variability may cause the temperature to be very different from what ECS predicts when CO2 actually doubles. The system is probably far from equilibrium; how far we do not know.
Yet ECS is often treated as an accurate prediction. The policy debate is often based on it, as is much of the science, especially the dangerous impact projections. These uses of ECS look to be a widespread mistake.
David Wojick: I agree with you that ECS is useless, but the system is very sharply in equilibrium:
Planetary emissivity (all-sky transfer function):
ep = OLR(all)/ULW = f(all) = 239.4 W/m2 / 399 W/m2 = 3/5.
Normalized all-sky greenhouse factor, g(all) = 2/5,
clear-sky greenhouse factor, g(clear) = 1/3, meaning that G(clear) = OLR(clear)/2, in steady state.
Single-layer IR-opaque cloud area fraction = planetary emissivity = 0.6.
Albedo = 1 – sin 45° = 1 – √2/2 = 0.293, “symmetrical and highly constrained” (Stephens et al. 2015).
Clear-sky and all-sky surface energy budget is connected to TOA fluxes:
E(SRF, clear) = 2OLR(clear),
E(SRF, all) = 2OLR(all) + LWCRE.
No sign of any GHG-forcing, these TOA energetic constraints equalize immediately (instantaneously?, within one hydrological cycle [10 days]? within one year?) any extra CO2 (or methane) LW-absorption.
All data are from Table 4.1 of CERES EBAF Edition 2.8, (March 2015), you may find them here:
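For what it’s worth, the arithmetic of the quoted ratios checks out (this verifies only the numbers above, not their physical interpretation):

```python
import math

# Check the arithmetic of the CERES EBAF ratios quoted above
# (this verifies the numbers, not the physical interpretation).
OLR_ALL = 239.4  # W/m^2, all-sky outgoing longwave radiation
ULW = 399.0      # W/m^2, upward longwave at the surface

ep = OLR_ALL / ULW             # planetary emissivity, claimed 3/5
g_all = 1 - ep                 # all-sky greenhouse factor, claimed 2/5
albedo = 1 - math.sqrt(2) / 2  # 1 - sin 45 deg, claimed 0.293

print(round(ep, 3), round(g_all, 3), round(albedo, 3))
```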
Then why does global temperature, and many of the other central system parameters, oscillate so strongly?
David: My candidate to answer your question is the latent heat (evaporation) – water vapor content (GHG effect) – cloud area fraction (beta and planetary emissivity) – LW cloud effect (LWCRE) – SW reflectivity (albedo) interplay + plus internal SST variability from the decadal/centennial/millennial ocean deepwater cycle (ENSO etc).
We don’t really even know if there is an equilibrium.
If the system is “far from equilibrium” in the technical sense, that is chaotic, then there is indeed no equilibrium. Constant solar input might well be sufficient to keep global temperature oscillating forever across a range of scales. This would make ECS a false abstraction: a debate over a state of the system that cannot exist.
So far as I can tell this fundamental question is being ignored by the multi-billion dollar climate research program, and especially by the modeling community.
Yup. I often say ECS is a myth. Of course, I say many things are myths. Most people seem to just ignore it.
I wonder if Mr. Trump will pay attention. It would certainly bolster his promises to pull out of Paris and block expensive “climate” regulations.
David (and AK):
All of these parameters fluctuate (‘vibrate’) around the steady (equilibrium) state of beta=ep=f(all)=OLR(all)/ULW=3/5 and E(SRF, clear) = 2OLR(clear) and E(SRF, all) = 2OLR(all) + LWCRE. For example, total cloud cover (beta) ‘comes down’ from 0.615 to 0.605 in the past decades; planetary emissivity (ep) is also slightly higher (0.6005) than 3/5; but well within the observational one sigma uncertainty. So my candidate to answer your question is the latent heat (evaporation) – water vapor content (GHG effect) – cloud area fraction (beta and planetary emissivity) – cloud LW effect (LWCRE) – SW reflectivity (albedo) interplay + plus evident internal SST variability from the centennial/millennial ocean deepwater cycle.
The imbalance tells us we are below the equilibrium temperature, despite all the warming. This is an observational fact from the OHC trend that is crucial to the understanding.
Your list of uses of GCM is oriented towards climate experts. My list might be:
0) QUANTITATIVE STATEMENTS: Climate scientists don’t need GCMs to conclude that the planet has warmed about 1 degF since 1950; a variety of temperature records robustly support this conclusion. Climate scientists don’t need GCMs to conclude that anthropogenic GHGs have contributed to this warming. The interaction between GHGs and thermal infrared radiation measured in the laboratory proves that rising GHGs will cause some warming. However, consensus climate scientists rely on GCMs when making all QUANTITATIVE predictions about future warming and the human contribution to past warming. Every quantitative prediction made by the IPCC or the NAS should begin with the qualifying phrase: “If GCMs are correct, …”. (Evidence that GCMs are not always correct is discussed elsewhere).
1) GLOBAL CLIMATE CHANGE: GCMs are used to predict how GLOBAL climate (especially temperature) will change at various times in the future if the future atmosphere contains larger quantities of GHGs emitted by human activities. Recently, four future scenarios (Representative Concentration Pathways) have been used by GCMs: RCP 2.6 assumes that global annual GHG emissions peak between 2010-2020, with emissions declining substantially thereafter. Emissions in RCP 4.5 peak around 2040, then decline. In RCP 6.0, emissions peak around 2080, then decline. In RCP 8.5, emissions continue to rise throughout the 21st century. Since emissions have peaked in the developed world, the pathway the planet will follow depends on what happens in less developed countries, as well as in the developed countries (with only 1/3 as many people). RCP 4.5 requires immediate drastic cuts in emissions by developed countries and significant limitations on emissions growth in the developing world. A 50% cut in emissions in the developed world by 2050 and an 80% cut later in the century are consistent with RCP 4.5. There is debate about whether business-as-usual is better represented by RCP 6.0 or 8.5, but so far we have slightly exceeded the RCP 8.5 scenario. There is also debate about whether RCP 8.5 is an overly pessimistic worst-case scenario. RCP 8.5 is sometimes described as an “economic golden age” for the developing world fueled mostly by abundant and inexpensive coal. The RCP 8.5 scenario gets the greatest attention from politicians, policy advocates, and the press. (RCP 2.6 is an unrealistic scenario advocated by James Hansen and 350.org to guarantee the long-term stability of our planet’s ice sheets millennia into the future.)
2) REGIONAL CLIMATE CHANGE: GCMs are also used to predict how REGIONAL climate will change. Unlike global climate, there is significant disagreement between various models about regional climate. For example, some models predict an increase in rainfall in the Amazon basin while others predict a decrease and partial conversion to grassland. Some models predict a decrease in rainfall in the arid western US while others do not. They all predict a rise in temperature – and therefore evaporation – and therefore at least some increase in drought. All models predict an increase in evaporation with warming, but they disagree about where that water will come down.
3) DECADAL CLIMATE CHANGE: GCMs are being used to help local governments plan for changes coming in the next decade or two. (This is long-term planning for most government expenditures.) Recent experiments have shown that local changes over the next decade or two are more likely to follow the trends of the last several decades than the predictions of current GCMs.
4) ATTRIBUTING EXTREME WEATHER: GCMs are being used to assert that current extreme weather (hurricanes, tornadoes, floods, droughts, and heat waves) has been made more likely by the higher current level of GHGs in our atmosphere. Given the disagreements about regional climate change and a lack of skill in predicting decadal climate change, such calculations are controversial. All GCMs make similar predictions about the direction of some future global climate trends, but when and where those changes will be observed against the background of normal climate variability with a reasonable degree of confidence is uncertain. The public statements made by the current President’s Science Advisor about a link between rising GHGs and recent snowy winters on the East Coast are an example of attributing current weather extremes to rising GHGs that is not widely accepted. According to the IPCC, the only changes in extreme weather that have been unambiguously detected by observation are an increase in short intense rain (that can produce flash flooding in some areas), an increase in extremely hot days, and a decrease in extremely cold nights.
5) SEA LEVEL RISE: Climate models are used to predict the thermal expansion of sea water associated with global warming and the disappearance of the Greenland and West Antarctic Ice Sheets, the two major contributors to future sea level rise. Thermal expansion of seawater is well understood, but the extent of warming and the rate at which warmth will be transferred into the deep ocean is not. Ice sheet melting with warming is understood, but surface darkening due to carbon black is complicated and the physics of the flow of ice sheets is very poorly understood. GCMs project a rise of 0.33 to 0.63 meter for RCP6.0, and 0.45 to 0.82 meter for RCP8.5. Projections of 1 meter or more are based on speculation that the transiently fast flow observed at some sites will turn into widespread collapse. On the other hand, sea level rose about 9 inches in the past century and for the last two decades at a rate of 1 inch per decade. The low end of the IPCC’s GCM projections therefore involves little acceleration. Needless to say, predictions of 1 m or more get most of the publicity despite little current evidence for a dangerous acceleration.
6) REPRODUCING PAST CLIMATE CHANGE: GCMs have been used to “hindcast” the historical record of rising temperature over the last dozen decades and attribute that increase to rising GHGs. However, the parameters of climate models can be tuned to simultaneously represent both current climate and past climate change, so successful hindcasting does not validate the future predictions of GCMs. GCMs that make the same hindcasts about the past can differ by a factor of two when predicting future warming.
7) SOCIAL COST OF CARBON: The output from GCMs provides the raw data used in the calculation of the social cost of carbon. Since damage rises exponentially with warming (and sea level rise), a low probability of a large amount of warming makes a large contribution to the calculated social cost of carbon. If climate scientists were sure that a doubling of CO2 would not cause more than 3.5 degC of warming (the current IPCC consensus is 1.5 to 4.5 degC), the social cost of carbon would drop significantly.
CLIMATE SENSITIVITY: Can climate scientists make quantitative projections about the future without using GCMs? Yes: observations of past changes in GHGs/radiative forcing and temperature allow us to calculate our planet’s sensitivity to rising GHGs, the TCR and the ECS. RCP 6.0 is equivalent to a doubling of CO2. A TCR of 1.35 K means it will be 1.35 degC warmer near the end of the 21st century, and an ECS of 1.6 K means temperature will rise another 0.25 K before it stabilizes sometime in the 22nd century.
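The observational energy-balance approach mentioned here can be sketched in a few lines. All input values below are illustrative round numbers of my own choosing, not figures from any specific study; only the 3.7 W/m² forcing per CO2 doubling is a conventional value:

```python
# Sketch of an observational energy-balance sensitivity estimate.
# Inputs are illustrative round numbers, not from any specific study.
F_2X = 3.7   # W/m^2, conventional forcing for a doubling of CO2
dT = 0.8     # deg C, observed warming between base and final periods
dF = 2.2     # W/m^2, change in total radiative forcing (illustrative)
dQ = 0.6     # W/m^2, change in Earth's heat uptake (illustrative)

TCR = F_2X * dT / dF          # transient climate response
ECS = F_2X * dT / (dF - dQ)   # equilibrium climate sensitivity

print(f"TCR ~ {TCR:.2f} K, ECS ~ {ECS:.2f} K")
```

Note how strongly the ECS estimate depends on the heat-uptake term dQ, which is one reason such estimates remain controversial.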
How can energy balance models and GCMs agree about past warming and disagree about future warming? This is currently the subject of much controversy and research. One likely reason is that GCMs have relied on unrealistic sensitivity to cooling by aerosols when predicting currently warming. On the other hand, some GCMs predict that energy balance models should be underestimating climate sensitivity.
If energy balance models are correct, the social cost of carbon is going to come down significantly….
Other evidence that GCMs can be wrong: Warming 1920-1940 nearly as rapid as 1975-1995, but not significantly driven by rising GHGs. Pause from 1998-2013 inconsistent with GCMs. Hot spot in tropical troposphere. None of these tell us much about how wrong. Could be over-warming or too little unforced variability.
Sorry that none of these words mesh with your current write-up. Like others, I suspect your current write up is too technical and needs a broader focus.
Very curious about the supposed variety of temperature records referred to in the first paragraph. To my knowledge there are no records of global temperature besides the satellite measurements begun in 1978. They show very little warming and that due to the giant ENSO, not GHG increases. Statistical guesses are not temperature records.
good suggestions here, thanks
This seems reasonably balanced, in contrast to what might be presented by activists and advocates.
Don’t make it complicated. Our Swedish professor Gösta Walin, Gothenburg, stated the following in 2005:
I see it mostly as an expression of megalomania to think that the climate can be imaged in a computer.
When I first started to use simulations, our expert consultant explained: if you have to guess more than 3 relations between variables, it is mathematically impossible to get a sensible result.
You state: “There are literally thousands of different choices”. That means guesses, which all contain errors. These are multiplied with each other several million times to arrive at 2100.
A thousand guesses times millions of multiplications can only produce nonsense.
Use common horse sense to explain that climate models are play toys for overage kids with the world’s most expensive toys.
“These equations are based on fundamental physical principles, such as Newton’s Laws of Motion and the First Law of Thermodynamics.”
Omission of the Second Law merits highlighting. GCMs are based on the assumption that the atmosphere can be approximated as a linearly perturbed equilibrium aka isothermal system. The Second Law covers nonlinear and chaotic behavior – including black holes and dark matter, and introduces dissipation, free energy and entropy. The closest GCMs come to describing dissipation is a linear Navier-Stokes viscosity tensor. Thermal dissipation, a quite distinct physical process, remains an alien concept.
A simplistic thermodynamic model might suppose that, for every 240W incoming energy at 5800K, the earth emits 240W at 200K. The Second Law says 97% of the incoming energy has been thermally dissipated – work aka weather has reduced hot to cold. The contentious coefficient of climate sensitivity is basically how much the 240W value would change should the earth’s surface temperature increase by one degree.
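One way to read the 97% figure, on the assumption (mine, possibly not the author’s) that “dissipated” means the Carnot-style work potential destroyed between the absorption and emission temperatures:

```python
# Carnot-style reading of the 97% figure: fraction of work potential
# destroyed when 240 W absorbed at ~5800 K is re-emitted at ~200 K.
# This interpretation is an assumption, not an established result.
T_HOT = 5800.0   # K, effective solar emission temperature
T_COLD = 200.0   # K, effective terrestrial emission temperature

dissipated_fraction = 1 - T_COLD / T_HOT
print(f"{dissipated_fraction:.0%}")  # prints 97%
```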
One of the major achievements in physical mathematics is that most, if not all, laws have variational implications. Given a plethora of factors, is it not possible that nature might settle on that combination favoring a cool earth? There would seem ample opportunity for analytic exploration by the scientifically curious. Unfortunately, discovery runs the risk of politically unacceptable findings.
To put the modeling science in perspective, I recommend this quote:
“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
Attributed to [John] von Neumann by Enrico Fermi, as quoted by Freeman Dyson in “A meeting with Enrico Fermi” in Nature 427 (22 January 2004) p. 297
And point out that the GCMs have hundreds of parameters.
Another point. A model that is produced by fitting a data set can ONLY be tested by its ability to match observations OUTSIDE that data set. Hence models fitted to the 20th century data can only be tested by data from other times.
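The point about in-sample versus out-of-sample testing can be made concrete with a toy example (my own illustration, nothing to do with an actual GCM): a many-parameter curve fit that matches its tuning data closely can still fail badly outside the fitted interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Tuning period" analogue: noisy observations on the interval [0, 1]
x_fit = np.linspace(0.0, 1.0, 50)
y_fit = np.sin(2 * np.pi * x_fit) + 0.1 * rng.standard_normal(50)

# A flexible model with many free parameters fits the tuning data well
coeffs = np.polyfit(x_fit, y_fit, deg=7)
in_sample_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_fit) - y_fit) ** 2))

# "Other times" analogue: the same model evaluated outside the tuning interval
x_test = np.linspace(1.0, 1.5, 25)
y_test = np.sin(2 * np.pi * x_test)
out_sample_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

print(in_sample_rmse, out_sample_rmse)  # out-of-sample error is far larger
```

Good in-sample agreement here says nothing about extrapolation skill, which is exactly why hindcast fit alone cannot validate a model.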
Sorry, this is too waffly.
These lawyers need to be informed by the use of short sharp sentences.
It is essential to point out that GCMs have not been verified: therefore they are unusable.
Would they use a spreadsheet to do their hourly billing if the spreadsheet returned different answers each time they ran it?
Would they fly on an aircraft that had not been proven to be safe to a sufficient level of safety? Of course not.
Lawyers may have razor sharp minds and very good memories, but they have been exposed to the same drip – drip warming nonsense that the rest of the public have experienced over the years.
It will be hard to convince them without conclusive arguments, such as those outlined by others here.
climatereason makes a good point about “circumstantial” evidence.
In the past when legal cases were tried claiming damages from work environment conditions (eg. Asbestos dust) or from medications (eg. Thalidomide) the Bradford Hill factors were used to determine causality. This is pertinent because it is a method with legal precedent for deciding on scientific evidence which is mostly circumstantial.
How does this pertain to GCMs? I would like to see a paper addressing each of the major factors included in GCMs with the question: Is there any other explanation equally or more likely than the presumed relationship in the GCM? If the answer is yes, then causality is not proven; the parameter is legally only an opinion.
The post above addresses one of the key factors, namely CO2 sensitivity. From studies comparing CMIP5 models, there is a range of estimates for 4xCO2. That refers to temperature increases resulting directly from quadrupling CO2 concentrations in the air. Legal questions arise: On what basis is this parameter estimated? In what time frame is 4xCO2 expected?
As others above have discussed, a second GCM factor is atmospheric aerosols. The same legal analysis is needed. What are the range of estimates for aerosols effects on temperature, and what is the basis?
Another key issue is the effect of water vapor and droplets (clouds) in the atmosphere. Same thing: What is the range of estimates and what is the basis for choosing one or another?
From my looking at INMCM4 model, it turns out that climate system inertia is a fourth key parameter, more specifically the estimate of how long oceans take to respond to changes in other factors. All of the other CMIP5 models are way too volatile, compared to temperature observations.
Bradford Hill background:
Some good work comparing attributes of CMIP5 models has been done by O. Geoffroy, D. Saint-Martin et al.
Centre National de Recherches Météorologiques, Groupe d’études de l’Atmosphère Météorologique (CNRM-GAME), Toulouse, France
Transient Climate Response in a Two-Layer Energy-Balance Model. Parts I & II
And land/ocean indices have in common the data sources.
I think sat records deserve an honorable mention here.
Respectfully suggest that you define the term “calibration” at its first mention. As it stands, the definition — “Continual ad hoc adjustments of the model (calibration) …” — is the fourth mention.
I must confess I did not read it all. Got to this statement:
‘the chaotic nature of the climate system and internal climate variability’
implying a lack of understanding of the origins of the modes of climate change that are natural in origin.
Until a climate model can predict the evolution of surface pressure in the Antarctic circumpolar trough next year and through to the next decade and beyond, say for a period of thirty years, it can be fairly remarked that the fundamental parameters responsible for the evolution of the climate system have escaped the attention of the engineers responsible for the model, and their work is of no value as a predictive tool.
The Southern Annular Mode is the prime source of climate variability across the globe. Change in geopotential height and shifts in atmospheric mass driving surface pressure and wind around the entire globe has its origin in high southern latitudes.
I think you need to lead in with this quote to set the stage for the whole discussion:
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
– IPCC TAR WG1, Working Group I: The Scientific Basis
Aren’t ALL models based on the forcing equation from Myhre ?
ΔF = 5.35 ln(C/Co) [W m-2]
Is the 5.35 correct ?
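For what it’s worth, the fit itself is trivial to evaluate; a minimal sketch (the 280 ppm baseline is just an illustrative pre-industrial figure):

```python
import math

def myhre_forcing(c, c0, alpha=5.35):
    """Simplified CO2 radiative forcing fit of Myhre et al. (1998), in W/m^2."""
    return alpha * math.log(c / c0)

# Doubling CO2 (e.g. 280 -> 560 ppm) gives 5.35 * ln(2), about 3.7 W/m^2
print(round(myhre_forcing(560.0, 280.0), 2))
```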
The formula you cite is a FIT to data produced by LBL models. It is fit to the OUTPUT of those models
it is NOT USED IN GCMs
GCMs use BAND MODELS .. band models are calibrated against LBL models .. Look up in the sky? See those satellites sensing the earth?
They use band models.
See your weather forecast it uses a band model
Band models approximate the LBL models
The effect of C02 is calculated by the band model.. it is not INPUT to the model.
And yes.. 5.35 is correct
Thank you Steve for the explanations.
Then the GCMs are recalculating band by band all possible energy absorptions (over a spectrum of thousands of discrete lines for each molecule) in each of the 5000+ million 1 km2 by 1000 m cells, and this for every time interval, to integrate them into a radiative forcing? Or do I interpret something incorrectly?
Given the central importance of this equation for climate change in general, it is alarming how little evidence there is for its validation. Some attempts at confirmation have a high content of circular reasoning, e.g. measurements of spectral change through natural air columns.
One would think that laboratory to small field scale experiments would provide useful data. If you have found references to these, please post them. At the least, the imperfection of such experiments might constrain the variability of the 5.35 value or even show that it is beyond credible limits.
Measures of climate sensitivity, equated to the question ‘does CO2 heat the atmosphere?’, are often defended by fuzzy arguments of ideology rather than by hard observation and calculation. Look at how the IPCC data reworked by Lewis and Curry gave rather different ranges for sensitivity, ECS and TCR.
Simulation of future climate states, from decades to centuries, e.g. simulations of future climate states under different emissions scenarios.
I think it should be highlighted that these are *expectations* developed with incomplete information, and that therefore the unexpected is inevitable.
In the IPCC AR5 WG1 report (page 920), ECS is defined as:
ECS = F2xCO2 / α
where F2xCO2 is the primary forcing for doubling the CO2 concentration (5.35*ln2)
and α is the combination of all possible feedbacks to the primary forcing.
It is quite possible that α would be zero or near zero (page 818).
Then ECS would be positively or negatively infinite!
I’ve never heard of a system that is unstable with zero feedback; it would have to be intrinsically unstable… and we would not exist.
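The divergence as α approaches zero is easy to see numerically; a minimal sketch (my own illustration, with arbitrary feedback values not drawn from any model):

```python
import math

F2XCO2 = 5.35 * math.log(2.0)  # primary forcing for doubled CO2, ~3.71 W/m^2

def ecs(alpha):
    """ECS = F_2xCO2 / alpha, with alpha the net feedback parameter in W/m^2/K."""
    return F2XCO2 / alpha

# As alpha shrinks toward zero, the implied sensitivity blows up
for alpha in (2.0, 1.0, 0.5, 0.1):
    print(alpha, round(ecs(alpha), 1))
```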
Ask the lawyers to imagine this year is 2116 and that to get the GCM’s to retrofit 2000 to 2116 requires calibration. Then ask them how useful they believe the models are at predicting the future 100 years in 2016?
Hi Judy – Glad you are doing this. These resources might be useful to you
What Are Climate Models? What Do They Do? https://pielkeclimatesci.wordpress.com/2005/07/15/what-are-climate-models-what-do-they-do/
Pielke, R.A., Sr., 2003: The Limitations of Models and Observations. COMET Symposium on Planetary Boundary Layer Processes, Boulder, Colorado, September 12, 2003. http://pielkeclimatesci.wordpress.com/files/2009/09/ppt-3.pdf
Quotes From Peer Reviewed Paper That Document That Skillful Multi-Decadal Regional Climate Predictions Do Not Yet Exist
Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008. http://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf
On your use of the term “resolution” I assume you mean grid increment. At least 4 grid increments in each direction are needed to realistically resolve a feature in a model; see
Pielke, R.A., 1991: A recommended specific definition of “resolution”, Bull. Amer. Meteor. Soc., 12, 1914 http://pielkeclimatesci.files.wordpress.com/2009/09/nt-27.pdf
Pielke Sr., R.A., 2001: Further comments on “The differentiation between grid spacing and resolution and their application to numerical modeling”. Bull. Amer. Meteor. Soc., 82, 699. http://pielkeclimatesci.wordpress.com/files/2009/10/r-241.pdf
Laprise, R., 1992: The resolution of global spectral models. Bull. Amer. Meteor. Soc., 9, 1453-1454
The “relatively coarse spatial and temporal resolutions of the models” that fail to capture, “many important processes that occur on scales that are smaller than the model resolution (such as clouds and rainfall,” are brought into a sharper focus using ‘parameterizations.’ “Parameterizations of subgrid-scale processes are simple formulas based on observations or derivations from more detailed process models,” that must then be, “‘calibrated’ or ‘tuned’ so that the climate models perform adequately when compared with historical observations.” And, given all of this massaging of, “physical processes,” that are, “either poorly understood or too complex to include in the model given the constraints of the computer system,” we must rely on the integrity of the creators of the models and their knowledge of statistics if climatology is to be considered a real science and not just an exercise in numerology that should be given the seriousness we accord to the ancient science of astrology.
“based on Newton’s Laws of Motion, which form the basis of prediction winds and circulation in the atmosphere and oceans.”
Should this read ‘basis of predicting’ instead of ‘basis of prediction’? Or ‘basis of the prediction of’?
“narrowed this range of ECS – the 1979 National Academy of Sciences study – the so-called Charney Report – cited a likely range for ECS…”
Too many hyphens used for different purposes – maybe something like this instead:
“narrowed this range of ECS. The 1979 National Academy of Sciences study (the so-called Charney Report) cited a likely range for ECS… “
I would suggest that the term ‘calibration’ should not be used in this incorrect manner by climate modelers, or if you feel you have to, then explain that in the engineering world of computers and places where things ‘just have to work’ to reduce loss of life, calibration *USUALLY* means: checking by comparison, or adjusting a value to match that of a *KNOWN* and *MORE ACCURATE VALUE*.
There must be legions of calibration engineers and technicians turning over in their graves at the spurious concept of calibrating the output of a computer even though the inputs to the computer program are many, and of *unknown* value.
Making “degrees of freedom” understandable is a tall order, especially since parameterization reduces the degrees of freedom. “Human society, like the climate system,” says Lindzen, ” has many degrees of freedom. The previous cases [i.e., changes in the scientific paradigm based on evolving cultural, political, observational and institutional factors] lasted from 20 to 30 years. The global warming issue is approaching 30 years since its American rollout in 1988 (though the issue did begin earlier). Perhaps such issues have a natural lifetime, and come to an end with whatever degrees of freedom society affords. This is not to diminish the importance of the efforts of some scientists to point out the internal inconsistencies. However, this is a polarized world where people are permitted to believe whatever they wish to believe. The mechanisms whereby such belief structures are altered are not well understood, but the evidence from previous cases offers hope that such peculiar belief structures do collapse.”
Judith – will your report be peer reviewed? Do you think it would have any chance of passing?
It may be helpful to delay your report until the 30th of this month when this contest closes.
“It has often been claimed that alarm about global warming is supported by observational evidence. I have argued that there is no observational evidence for global-warming alarm: rather, all claims of such evidence rely on invalid statistical analyses.
Some people, though, assert that the statistical analyses are valid. Those people assert, in particular, that they can determine, via statistical analysis, whether global temperatures have been increasing more than would be reasonably expected by random natural variation. Those people do not present any counter to my argument, but they make their assertions anyway.
In response to that, I am sponsoring a contest: the prize is $100 000. Anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation should be able to win the contest.“. From:
I suspect that no one will be able to successfully identify which of the 1000 samples have a known, deliberate trend added.
Which leads to the strong implication that if no one can identify random data with a trend added, how can we identify if a global temperature series has ‘mans footprint added’?
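The difficulty can be sketched with synthetic data (a crude AR(1) ‘red noise’ analogue of natural variability — my own toy construction, not the contest’s actual series): correlated noise alone produces nonzero fitted slopes, so a slope estimate by itself does not identify an added trend.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_series(n, phi=0.9, sigma=0.1):
    """Red-noise (AR(1)) series, a crude analogue of natural variability."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

def ols_slope(y):
    """Ordinary least-squares slope of y against time."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

n = 135  # roughly the length of the instrumental record, in years
natural = ar1_series(n)                         # no trend added
trended = ar1_series(n) + 0.005 * np.arange(n)  # deliberate linear trend added

print(ols_slope(natural), ols_slope(trended))   # both slopes are nonzero
```

Distinguishing the two cases reliably requires a statistical model of the noise itself, which is precisely what is in dispute.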
You are dealing with lawyers. They expect a list of caveats, and some disclaimer (for them inappropriately using the data).
You have forecast to 2100. The past 2k years were very different from the previous 3k. We do not know where we are heading.
I am a patent attorney, and so one of your audience.
I had no trouble following the presentation and thought it was very good.
You might want to mention the 2100 results of the intermediate RCP – you discussed the two extremes, but not the middle ones (4.5 and 6.0) and lawyers will want to know about the one which is most likely to occur. It is my understanding that 8.5 is not very realistic, but then neither is the 2.6.
In the last topic “Are GCM’s are a reliable tool for predicting climate change?” you never really answered the question.
You should say NO somewhere and then discuss why modelling needs to be improved and we should continue to work on modelling – but that they are not ready for prime time yet.
Here is a comparison of the global surface temperatures as simulated by the Canadian climate model to the observations. The warming trend in the model is 4 times the observations.
The surface station measurements must be adjusted from HadCRUT4.5 for the effects of urban warming contamination. I used the result from McKitrick and Michaels, which is 0.042 °C/decade. Several other studies gave very similar results. So what is the UN panel’s response to these studies? They dismiss them with the nonsense statement, “the locations of greatest socioeconomic development are also those that have been most warmed by [natural] atmospheric circulation changes.” While the IPCC dismisses natural climate change as insignificant, it invokes unspecified natural circulation changes that affect only where cities are located, but do not affect the surrounding countryside or farmland. This is utter lunacy.
The UN panel wrote in their last report, “Many empirical relationships have been reported between solar activity and some aspects of the climate system. The forcing from changes in total solar irradiance alone does not account for these observations, implying the existence of an amplifying mechanism.” That means the solar effect is much greater than the direct heat effect. Solar activity also affects cloud cover.
The UN report then ignores solar activity despite the empirical evidence. That is anti-science. Many studies report solar activity accounts for more than half of the 20th century warming.
When we account for natural millennium warming and the urban warming effect and the new Stevens aerosol estimates, and assuming exponential increase of CO2 air concentrations, temperatures are forecast to increase from now by only 0.57 °C by the year 2100 due to greenhouse gases, not 3.5 °C as estimated by climate models with the extreme emissions scenario RCP8.5. https://friendsofscience.org/index.php?id=2230
Unfortunately your chart is wrong and McKitrick’s paper is wrong.
There, that told you.
Why you are wrong you are never likely to know unless mosh is feeling expansive.
McKitrick and Michaels 2007 are not wrong, De Laat and Maurellis 2006 (who got the same results) are not wrong, Watts et al are not wrong, Spencer 2010 was not wrong, and several others who studied the urban warming effect were not wrong. The IPCC AR5 report agrees that urban development is highly correlated with temperature rise, but ignores it.
De Laat and Maurellis 2006 and McKitrick 2010 proved that natural circulation changes are not correlated with the location of cities, so the IPCC statement is nonsense.
Mosh incorrectly thinks that McKitrick and Michaels are wrong because the database used assigned the country’s population and GDP to an island state that is part of the country, to determine the GDP growth rate. This assumes that different parts of a country have the same general rate of economic development. That growth rate is likely reasonably accurate even though the island state has a different population and GDP than the country. Ignoring the urban heat island contamination of the surface station record is a huge offense to science.
If I were a lawyer, which I am not, and read this draft I would take away 2 points….
Point 1 – verbatim from the most significant paragraph within this draft is that these GCMs seem to be an exercise in futility to explain the past or predict the future as there is way too much complexity still not understood.
[….arguably the most fundamental challenge with climate models lies in the coupling of two chaotic fluids – the ocean and the atmosphere. Weather has been characterized as being in a state of deterministic chaos, owing to the sensitivity of weather forecast models to small perturbations in initial conditions of the atmosphere. The source of the chaos is nonlinearities in the Navier-Stokes equations. A consequence of sensitivity to initial conditions is that beyond a certain time the system will no longer be predictable; for weather this predictability time scale is weeks. Climate model simulations are also sensitive to initial conditions (even in an average sense). Coupling a nonlinear, chaotic atmospheric model to a nonlinear, chaotic ocean model gives rise to something much more complex than the deterministic chaos of the weather model, particularly under conditions of transient forcing (such as the case for increasing concentrations of CO2). Coupled atmosphere/ocean modes of internal variability arise on timescales of weeks, years, decades, centuries and millennia. These coupled modes give rise to bifurcation, instability and chaos. How to characterize such phenomena arising from transient forcing of the coupled atmosphere/ocean system defies classification by current theories of nonlinear dynamical systems, particularly in situations involving transient changes of parameter values. Stainforth et al. (2007) refer to this situation as “pandemonium.”]
Point 2: This draft points out how a few CGM groups go back in time and attempt to explain the variations and rise of surface temperatures over the last 400 years though all these explanations are terribly weak and fraught with much ambiguity so much so the need to introduce the term ‘grand hiatus’ (ie moving the goal posts).
I glossed over all the IPCC AR# stuff simply because of this draft’s explanation of how weak (unconvincing) the performance of the GCMs have been to really be used to make global policy (though the IPCC has believed otherwise). Maybe some points worth mentioning may be the phenomenon of global greening and or the reduction in N.American tornado and hurricane activity NOT predicted by the CGMs (as these models predicted more with increased strength).
This is a great document otherwise.
My suggestion is not to take suggestions from non experts for an Expert Report. ahem.
Were I opposing counsel, evidence of that would make for an interesting line of questioning.
Pielke, yes, by all means Listen to Roger.
maybe for typos and occasional wordsmithing for clarity
Fortunately there are some clowns who are well known.
As a lawyer/businessman who spent years understanding AGW in order to write about it, I respectfully suggest that you consider greatly simplifying the last sections along the following lines. (First sections are fine, in my opinion.)
You previously established that the computational intractability of sufficiently high resolution grids to physically model processes like convection cells, clouds, and precipitation is solved by parameterization. Your figure 1 and discussion. What you did not say is the enormous magnitude of the problem. The NCAR rule of thumb is that doubling resolution requires 10x the computation. Today’s numerical weather models use grids of 2-4 km. That means the computational intractability of GCMs is 6-7 orders of magnitude. No amount of investment in better supercomputers will solve that in the next few decades.
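As a rough sanity check of that magnitude (using the rule of thumb quoted above; the ~100 km starting grid is my own assumed illustrative figure):

```python
import math

def compute_factor(current_km, target_km, cost_per_doubling=10.0):
    """Computational cost multiplier implied by the NCAR rule of thumb:
    each halving of the grid spacing costs ~10x the computation."""
    doublings = math.log2(current_km / target_km)
    return cost_per_doubling ** doublings

# From a ~100 km GCM grid down to a 2 km convection-resolving grid
print(f"{compute_factor(100.0, 2.0):.1e}")  # roughly 4e5, i.e. ~6 orders of magnitude
```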
The requisite unavoidable parameters are calibrated or tuned to best hindcast the past. For CMIP5 the experimental design was expressly from YE2005 back to 1975, three decades.
Now (Lindzen slide and point to be added), the warming from ~1920-1945 is essentially indistinguishable from the warming from ~1975-2000. (There has been little warming since 2000 except for the now rapidly cooling 2015 El Nino blip, which is natural and not GHE. Note further that ~1/3 of the total atmospheric CO2 increase since 1958 [Keeling curve period] was added after 2000 when there has been effectively no warming except the natural blip. Further ‘legal’ evidence that CO2 is not the control knob.) IPCC AR4 SPM fig 8.2 makes the clear point that the first period cannot be attributed to GHE; there simply was not enough change in GHG, especially CO2. It is some not causally understood natural variation. A strong ‘legal’ point. This clearly raises the attribution question. The parameter calibrations in both CMIP3 and CMIP5 expressly assume CO2 is the control knob and attribute the observed warming from ~1975 to 2000 to GHE. That is ‘legally’ wrong.
The fundamental ‘legal’ issue is, natural variation did not suddenly cease to exist in 1975. So the GCMs as tuned must be overly sensitive to CO2; that logical ‘legal’ conclusion explains the model/observation sensitivity discrepancies noted in the third section. It follows that there is unstated uncertainty as well as certain GHE warming bias in the models. It further follows that basing policy on 2100 model projections is unsound from legal prudence/negligent due diligence perspectives. There is no preponderance of actual observational evidence for primarily GHE attribution in the tuning period warming; it is merely assumed. That foundational assumption has just been demonstrated false using incontrovertible ‘facts’ about 20th century climate change and ‘legal’ reasoning.
Except models are not tuned for sensitivity to c02.
“Except models are not tuned for sensitivity to c02.”
Indeed so, they are not. The problem is much, much worse than that.
Please read my above explanation at
The conclusion of the explanation says,
“In summation, there are good reasons to suppose the climate models provide wrong indications of effects of changes to atmospheric GHG concentration and different models provide very different indications of precipitation resulting from changes to atmospheric GHG concentration because the models are of different climate systems so – at most – only one of them represents the climate system of the real Earth.”
Please read it because it is possible that (for once) you may learn something
Also please take note of my subsequent above response for angech which is at
that includes this clarification for him/her
“No and yes.
No, because the models examined by Kiehl and me do each use a unique value of aerosol cooling to compensate for the degree that each model is assessed to ‘run hot’. I again provide the link to Kiehl’s paper because his explanation is so clear.
Yes, because those adjustments of the models do not provide a complete match with useful accuracy to variations in historic data over time. So, temperature data are altered to make them provide a better fit with what the models say historic temperatures were! These alterations to past temperature data are a clear violation of the scientific principle that theory must agree with observations (n.b. not vice versa).”
“Please read it because it is possible that (for once) you may learn something”
I hope you’re not holding your breath, Richard!
Lawyers are familiar with the idea of hypothesis testing: “The defendant could not have committed the crime because he was somewhere else at the time”. Climate models are hypotheses in numerical form. Because they are not complete physical descriptions, they contain a number of fudge factors which must be tuned to known data. Then they can be tested against new data. It is essential that the tuning data set and the test data set are kept separate. When this is done properly, ALL the models fail such a test – none of them predicted future temperatures accurately, as can be seen for 2011 and 2012 in Figure 4. Until a model can be found which does actually work, all discussion of climate sensitivity is irrelevant because it is based on hypotheses which have already been disproved. I think most lawyers could handle that concept.
good point, thx
Except the models don’t fail.
“Except the models don’t fail.”
Really? They don’t?
In an above comment
“For example, the 2008 CCSP report provided indications of precipitation over the continental U.S. as projected’ by a Canadian GCM and a British GCM. Where one GCM ‘projected’ greatly increased precipitation (with probable increase to flooding) the other GCM ‘projected’ greatly reduced precipitation (with probable increase to droughts), and vice versa.”
One GCM projects drought and the other flood. It is certain that at least one of these projections is wrong because they are mutually contradictory, and there is no reason to suppose either projection is right.
Please explain your assertion that this is not failure of the models.
pretty simple richard
the OP wrote
“When this is done properly, ALL the models fail such a test – none of them predicted future temperatures accurately as can be seen for 2011 and 2012 in Figure 4”
In the most trivial sense all models fail.. because models never reproduce reality exactly. That is why we never require models to reproduce reality exactly. We require them to have more skill than a naive forecast.
If there was a specification (the model shall get temperature correct within 1 °C) then the model would fail validation. But there is no such specification.
further, some models do quite well, others not so well. So it’s really kind of silly to talk about them in BULK when they all differ.
Some do fine on precipitation others do fine on temperature.
The cross examination of this expert witness would prove quite interesting.
All models differ. “Judge, which model do you want me to use?”
Your attempt to pretend you were not wrong is silly.
1. You said, “Except the models don’t fail.”
2. I pointed out that I had already provided a clear example of model failure; viz. “One GCM projects drought and the other flood. It is certain that at least one of these projections is wrong because they are mutually contradictory, and there is no reason to suppose either projection is right.”
3. You have ignored that and asserted (wrongly) of the models,
“Some do fine on precipitation others do fine on temperature.”
4. Assuming your untrue assertion were correct, then models would fail either in their projections of precipitation or of temperature and there would be no method to determine which provide wrong projections of future temperature and which provide wrong projections of future precipitation.
I repeat my suggestion that you read my above explanation. It may enable you to make sensible comments.
Steven Mosher: “Except the models don’t fail.”
Utter twaddle, they’re not even a joke in bad taste.
Stop making stuff up.
John: “they are not complete physical descriptions” – I can add two easy examples here (there are more); law for lawyers.
– One of the most universal physical laws is the principle of energy minimum: a ball at the bottom of a potential hole, or the cooling of a hot stove in a cold room. Now the hot surface of the Earth cools two different ways: by turbulent (latent + sensible) heat release, and by radiative cooling. The most efficient (maximized) latent cooling maximizes the atmospheric water vapor content (greenhouse gas), leading to the narrowest (tightest) atmospheric window (lowest IR transparency). The most efficient radiative cooling of the surface into the space requires the widest atmospheric window (highest IR transparency). On our aqua-planet, there is a dynamic fluctuation around a sharp equilibrium between the two: the ratio of the turbulent/radiative cooling is predetermined by physical constants: sea flux/Planck-flux is a given value. Is this ratio built into the GCMs? If not, they cannot give a complete description of physical reality.
– From similar physical laws: when talking about 2xCO2, we are talking about increased atmospheric LW absorption. The annual global mean atmospheric LW absorption is double of the upward LW atmospheric emission, and is equal to the downward LW atmospheric emission plus one LWCRE (longwave cloud effect). Are these ratios built into the GCMs? If not, they cannot give a complete description of physical reality.
I agree with Mr. Reid’s point. We’d find that very interesting.
I’m a lawyer who was sent here by a friend. I’ve worked with people at the FAA who do risk assessment for launch failure, so I’m familiar with a lot of the terminology, but don’t know the weedy climate details. I haven’t read all the comments, so my apologies if I repeat others.
Lawyers need definitions of the following terms: resolution, parameters, parameterization, sensitivity, calibration, tuning. Because context helps, I wouldn’t create a list of definitions up top and rely solely on that. Instead, I’d give a little parenthetical definition at first use of each of these terms. There are a couple reasons for this. 1. it tells us what they mean, and I promise not all lawyers know what each of these means (also, I’m still, personally in the dark about parameterization), and 2. it signals the relevance. For an attorney, the import of these words is that they show the deficiencies in modelling. If you are writing for regulatory attorneys, the failure to account for a relevant fact or factor really matters as a matter of law. (Also, I like “accounts for” rather than “assume” or “consider.” The latter sound very subjective to an attorney’s ears, much like “confidence levels” which I’ve seen a court misunderstand.)
It was very helpful that you described what you are talking about. We need that. One of the reasons I’ve struggled to understand some folks over the years is because they assume I know what something is and start with a description of how it works. So, don’t drop the descriptions at the beginning.
p.s. I tried to put my real name in, but WordPress fought me.
thanks, i am working on a little glossary and some more inline definitions and context.
In general, this is pretty good Judith. There are a couple of places where the impression might be left that if we had adequate grid resolution, we could resolve all the processes in detail and get the “right” answer. Of course there is no guarantee of that result, as new results for LES are showing. That’s, however, a very minor point.
I spent most of my career presenting quite complex financial reports to lawyers. In my view this needs substantial revision. I would start by discussing the debate over the models – in a way asking the audience to be a jury. Then look at the evidence, which would include a test of reality versus prediction, the various versions of reality and prediction, and the construction of the models and their defects. I would simplify things into almost an executive summary and back up detail. I think you will lose the audience with this over thorough approach. Lawyers get bored easily but oddly like detail. So be prepared to dip into areas of detail, but don’t necessarily start at the beginning and wade through to the end. These are smart people but generally can’t do numbers so well. They like words and pictures. They can understand and dissect an argument. So present and explain the debate.
VERY GOOD POINTS
+100. Judith needs to provide the main points in an executive summary and leave the detailed footnotes for those who may be interested.
Well, methinks there are no ‘climate models’. Not a single one! First considered in 2013 at http://tinyurl.com/n7kvbff , more recently a year ago at http://tinyurl.com/qjxakew and http://tinyurl.com/zdbwujx . Happy to learn of any refutation of anything I collected there so that I could consider changing my opinion.
Write a one page or shorter summary with your most basic conclusions as though it had to be understood by 6th or 8th graders. (with footnotes or some other reference to the body of your report that supports the conclusions.) Ultimately, your report may come before judges. If they don’t understand it on a very basic level, you are wasting your time. Judges are generalists who have to understand, criminal law, numerous categories of civil law, constitutional law et cet. They have a lot of work to do. You have to keep a summary as simple as possible, and if the judges understand that, you have the body of your report to support your basic conclusions.
already done, thx
Agreed. If you understand your subject, you should be able to communicate it to a reasonably intelligent 12 year old. You should also be able to answer any questions from said 12 year old without being patronising or dismissive, or resorting to jargon.
My view, anyway.
MF I agree with your comment 100%.
Judith, I don’t think you sufficiently emphasized that CO2 is only one of a number of “greenhouse” gases.
Water vapour is far more important.
My understanding is that H2O concentration in the atmosphere is quite variable, also not modelled, not to mention huge cloud effects on albedo.
You did not point out that the top 700 meters of the oceans contains more than three orders of magnitude more “variable” enthalpy than the rest of the weather system put together.
You only touched on solar variability, and the possibility of large “amplification” factors.
You did however conclude that GCM models are not fit for purpose.
The sheer complexity of the subject would make this a difficult article to write for an unsophisticated audience!
Personally, I wonder if anomalous cold in Europe, and Eurasia might just start making CAGW advocates nervous even without considering a recent election result.
Thanks Judith, a great work summarizing the picture for scientists; I suspect lawyers will need a format of exec summary and main points, along with copious footnotes for the hard stuff. Unfortunately this doubles or trebles the work required.
My main criticism: your discussion of the likely role of natural variations is small. I strongly suspect that the next 20 years will be the time when science will unpick the contributions of natural variations – and we have to look longer than just decadal changes (El Nino/La Nina), for example ~65yr ocean oscillations (various pointers to the IPO and AMO exist in the literature, but an externally forced cycle should also be considered as a possible cause) or externally forced cycles, probably with an astronomical origin (e.g. the ~210yr deVries cycle, the ~2300yr Hallstatt cycle, and harmonics of ~1000yr which tend to align with the Roman-Medieval-current warmings). If these cycles are written into future GCMs by parameterization (since we don’t know the mechanisms) or by physical laws (when we do establish an understanding of causes and mechanisms), then the ECS will probably reduce to some 50-25% of its current value. Roy Spencer and Nicola Scafetta have already been there in peer-reviewed papers (as have several others), but their papers seem not to be widely quoted, for reasons we can only surmise.
I suggest an explicit statement of consequences of (say) a halved TCS would help. No proof either way at present, but spelling out the consequences of the uncertainties on our various trillion-dollar commitments would be quite timely.
The biggest problem with GCMs is lack of practical method for scientific validation.
Dr. Curry, I’d state upfront that GCMs were not adequate for their current use: Justifying political policies to fundamentally alter world social, economic and energy systems.
this is a good statement
Dr. Curry, I think many people will read the first paragraph or two and then skim the rest. To that point, I would say in the first sentence that the GCMs “attempt” to simulate. And not that they use “fundamental physical principles” ala ATTP, but perhaps that they try to apply fundamental principles in such a way as to model natural processes. Might even go on to state that you will discuss the choices and possible alternatives.
This dot point seems to be an incorrect representation of the use of GCMs (or at least of what they can justifiably be used for):
I suggest, it should say instead:
For calibrating the climate parameters needed in IAMs, so IAMs can be tuned to provide “Guidance for emissions reduction policies”
For the purpose of cohesiveness I would recommend that the sentence:
“There is growing evidence that climate models are warming too much and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC.”
Be changed to something like the following to clarify that observation work on sensitivity is not verifying that all warming is due to CO2.
“There is growing evidence that climate models are warming too much and that climate sensitivity to CO2, assuming that CO2 is the only control knob, is on the lower end of the range provided by the IPCC. “
Excellent article on GCM for non-scientists
this is a good essay, surprised this is the first time I’ve seen it
Judith, should Fig. 6 in the final draft say through 2015? It says 2014 in two places but just above the graph it says extended through 2015.
thx for catching this
A more recent essay is Frank, P., Negligence, Non-Science, and Consensus Climatology. Energy & Environment, 2015. 26(3): p. 391-416.
Posting by Frank here: https://wattsupwiththat.com/2015/05/20/do-climate-projections-have-any-physical-meaning/
I suggest this statement needs to be made more clearly and bluntly, and highlighted in the executive summary:
I have a lawyer in my family; my impression is that outside their narrow field of specialisation they depend on the established/recognised experts’ opinions. Since lawyers get most of their information about climate change from the media, where they hear such things as ‘scientists say’, ‘a majority of experts agree’, etc., it is going to be hard work to get through with a contrary view on a very complex subject.
Perhaps to start on a positive note, e.g. models do a great job in engineering, construction, aerospace and other industries, where most physical variables and their effects are well known (perhaps highlighting successes of the well known specific examples, creating an impression of familiarity).
In the areas of science where the effect of a variable is less certain, such as the medical and economic sciences, models often provide more or less fair but not exact solutions, but occasionally they do fail completely (a specific example or two of success, but more so of the known failures, in order to prepare the terrain for what is about to follow).
Climate science encompasses a large number of complex parameters to be considered, whereby the effects of some aren’t or can’t be precisely quantified. As a consequence, by the nature of things as they are, despite all the scientific expertise available, the climate models slide further down the scale of reliability (by now the failure of climate models may not be such a shock to the audience).
Etc …… etc….
Statements like these may be useful …
“When you read about law in the popular press you likely find it to be at best, incomplete and often just wrong. When I read about climate in the popular press I find it to be at best, incomplete and often just wrong.”
“Engineers use models to design aircraft but first flights carry two pilots and no one else. Further flights move gradually to the design limits of the plane and the results are sometimes surprising. And we know a lot more about aircraft models than we do about climate models.”
“How many models of the recent presidential election were right?”
“How many models predicted correctly the recent mortgage meltdown?”
1) the flu test that takes two weeks to get certain results.
2) the injunction that takes two years to process.
First flights do not always have two pilots ding dong
Further if the model says the plane will crash everyone believes the model.
GCMs basically say the earth will crash.
yet you’all want to proceed with flight test
Yep, Mr. Mosher. After the two test pilots successfully flew the airplane, GCMs predicted it wouldn’t fly.
You better hope there are non-CAGW Weed Patches people are willing to pay you to wander. President The Donald will stop the gravy train.
Did you read the report from Dr. Curry? She says to continue flight testing because models uncertain. That’s getting perilously close to d-word territory.
Dear Judith: Your updated final version still does not reflect the simple fact that fundamental physical constraints and boundary conditions are missing from the models, therefore recent GCMs cannot provide us with a valid description of the future of climate.
I’d say it’s too much into climate science without considering the role of mass delusion.
That any of this could be complete nonsense, in other words.
The revised paper is excellent. Much better with the lead in statements up front.
Looking at your PDF document, I’ll mention that
AFAIK, Microsoft Word, like all modern word processing programs, has ways to assure that a heading line doesn’t get separated from the first paragraph after it.
I’m not a scientist, I’m not even well educated, but I have spent 30 years in sales and marketing.
I have read most of the comments here (not all) whilst most make valid points on written detail and scientific detail, but I don’t think there is any mention of your audience other than being lawyers, and I suspect we all understand why you can’t comment any further.
The key to getting a message across is to understand your final audience, in this case, only you know who that ultimately is. Ideally, you want to understand their background; are they educated or not, are they a politician or a lawyer and what will be done with the document i.e. will it be used as the basis for a larger document or will it be read in isolation.
If it’s to be the preamble to, say, a meeting or presentation then it should leave lots of opportunity for questions as you want to engage and discuss with your audience to project your personality and knowledge. If you appear to give them all the answers in a short document, they will digest it, interpret it to their own agenda and possibly not bother with the meeting because they now ‘know it all’.
If it’s to form part of a larger report it should have a few (say 3 or 4) vital, easily quotable, short statements, so they can be cut and pasted into the document. You can be certain that a larger report will not include your entire text. If it does, it is likely to be consigned to an appendix. However, if the quotations from it stimulate interest, it may well be read.
Now, this is the important bit.
If this offers an opportunity to meet with the person it’s intended for, then your objective is to get the meeting, not to try to get across all the detail in one submission.
If your audience is well educated then they probably only need an executive summary, very brief, very concise, to assess whether they want/need to meet you.
And that’s the tough bit. You want to get all the information across but that’s impossible in a short article (and this is a short article). So provoke a meeting rather than leave your audience to their own devices.
I recently coached my daughter to apply for her first job. Her covering letter had to stand out amongst all the other (hundreds of) letters a busy HR department would receive. The objective was to pare the letter down to the bare essentials so, at the very least, it wouldn’t be consigned to the bin on first reading because it was too long.
It was short, very punchy and stimulated further inquiry within seconds of reading it. She got the interview because that was our objective.
It was based on many years of my own failed attempts to get the job, not the interview.
And if anyone has a position for a recently graduated Russel Group Zoologist, please let me know. She got the interview but not the job.
Nor did I find your submission difficult to understand (although the science is largely beyond me) or too long. The explanations of the graphs are very clear and made me wonder how I had missed some really obvious facts. If that will appeal to your audience I’m certain it is perfect.
And just to say thank you for the years you have spent maintaining this site. I have been a lurker because the guys on here are way more intelligent than me and don’t need my dumb questions.
It would be useful to have a ‘layman’s’ spin-off, particularly in light of recent political events. There are lots of people looking for answers now the whole AGW debate is about to get a massive kick up the backside.
And I’m just constantly knocked out by comments and articles written by Denizens who, when I check their ‘resumes’ here, express more concern for the immediate fate of the underdeveloped world’s poverty-stricken than they do about high-handed science. I can’t think of the world’s poor being acknowledged at all on alarmists’ sites, other than for daft explanations of how wind and solar will benefit them……how?
Thank you, Dr. Curry, for using my suggested wording verbatim. I am honored.
Dave Fair, formerly Charlie Skeptic
The pdf has formatting problems and a couple extra pages at the end.
If the GCMs can’t inform policy, what can? You should offer something from climate science that could help inform policy now. If you don’t believe there is any evidence or analysis from climate science that could possibly inform policy, you should say that. Otherwise, you offer the audience nothing but a request for more money, more research, more non-answers.
I think a comment on the changes to global temperatures and the different types of measurements might be helpful to the report.
If I was on the other side I would be using the GISS data and showing how they match models… and then hoping to hell you don’t have the same source from 2010, 2000, 1990 and 1980 on hand.
A glance at this will demonstrate your point
Thank you Judith. An invaluable resource. And probably very well timed, given the inevitable reviews of climate change policy associated with the incoming Trump administration.
1) pls add page numbers
2) p.9 instrinsic – spelling error
3) p.13. Heading 4. – grammatical error
4) final sentence – has a page break within the sentence.
Maybe too late.
A bit of white space between each dot point will add clarity and emphasis to
each point made.
The same as putting space between paragraphs.
You could give the lawyers this, from an expert.
” GCMs are only off by about 10%
the spaghetti graphs of 30 models are annoying and not pretty.
Basically GCMs do an astounding job with a rather complex system.
Its a miracle that they get the absolute temperature as close to the real thing as they do.”
There is a time frame which is rather important: 10% off in only 30 years could be > 100% off in 120 years, and since we are using them for long-term projections this should be acknowledged.
There is a number of models which is rather important, 300 or 3000 models would only be the start of a reasonable sample.
In medicine and maths you would be laughed at using such a small sample size
Since there are only 30 and the time frame is only 30+ years and they are all calibrated to match at the start it would be a miracle if they were not in the ballpark really.
Basically GCMs do an astounding con job on trying to deal with a rather complex system.
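The time-frame point can be made concrete with a back-of-the-envelope sketch. The trend numbers below are hypothetical, chosen only to illustrate how a bias in a simulated warming rate that looks modest over 30 years accumulates over a century-scale projection:

```python
# Hypothetical trend numbers for illustration only -- not real model output.
obs_trend = 0.15    # assumed observed warming, deg C per decade
model_trend = 0.20  # assumed modeled warming, deg C per decade

def projection_gap(years):
    """Accumulated model-minus-observation error after `years` of projection."""
    return (model_trend - obs_trend) * (years / 10)

for years in (30, 60, 120):
    print(f"{years:4d} yr: gap {projection_gap(years):.2f} deg C")
# The absolute gap quadruples from 0.15 C at 30 years to 0.60 C at 120 years,
# purely by accumulation: longer projections magnify any rate bias.
```

Whether the *relative* error also grows depends on how the bias arises; a constant rate bias as sketched here keeps the proportional error fixed even as the absolute gap widens, so a ">100%" figure would require errors that compound rather than merely accumulate.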
“There is a number of models which is rather important, 300 or 3000 models would only be the start of a reasonable sample.
In medicine and maths you would be laughed at using such a small sample size”
Sorry, but that is incorrect for two reasons.
Firstly, the output of each model must be demonstrated to approximate reality if the sample size is to have any validity.
Average wrong is wrong.
Secondly, the samples must be a random selection of examples of the same thing if they are to be averaged, but each climate model is of a different and unique climate system (this is explained in my above comment at https://judithcurry.com/2016/11/12/climate-models-for-lawyers/#comment-823407).
Average apples may have meaning, but averaging apples and oranges doesn’t.
the output of each model must be demonstrated to approximate reality if the sample size is to have any validity.
The validity in sample size is only a result of the sample size. Always.
The individual model does not have to approximate reality to be a model. It is only useful if the model does approximate reality, which of course cannot be known in advance. All samples can and must have outliers. Otherwise you end up with Cowtan and Way. Error is actually a way of measuring whether cheating or rigging of the sample is going on.
“each climate model is of a different and unique climate system”
Each climate model is of the one unique climate system. The earth’s climate.
In other words they can be considered as all “apples” if we call each model a model of the earth’s climate.
Each model uses different and unique parameters to evaluate the one object, the earth’s climate so they are all different apples.
Yes they are different but the extent to which they are useful is what is being averaged, so I respectfully, semantically, disagree.
Average apples do have meaning.
“Each climate model is of the one unique climate system. The earth’s climate.”
Please read my explanation above at https://judithcurry.com/2016/11/12/climate-models-for-lawyers/#comment-823407
“In summation, there are good reasons to suppose the climate models provide wrong indications of effects of changes to atmospheric GHG concentration and different models provide very different indications of precipitation resulting from changes to atmospheric GHG concentration because the models are of different climate systems so – at most – only one of them represents the climate system of the real Earth.”
I am surprised you now dispute this because in this thread you said you agree it at https://judithcurry.com/2016/11/12/climate-models-for-lawyers/#comment-823447
A model of a boat is not a model of a horse whether or not somebody thinks it is.
Call the climate models whatever you want, but they are what they are; i.e. individual emulations of different climate systems.
I repeat, average wrong is wrong.
And the validity of sample size is affected by the variance of the data, always.
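The "average wrong is wrong" argument is a standard statistical fact: averaging more ensemble members shrinks random member-to-member scatter, but cannot touch a bias the members share. A minimal simulation (with invented numbers, not real model output) illustrates it:

```python
import random

random.seed(0)
truth = 1.0        # the quantity being estimated (arbitrary units)
shared_bias = 0.5  # systematic error common to every ensemble member
noise_sd = 0.3     # random member-to-member scatter

def make_ensemble(n):
    """Each 'model' = truth + the common bias + its own random error."""
    return [truth + shared_bias + random.gauss(0, noise_sd) for _ in range(n)]

for n in (3, 30, 3000):
    members = make_ensemble(n)
    print(f"n={n:5d}: ensemble mean = {sum(members) / n:.3f} (truth = {truth})")
# As n grows, the mean converges to truth + shared_bias = 1.5, not to 1.0:
# no sample size, 30 or 3000, removes an error the members have in common.
```

This is why ensemble size and ensemble validity are separate questions: a bigger sample narrows the spread around whatever the members agree on, right or wrong.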
Judith, the final looks very good and significantly improved. I think one of the most important things is in the write-up, but I will highlight / summarize here … it may be a matter of emphasis and clarification to the lawyers. That is, the models are not predictive models – they cannot predict the future; for one, due to their focus on GHGs and lack of focus on natural causes; and due to lack of fidelity – the 50 or so models produce a wide, wide range of output, and which ones do the climate groupies and advocates (IPCC) pick or focus on? The models are not predictive models… therefore they can be used ONLY for scenario case comparisons, A vs. B. Because these models are tuned to GHGs and miss significant contribution and variability due to natural causes, they are truly only useful to look at what-if cases for changes in GHG effects. They are not useful for understanding future climate trajectories, only the sensitivity (case A vs. B) of the modeled output – global temperature – to hypothetical changes in GHGs. There is a significant logical fallacy in presenting the change in final output as anything other than a scenario / comparison study focused on GHGs/CO2. The models are not specified to include all known causes, only all known manmade causes. However, the advocates, policy makers and liberal mainstream media are either very clever in not informing on this fallacy, or too lacking in detailed understanding to grasp it. The answer of course is the former. The entire discussion on climate change is based on the assumption of mankind causing climate change – c.f. the 1992 preamble to the UNFCCC Rio protocol. I think this is a very important point to get across.
I write to support your very, very important point that
“The models are not predictive models…”
No model’s predictions should be trusted unless the model has demonstrated forecasting skill. But none of the climate models has existed for 20, 50 or 100 years so it is not possible to assess their predictive capability on the basis of their demonstrated forecasting skill; i.e. they have no demonstrated forecasting skill and, therefore, their predictions are unreliable. Put bluntly, predictions of the future provided by existing climate models have the same degree of demonstrated reliability as has the casting of chicken bones for predicting the future.
Not only have the models not shown skill for 20, 50 or 100 years, they have failed to show skill at anything under the scientific method. For example, they fail to predict the ratio trends of NH changes to SH, surface to lower troposphere or lower to upper troposphere.
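"Forecasting skill" has a precise meaning in the verification literature: a forecast shows skill only if it beats a naive reference such as climatology or persistence. A common mean-squared-error skill score, sketched here with invented numbers (none of these are real temperatures), is:

```python
def mse(pred, obs):
    """Mean squared error between paired forecasts and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(pred, ref, obs):
    """MSE skill score: 1 = perfect, 0 = no better than the naive
    reference, negative = worse than the naive reference."""
    return 1.0 - mse(pred, obs) / mse(ref, obs)

# Illustrative data only:
obs = [0.1, 0.3, 0.2, 0.5, 0.4]
forecast = [0.2, 0.2, 0.3, 0.4, 0.5]
climatology = [0.3] * len(obs)  # naive reference: the long-term mean

print(round(skill_score(forecast, climatology, obs), 2))  # -> 0.5
```

Until a model has been scored this way against out-of-sample observations, "skill" remains an assertion rather than a measurement, which is the commenters' point.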
Interesting stuff… I switched sides on the climate debate after finding out about the penguins http://bit.ly/2cf9aaO
So, I assume that now you are a climate skeptic. Good for you!
Or are you saying that you buy the charge that global warming is responsible for every iceberg, especially in Antarctica which has shown an “absence of regional warming since the late 1990s.” http://www.nature.com/nature/journal/v535/n7612/full/nature18645.html
The graphic on page two of the pdf has such a low resolution that it’s hard to make out the text even when enlarged. There is a better one here https://upload.wikimedia.org/wikipedia/commons/7/73/AtmosphericModelSchematic.png or here: http://celebrating200years.noaa.gov/breakthroughs/climate_model/modeling_schematic.html
If “GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems,” the obvious question is why, nevertheless, alarmists insist on doing just that. The alarmists say that those opposing them are bought and paid for by the energy industry. What motivates them? The pure love of truth? It might be useful to point out the degree of uncertainty a person might say still justifies immediate action given a projected cataclysm, and to point out various incentives that are involved.
Single word on final page.
Scientists that produce climate models tend to believe the results are accurate for a very simple reason: if the result of the model seems unbelievable to the scientists involved, they will change (tune) the model until it gives them a more believable result. Once the model delivers a result that the scientists find believable, they will stop changing the model.
What climate models excel at is mimicry. They mimic the belief system of the scientists involved as a means of perpetuating the current version of the code.
Really just a point on the meaning of words. Scenarios do predict the future, but strictly on the basis of projecting the stated assumptions forward in time.
In fact the RCPs used to create the scenario assumptions are, as described, “representative” of the range of concentration scenarios in the literature. In this sense it is possible to use the RCP scenarios via Bayesian analysis to give a pdf of the temps as inferred from the expert knowledge of those working in the field.
apologies this was in response to Danley Wolfe https://judithcurry.com/2016/11/12/climate-models-for-lawyers/#comment-824118 above
Several statements in this article reinforce the widely held impression that climate models can be used for policy analysis. But is this true? Can or should GCMs be used for policy analysis? Policy analysis requires an assessment of the economic benefits that are likely to be achieved (and the uncertainties) if the proposed policy is implemented and maintained to completion. The economic costs and benefits are needed to justify the policy. Without rational economic analysis the policy can only be justified (irrationally) by assumptions and scaremongering innuendo.
I feel you should state this.
Thanks Judith–I have a lecture on all this coming up soon and this helps immensely in keeping it up to date.
Zeke Hausfather, energy systems analyst and environmental economist at Berkeley Earth:
“ I certainly expect to be talking a lot more about geoengineering and overshoot scenarios now than I did a few days ago.”
They did not exist before but in the new age one adapts to Climate change.
Is an overshoot scenario alternative lingo for “GCM’s are wrong by 10%” ?
The updated report is a great read for anyone. I reblogged excerpts here:
I apologize for commenting so late. I think the point should be made that if one is trying to model Global Mean Surface Temperature (GMST) variations as a function of atmospheric GHG variations, a simple model derived from Conservation of Energy principles and validated with atmospheric GHG and aerosol data and GMST measurements over a long period of time is far superior to attempting to use much more complex and un-validated GCMs containing too many “fiddle factors” discussed in many previous comments. Using actual data lets Mother Nature tell us the net effects of many factors, such as clouds and feedbacks, that cannot currently be confidently modeled in GCMs. I would claim, and I expect a broad spectrum of US scientists and engineers from fields outside of the climate science community would agree, that the very consistent estimates of ECS and TCR from data-constrained peer-reviewed publications such as Lewis and Curry (2014) and Lewis (2016) are far more believable than the much higher estimates provided by GCMs.
The large claimed uncertainty in ECS results primarily from un-validated GCM model results, not from simple models validated and constrained by actual GHG and GMST data. Moreover, you didn’t mention TCR, which has much less uncertainty than ECS, and which I would propose is a much better metric that scientists could agree on for use in near-term public policy decision-making. Use of simple and rigorously derived models validated with actual data, and use of TCR for forecasts, would prove with little uncertainty that any GHG-related climate problems will develop slowly. Confidence that we should experience slow near-term development of any climate issues would provide the confidence that we can take the time to remove so much uncertainty from climate alarm. The proper focused research, over say a 5-year period going forward, with no funded studies using un-validated models, could remove the large uncertainty and produce a true scientific consensus for forecasts extending beyond the validity of TCR-based forecasts. Why has climate science not been able to remove its factor-of-3 uncertainty in ECS over 3 decades of study? Maybe such failure indicates a new approach to evaluating the AGW threat is needed. At NASA, policies governing safety-critical decisions for design or operations forbid use of un-validated models (see NASA-STD-7009). Therefore, use of un-validated GCMs for decision-making would automatically be ruled out in favor of simple models derived from first principles, but validated by actual data.
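The energy-budget method behind the papers cited (e.g. Lewis and Curry 2014) is itself simple enough to write down: TCR is estimated as the forcing for doubled CO2, scaled by the ratio of observed warming to observed forcing change between a base and a final period. The input values below are illustrative placeholders, not the published estimates:

```python
# Energy-budget estimate: TCR = F_2x * dT / dF
# (the form used by observation-based studies; inputs here are placeholders).
F_2x = 3.7      # W/m^2, canonical forcing for a doubling of CO2
delta_T = 0.8   # deg C, assumed GMST change between the two periods
delta_F = 2.2   # W/m^2, assumed forcing change over the same interval

TCR = F_2x * delta_T / delta_F
print(f"TCR ~ {TCR:.2f} deg C per doubling")  # ~1.35 with these inputs
```

ECS adds an ocean heat uptake term in the denominator (ECS = F_2x·ΔT/(ΔF − ΔQ)), which is where much of the additional uncertainty enters.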
Equally important for assessing the true atmospheric GHG threat is more focused research work on reasonable “business as usual” RCPs for the future. RCP8.5, as the only official “business as usual” scenario recognized by the climate science community, does not represent the best science our nation can apply to this problem. The Right Climate Stuff research team (www.TheRightClimateStuff.com) developed a “business as usual” RCP6.0 scenario that was constrained by official US government Energy Information Administration (EIA) documentation of current world-wide reserves of coal, oil and natural gas, and that indicates a transition to other forms of energy generation will have to begin at about 2060, with only 585 ppm CO2 in the atmosphere in 2100. This transition was estimated to be complete by 2150, with a maximum atmospheric CO2 concentration of 600 ppm. World-wide energy consumption forecasts for 2040, later published by Exxon Mobil and BP in 2015, provide a projected forward path for fossil fuel consumption that agrees very closely with our RCP6.0 projection for 2040. Exxon Mobil and BP use these forecasts for making decisions on the many $billions of capital investments they must make to meet energy demand. I submit that these companies, with big skin in the game, have more incentives to make accurate forecasts than academics in the climate science community.
Lower GHG climate sensitivity based on data observations, and better scientific work on reasonably conservative “business as usual” RCPs, will provide a much more certain forecast for AGW that is not alarming and that a broad spectrum of the US scientific and engineering community could support. These scientific issues should be decided by the broad US scientific community in an open debate governed by rules of The Scientific Method, not by lawyers and judges without the requisite training and experience to decide these issues for US public policy decisions.
A useful comment with which I agree. I’d only say in defense of the authors of the RCP scenarios that they didn’t see RCP 8.5 as “business as usual”, they described it as the conservative business as usual scenario, in the top 10% of the then scenario literature. Perhaps very unlikely?
Yes, but the IPCC AR5 report offered no other “business as usual” scenario and RCP8.5 is all I have seen the climate science community quote for business as usual projections. I don’t blame the authors of RCP8.5, I blame the IPCC AR5 Report authors for this very biased presentation of what policy makers should expect if they don’t enact CO2 emission controls and the academics who testify to our Congress that AR5 Report models running the RCP8.5 scenario is the GMST projection they should expect for 2100 if the US does not enact CO2 emission controls.
I went looking once and couldn’t find “business as usual” mentioned in AR5 WG1. WG2 discusses the origin of the term through the earlier IPCC reports. RCP 8.5 was simply acknowledging its genesis by back reference, adding the rider that it was conservative. So I think the misuse came after the IPCC reports.
“Lower GHG climate sensitivity based on data observations and better scientific work on reasonably conservative “business as usual” RCPs will provide a much more certain forecast for AGW that is not alarming and that a broad spectrum of the US scientific and engineering community could support.”
True, but this would put the kibosh on alarmism, and a bunch of alarmists would have to find something else to be alarmed about.
“These scientific issues should be decided by the broad US scientific community with in an open debate governed by rules of The Scientific Method, not lawyers and judges without the requisite training and experience to decide these issues for US public policy decisions.”
True, but activists have lost their case with the electorate, and are now focused on the judicial path for imposing anti-fossil fuel policies. The legal profession needs a deeper understanding of the field to avoid uninformed, knee-jerk liberal reactions (like the 3 Massachusetts judges who ruled CO2 a pollutant and subject to EPA regulation.)
Might be a good idea for ECS to define doubling of CO2 to be pre-industrial concentration. Without that, the assumption is present (400) to 800?
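A minimal sketch of the commenter’s point, using the widely cited logarithmic approximation for CO2 radiative forcing (dF = 5.35 ln(C/C0) W/m²). Under this approximation any doubling produces the same forcing, so the choice of baseline (280 ppm pre-industrial versus 400 ppm present) does not matter for a per-doubling definition of ECS:

```python
import math

def co2_forcing(c, c0):
    """Radiative forcing (W/m^2) from raising CO2 from c0 to c ppm,
    using the commonly cited logarithmic approximation dF = 5.35 * ln(c/c0)."""
    return 5.35 * math.log(c / c0)

# Any doubling yields the same forcing, so the ECS baseline is immaterial
# under this approximation:
pre_industrial = co2_forcing(560, 280)   # 280 -> 560 ppm
present_day    = co2_forcing(800, 400)   # 400 -> 800 ppm
print(round(pre_industrial, 2), round(present_day, 2))  # both ~3.71 W/m^2
```

The forcing per doubling is identical in both cases; what differs between baselines is only where you are on the curve, not the increment per doubling.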
I am sure EPA lawyers will have some questions for you after reading this Updated and Final version.
“Hence we don’t have a good understanding of the relative climate impacts of the above or their potential impacts on the evolution of the 21st century climate.”
Nice take-home message.
My take-home message: basing a trillion-plus dollar policy on what remains a will-o’-the-wisp science project is likely to get politicians out of office sooner rather than later.
I am humbly happy to note you adopted my remark. :-)
Dr. Curry, I am a lawyer and found your post very helpful, but I may not be representative, since I have written on this subject. http://wvlawreview.wvu.edu/files/d/fa4f5670-4682-4097-9736-6d42800c6a94/yaussy-corrected.pdf I have long hoped that the climate debate would end up in court, where cross-examination might help reveal untruths and half-truths. This looks like an expert witness report, and I think for that purpose it would be fine. A good attorney would help refine it for presentation to a jury or a judge, if it goes to trial.
Thanks for your voice of reason in the climate wars.
Great general audience article. One thing that might be missing is the notion of confounding factors as a discussion segment. You mention them in context but did not present them in their own segment. For the reader here, confounding factors are unidentified variables that affect both the independent and the dependent variables in a research design. For a simplified discussion of confounding variables see https://explorable.com/confounding-variables. In climate systems, confounding variables are around every turn and possibly in every cell of a GCM. The $1 million prize attached to solving the mathematics of this wickedly complex system is likely an undervalued amount.
But what if, 1) given a constant solar source, 2) impacted by various orbital mechanics, 3) traveling through a variable intrinsic atmospheric filter, and 4) with insulation that comes and goes regarding escaping heat, the oceanic battery is the source cause of cyclic stadials/interstadials? Oceans serve both to store and to release heat, sometimes at equilibrium and sometimes in net evaporation or net absorption. Wickedly, oceans are not just made of pure water; they vary in depth and incident location, and the insulation of each oceanic body of water is highly variable. Therefore there is no linear water temperature function that would work here. But there may be a cyclical function, due to the ocean’s tremendously large capacity to slowly store more heat than it releases and then, because it can’t store any more, switch rather suddenly to evaporation. Certainly paleo ice-core proxies record cyclical, sudden, steeply rising atmospheric warmth followed by a jagged stepping function back down. Since oceans do have the capacity to warm us or cool us, and no other paleo proxy points to some other reason for these significant swings, the first encountered pathology points to the oceans.
So let me wax crudely. Instead of trying to find tiny forcings that require amplification by some factor, scientists should instead first rule out friggen huge ones that need no such amplification.
In a presentation tailored to lawyers, I suggest mentioning the intentional misrepresentation of surface data (UHI, temps from locations that have no thermometers) and satellite data (NASA smoothing out the LIA and MWP). This is something they will understand immediately.
In a “presentation tailored to lawyers” – what the heck does that even mean? Lawyers are just people, with different backgrounds, life experiences, education, biases, methods of thinking, gullibility, etc.
How do lawyers think? Depends on the lawyer. In my experience (I am a constitutional and criminal defense lawyer and a chemical engineer (M.S. and B.S.) and a former translator and interrogator), I don’t think I think like most lawyers, or like most engineers, or like most anything else. And I am sure most lawyers think they are unique. Pity the poor fool who self-describes as a person who thinks like a lawyer.
If you want to generalize about lawyers, I would say that most have liberal arts backgrounds, a very high percentage are technophobic, and most want (or pretend) to believe that the law is usually correct and the legal system most often produces justice, and that the government is competent to do what it attempts to do. Which means that most lawyers are very gullible, or at least they pretend to be. Most litigators believe that courts should be used to obtain judgments – either civil or criminal – that are to be backed by the force of law – i.e., cops with guns. But generalization about lawyers, or any other class of people, is disrespectful as a rule.
So how are you going to decide on an appropriate presentation to explain climate models to lawyers? Which type of lawyer thinking are you trying to reach? That of litigators, transactional lawyers, academics, international lawyers, IP lawyers, federal trial judges, federal appellate judges, supreme court judges?
From the explanation you offered, you are simply presenting one viewpoint, with its own selective biases.
I don’t think your explanation is helpful, at all. Models attempt to simulate reality, they don’t reproduce it. Climate models can’t be tested in the real world. Too many variables, too many unknowns, too many hopelessly complex relationships. No way to run controlled experiments. No way to account for unknown complications. No way to account for human action.
Models may be useful for exercises in academic thinking, or for other non-coercive uses, but never to justify political action.
While this was a brilliant article on the mechanisms of global warming models by Ms. Curry…
I ask, what is the point? In the long term, I know of no multivariate model (in finance or marketing, let alone the sciences) that can be truly “accurate”; such models can only give very broad ranges for rough estimates. Chaos ensures that uncertainty over the long term. I could have made that point far more simply than Ms. Curry’s advanced treatise does.
Sadly, I felt the effect of Ms. Curry’s article was to focus solely on these mechanics, which served to BURY the casual reader in details, with the unfortunate consequence of obscuring very important insights into the risks of climate change and their physical consequences. I explain below.
Examples of what is missing:
#1 Most general readers are clueless that a 1 degree Celsius increase in global average temperatures is not the same thing as a 1 degree Celsius increase in the weather temperature:
The global average temperature has remained very consistent over the last 10,000 years; the difference between the Little Ice Age and today is only about 1-1.2 degrees Celsius. That makes our 0.7-0.8 degree Celsius increase to date huge.
#2 Per Ms. Curry, “If the ECS is less than 2oC, versus more than 4oC, then the conclusions regarding the causes of 20th century warming and the amount of 21st century warming are substantially different.”
What is missing is what even a lower-end warming of “1.5 degrees or even 2 degrees” translates into physically (let’s just talk about rising sea levels).
We do know that, together, the Antarctic and Greenland ice sheets contain more than 99 percent of the freshwater ice on Earth.
–If the Greenland Ice Sheet melted, scientists estimate that sea level would rise about 6 meters (20 feet).
— If the Antarctic Ice Sheet melted, sea level would rise by about 60 meters (200 feet).
What no one knows is how long this will take to melt.
It was previously thought it would take over a century, but more recent studies are seeing evidence that rapid Arctic melting is translating into faster melting of the Greenland ice sheet than was previously predicted.
This is very relevant to what 2 degrees Celsius translates into.
per Science Daily:
“Greenland might be especially vulnerable to melting because that area of Earth sees about 50 percent more warming than the global average. Arctic sea ice, when it exists, reflects the sun’s energy back through the atmosphere, but when the sea ice melts and there is open water, the water absorbs the sun’s energy and reradiates it back into the air as heat. Arctic sea ice coverage has decreased over the last few decades, and that decrease will probably continue in the future, leading to accelerated temperature rise over Greenland. Floating ice does not add to sea level, but the Greenland Ice Sheet rests on bedrock that is above sea level.
“Our analysis suggests that the benefits of reducing greenhouse gas emissions, in terms of avoided sea level rise from the Greenland Ice Sheet, may be greatest if emissions reductions begin before large temperature increases have been realized,” the researchers state in a recent issue of Climate Dynamics. Currently, about a billion people live in areas that would be flooded by a three-foot sea level rise.
“If we are going to do something to mitigate sea-level rise, we need to do it earlier rather than later,” said Applegate. “The longer we wait, the more rapidly the changes will take place and the more difficult it will be to change.”
The point: why not translate for us the RANGE of physical dangers that sea level rise from even a 2 degree increase in global warming means by this century?
Taken with #1, the casual reader will say — big deal, 2 degrees means nothing to me weather wise.
#3 Last, I felt all honest discussions of models must talk about the large chaotic role FEEDBACKS play on global warming.
No One knows what the impact will be. Basically all scientists admit this. I find it pretty amazing Ms. Curry says it cannot be worse than 2 degrees.
If long-term model outcomes are unknown, that uncertainty cuts in both directions.
While the direct warming associated with CO2 is logarithmic, meaning each doubling of CO2 leads to about the same increase in temperature, it is the INDIRECT effects of CO2 warming (i.e., the FEEDBACKS) that worry scientists the most.
–warming from CO2 means the atmosphere can hold more water vapor (itself another greenhouse gas); more moisture creates more powerful weather systems, including in some areas more rain AND more snow (clouds also shut out sunlight)
— the ice caps are melting; NASA and NOAA satellites confirm it. The ice acts to reflect sunlight back into space, so more heat will be absorbed by the Earth as the ice melts.
–underneath the ice are LARGE deposits of methane. Methane is a more powerful global warming gas, than CO2.
— the oceans are currently absorbing a large share (roughly a quarter to a third) of the additional CO2 from human causes. This is creating problems for corals and algae in higher latitudes now, and studies indicate it will creep into lower latitudes. In addition, there is concern that the oceans will saturate and start releasing more CO2 into the atmosphere.
–Then of course the additional CO2 is creating acidification of the ocean waters, and many coral species and other plankton life will die from this, affecting the food chain. And it is happening so rapidly, scientists worry if new species will have enough time to develop or not.
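The feedback amplification the commenter describes can be illustrated with the standard textbook relation dT = dT0 / (1 - f), where dT0 is the no-feedback (Planck) response per CO2 doubling, roughly 1.1-1.2 °C, and f is a net feedback factor. The specific values of f below are illustrative assumptions only, not estimates from any particular study; the point is that modest uncertainty in f produces a wide spread in equilibrium warming, spanning the “less than 2oC versus more than 4oC” ECS range quoted from Ms. Curry above:

```python
# Sketch of why feedbacks dominate climate-sensitivity uncertainty.
# The no-feedback (Planck) response to doubled CO2 is roughly 1.1-1.2 C;
# net feedbacks amplify it as dT = dT0 / (1 - f). The f values below are
# illustrative only.

def equilibrium_warming(dt0, f):
    """Warming per CO2 doubling given no-feedback response dt0 (deg C)
    and net feedback factor f (dimensionless, f < 1)."""
    return dt0 / (1.0 - f)

dt0 = 1.15  # approximate no-feedback response, deg C per doubling
for f in (0.0, 0.4, 0.6, 0.7):
    print(f, round(equilibrium_warming(dt0, f), 2))
```

A shift in f from 0.4 to 0.7 roughly doubles the equilibrium warming per doubling, which is why small disagreements about feedback strength translate into large disagreements about ECS.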
#4 In summary: Without providing this kind of background, Ms. Curry’s beautiful mathematical explanations are just fodder for ideologues to declare all our knowledge is effectively the “same” as knowing … nothing!
It is not the same as knowing nothing. The science gives us a flavor of the huge risks we are, or are not, willing to inflict on our only planet for future generations.
Sadly some people use religion/ideology… whatever to happily ignore that.
Dr. Curry – excellent analysis – thank you for creating it.
Spent 38 years in consumer goods processing, sometimes working with technologists trying to create statistical models for better process control. My favorite saying in those discussions was always: “All models are wrong. Some are useful.”
That saying often helped head off more time and money in search of perfect when the modeling effort to date was already helping us exert greater control. That greater control often made some other parameter become our new worst offender as the swamp of all losses was drained.