by Judith Curry
A new paper describing the latest version of the NCAR climate model has just been published at the Journal of Climate. This is the version of the model that is being used for the IPCC AR5.
The Community Climate System Model Version 4
Peter R. Gent, Gokhan Danabasoglu, Leo J. Donner, Marika M. Holland, Elizabeth C. Hunke, Steve R. Jayne, David M. Lawrence, Richard B. Neale, Philip J. Rasch, Mariana Vertenstein, Patrick H. Worley, Zong-Liang Yang, and Minghua Zhang
ABSTRACT
The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all CCSM components, and documents fully coupled pre-industrial control runs compared to the previous version, CCSM3. Using the standard atmosphere and land resolution of 1° results in the sea surface temperature biases in the major upwelling regions being comparable to the 1.4° resolution CCSM3. Two changes to the deep convection scheme in the atmosphere component result in CCSM4 producing El Niño/Southern Oscillation variability with a much more realistic frequency distribution than CCSM3, although the amplitude is too large compared to observations. They also improve the Madden-Julian Oscillation, and the frequency distribution of tropical precipitation. A new overflow parameterization in the ocean component leads to an improved simulation of the Gulf Stream path and the North Atlantic Ocean meridional overturning circulation. Changes to CCSM4 land component lead to a much improved annual cycle of water storage, especially in the tropics. The CCSM4 sea ice component uses much more realistic albedos than CCSM3, and for several reasons the Arctic sea ice concentration is improved in CCSM4. An ensemble of 20th century simulations produces a pretty good match to the observed September Arctic sea ice extent from 1979 to 2005. The CCSM4 ensemble mean increase in globally-averaged surface temperature between 1850 and 2005 is larger than the observed increase by about 0.4°C. This is consistent with the fact that CCSM4 does not include a representation of the indirect effects of aerosols, although other factors may come into play. The CCSM4 still has significant biases, such as the mean precipitation distribution in the tropical Pacific Ocean, too much low cloud in the Arctic, and the latitudinal distributions of short-wave and long-wave cloud forcings.
Journal of Climate 2011 ; e-View
doi: 10.1175/2011JCLI4083.1
http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4083.1
The solar output anomaly timeseries is described in Lean et al. (2005), and is added to the 1360.9 W m−2 used in the 1850 Control run. The CCSM4 volcanic activity is included by a timeseries of varying aerosol optical depths, exactly as in CCSM3 (Ammann et al. 2003). The CO2 and other greenhouse gases (methane and nitrous oxide) are specified as in the IPCC 3rd Assessment Report. Atmosphere aerosol burden (sulphate, organic carbon and sea salt), aerosol deposition (black carbon and dust) onto snow and nitrogen deposition also vary with time. The burdens and deposition rates were obtained from a 20th century run with the CCSM chemistry component active, that is forced with prescribed historical emissions (Lamarque et al. 2010).
All four components are finalized independently by the respective working groups using stand-alone runs, such as AMIP integrations and runs of the individual ocean, land or sea ice components forced by atmospheric observations. Once the components are coupled, then the only parameter settings that are usually allowed to change are the sea ice albedos and a single parameter in the atmosphere component. This is the relative humidity threshold above which low clouds are formed, and it is used to balance the coupled model at the TOA. A few 100 year coupled runs are required to find the best values for these parameters based on the Arctic sea ice thickness and a good TOA heat balance.
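For readers unfamiliar with this kind of balancing step, here is a minimal toy sketch of how a two-parameter search of the sort described above could look in principle. Everything in it is an assumption for illustration: `run_coupled_model` is a hypothetical cheap surrogate standing in for multi-century coupled runs, and the parameter values, target and weights are invented, not CCSM4's.

```python
# Toy illustration, not CCSM code: a two-parameter balancing step of the kind
# described in the excerpt above. run_coupled_model is a hypothetical cheap
# surrogate standing in for multi-century coupled runs.

def run_coupled_model(rh_threshold, ice_albedo):
    """Hypothetical surrogate returning (TOA imbalance in W/m^2,
    mean Arctic sea ice thickness in m) for the given parameter values."""
    toa_imbalance = 4.0 * (rh_threshold - 0.89) + 1.5 * (ice_albedo - 0.75)
    ice_thickness = 2.0 + 8.0 * (ice_albedo - 0.75) - 1.0 * (rh_threshold - 0.89)
    return toa_imbalance, ice_thickness

TARGET_THICKNESS = 2.0  # assumed target thickness (m), purely for illustration

best = None
for rh in (0.85, 0.87, 0.89, 0.91):          # candidate low-cloud RH thresholds
    for alb in (0.70, 0.73, 0.75, 0.78):     # candidate sea ice albedos
        toa, thick = run_coupled_model(rh, alb)
        # penalise TOA imbalance and the ice-thickness mismatch (arbitrary equal weights)
        cost = toa ** 2 + (thick - TARGET_THICKNESS) ** 2
        if best is None or cost < best[0]:
            best = (cost, rh, alb)

print("chosen RH threshold %.2f, sea ice albedo %.2f (cost %.4f)" % (best[1], best[2], best[0]))
```

The only point of the sketch is that the two knobs are chosen against a TOA-balance and ice-thickness criterion, not fitted to the temperature record, which is what the excerpt describes.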
There are several possibilities for the differences between the models and reality. Neither model includes the indirect effects of aerosols, which has cooled the earth somewhat over the 20th century. This implies that both models should warm faster than the observations, and the fact that CCSM3 did not do so suggests that probably the cooling effect of volcanoes is too strong in that model. Volcanoes are implemented in exactly the same way in both models, and Fig. 12 clearly shows the quite large temperature response to large eruptions that is not reflected in the observations. This could possibly be a problem with the temperature reconstruction, which has only sparse data in the early part of the record, and a temperature drop might show up better using data just over land. However, there are other possibilities for model errors, such as a poor representation of the direct effect of aerosols, or the climate sensitivity is incorrect. In addition, the heat uptake by the ocean may be too small, although Gent et al. (2006) show that CCSM3 heat uptake is larger than observations suggest, and uptake of chlorofluorocarbon-11 agrees well with observations. It is very difficult to say definitively which of these possibilities causes CCSM4 too large surface temperature increase over the 20th century. [JC comment: no mention of decadal scale variability, PDO, etc. in discussing discrepancies with observations; they still assume that everything can be explained by external forcing]
The transient climate sensitivity of the 1° version is 1.60°C compared to 1.50°C for the CCSM3 T85 version (Kiehl et al. 2006). The CCSM4 1° version equilibrium climate sensitivity due to a doubling of CO2 is 3.0 ± 0.1°C (Bitz et al. 2011 submitted to J. Climate), whereas the CCSM3 T85 version sensitivity is 2.7 ± 0.1°C (Kiehl et al. 2006). The reasons for this increase in CCSM4 equilibrium climate sensitivity are analysed in detail in Bitz et al. (2011 submitted to J. Climate).
The second conclusion is that CCSM4 still has significant biases that need to be worked on and improved. Figure 5 shows that the improvements to deep convection in CAM4 have not eliminated the double ITCZ problem, even in the 1° version. Gent et al. (2010) show that the 0.5° resolution version of CCSM3.5 also had a double ITCZ, so that just increasing atmosphere resolution may not eliminate the double ITCZ; further parameterization improvements are almost certainly required. The CCSM4 still has biases compared to observations in the latitudinal distribution of both the shortwave and longwave cloud forcing (not shown). Unfortunately, these biases do not get smaller when higher horizontal resolution of 0.5° is used in the atmosphere component, because the cloud distribution in CCSM4 is not sufficiently accurate compared to observations. There is still too much low cloud in the Arctic region, despite including the freeze-dry parameterization of Vavrus and Waliser (2008). Figure 11 shows room for improvement in the surface temperature over the continents, which has significant areas where the mean bias is > 2°C compared to the Willmott and Matsuura (2000) observations.
The third conclusion is that the missing indirect effects of aerosols in CCSM4 is very likely a major factor causing the larger increase in globally-averaged surface temperature over the 20th century than in observations shown in Fig. 12. However, there are other possibilities for this too large increase, such as a poor representation of the direct effect of aerosols, ocean heat uptake is too small, or the model climate sensitivity is too large. The absence of the aerosol indirect effects means that projections of future temperature rise due to increased CO2 and other greenhouse gases will be larger than if CCSM4 did include the aerosol indirect effects. These last two conclusions clearly point out the necessity of an improved atmosphere component that includes a better representation of cloud physics and aerosols that allows the feedback of the indirect effects of aerosols. A new version of CAM that includes these processes, and other improved parameterizations, has been under development for some time, and is ready to be incorporated into CCSM. Results using this new atmosphere component will be documented in the very near future.
@judithcurry why aren’t sensitivity studies conducted to assess the impact of these uncertainties on the attribution?
Of course sensitivity studies should be done. But if the models are extremely sensitive to inputs and run rather wildly out of control when tested that way, who wants to show that off?
Clearly something is missing if the model can’t account for existing data well. Are there historical measurements of global cloud cover and type?
It rather strains credulity to imagine that they HAVEN’T conducted such sensitivity studies – I can’t imagine anyone seriously interested in the subject not doing so. Reporting what they found, now that might be a different matter. One of the problems with models that isn’t often discussed is what happens to all the “dummy” runs? In a real experiment, as many modelers claim their models are, EVERY run would constitute data, not just the runs the modelers choose to talk about.
They are not interested in sensitivity, except as a problem to avoid. They are not doing science, rather they are doing engineering, just trying to get the thing to work, so they can analyze scenarios for the IPCC. See my longer comment below at http://judithcurry.com/2011/05/08/ncar-community-climate-system-model-version-4/#comment-67568
Thinking they are doing science instead of engineering is the biggest confusion with modeling.
Reading the abstract, my impression is that, rather than engineering, it is “cargo cult” pseudoscience…”Let’s hammer on longer boards to simulate the propellers better.”
Your response is not helpful. It is not even sensible, so you are wasting our time. This is one of the leading climate models. Intelligent analysis is what we are looking for.
“Clearly something is missing if the model can’t account for existing data well.”
This is my sticking point as well. If anything, it’s a REAL indication that your current model is, if not totally wrong, at least seriously lacking.
It’s great that they’ve released all the details mind, props for that.
isn’t there an overuse of “improved” and “better” where one would need “good” and hard figures?
also, if things are much better now, isn't that an indirect critique of AR4?
better in terms of physically sound parameterizations and tuning methods. Not uniformly better in terms of comparison with observations, but then that may be partly a forcing issue.
"The second conclusion is that CCSM4 still has significant biases that need to be worked on and improved."
Indeed. And since this model effectively reflects the whole AGW enterprise, this says a lot.
"Once the components are coupled, then the only parameter settings that are usually allowed to change are the sea ice albedos and a single parameter in the atmosphere component."
This curve fitting exercise pretty much guarantees that any model able to predict that past will have zero skill predicting the future. However, it will have skill in eventually matching the expectations of climate science, as per the experimenter-expectation effect. It won’t actually perform better than chance, but a consensus of scientists will agree that it matches their expectations for future climate.
‘However, it will have skill in eventually matching the expectations of climate science, as per the experimenter-expectation effect.’
Still beating that dead horse, eh ferd? I guess I should have expected it.
Time will tell which of us is flogging the dead horse. Based on observation, it would appear my horse is running quite well. The post-2000 projection of the model is a much closer fit to the expectations of mainstream climate scientists than it is to actual climate. Which matches what I've predicted: that climate models will do a much better job of predicting what climate scientists expect than they will of predicting future climate, because of poor experimental design. By allowing the model builders to tune the input parameters based upon the observed outputs, they are training the models to meet their expectations. Which is why the vast majority of climate models have not performed as well as chance post 2000. Even the simplest of numerical models should have caught the 60 year cycle in the temperature record, and would have except for the efforts of mainstream climate science to fit CO2 and aerosols to the data.
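For what it is worth, the kind of "simple numerical model" ferd alludes to can be sketched in a few lines: a least-squares fit of a linear trend plus a fixed 60-year sinusoid. The series below is invented, so the fitted numbers mean nothing; the sketch only shows the mechanics of such a fit.

```python
# Minimal sketch of a "simple numerical model": least-squares fit of a linear
# trend plus a 60-year sinusoid. The input series is invented here; it is NOT
# the observed temperature record.

import numpy as np

years = np.arange(1880, 2011)
t = years - years[0]

# Invented "temperature" series: trend + 60-year cycle + noise
rng = np.random.default_rng(1)
series = 0.006 * t + 0.12 * np.sin(2 * np.pi * t / 60.0) + 0.05 * rng.standard_normal(len(t))

# Design matrix: constant, linear trend, sin and cos at a fixed 60-year period
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t,
    np.sin(2 * np.pi * t / 60.0),
    np.cos(2 * np.pi * t / 60.0),
])
coefs, *_ = np.linalg.lstsq(X, series, rcond=None)

trend_per_decade = coefs[1] * 10
cycle_amplitude = np.hypot(coefs[2], coefs[3])
print(f"fitted trend: {trend_per_decade:.3f} C/decade, 60-yr cycle amplitude: {cycle_amplitude:.3f} C")
```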
“It won’t actually perform better than chance, but a consensus of scientists will agree that it matches their expectations for future climate.”
Yes they will. But, really, so what? Seems pretty simple to get a “consensus of scientists” to agree on anything with suitable incentives.
It was interesting to read that their team feels that they have made improvements on the hydrological cycle in some ways (water storage is what they explicitly say). As far as I understand it, and I am willing to be corrected, the NCAR model has lots of trouble reproducing the variation in the rates and amounts of precipitation over a given time period. That is, the model can basically only make it drizzle.
After hearing that from a former NCAR post-doc, I’m surprised that this fact (if it is a fact) was not mentioned as something to improve or a deficiency. Maybe I just missed the wording somewhere.
I also wonder when the models will start accounting for fluctuations in the spectral content of sunlight. Given last year’s fairly stupendous Nature paper on this topic, it seems like an important parameter for decadal to multi-decadal modeling.
So there is a problem in replicating the non-warming of the last decade, and I don't see anything in their toolbox which will be able to account for it. There has been no explosion in aerosol production and no significant volcanoes. As long as they keep treating ENSO, solar and other cycles as noise, they are lost trying to explain this through external forcings.
Unless they find a small, largely unknown, but extremely capable volcano somewhere…
There is always a way, I guess.
‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. Plausibility criteria are qualitative and loosely quantitative, because there are many relevant measures of plausibility that cannot all be specified or fit precisely.’ http://www.pnas.org/content/104/21/8709.full
A priori formulation must be judged as inherently implausible for aerosols, secular changes in cloud cover, changes in surface albedo, the Interdecadal Pacific Oscillation, solar UV drift – amongst the major parameterizations and couplings. Aerosols not simply from lack of reliable data on natural sources or atmospheric concentrations or with credible cloud feedbacks – but as well for interactions of black carbon and sulphides in which the sunlight reflected from sulphides in mixed aerosols is subsequently absorbed by black carbon.
A posteriori solution behavior involves a subjective selection of a ‘plausible’ solution from divergent solutions possible from plausibly formulated models as a result of sensitive dependence and structural instability inherent in AOS models.
The leap of faith is from exploring climate physics with models to predicting the future. In reality both climate and models are chaotic, and prediction is best done with systematic evaluation of model phase space and as a probability rather than as a ‘projection’.
Again and in reality – most people fail to appreciate the nature of AOS through a lack of understanding of dynamical complexity. But as long as this is not made explicit by all of the modellers – and they are well aware of non-linearity in AOS – it is a great intellectual dishonesty perpetrated by a cohort that has strayed from the path of science in pursuit of careers and funding.
“My main critique of this is the experimental design and interpretation of model results”
I have trouble with the use of the term experimental design here. Since when do mathematical models have anything to do with experiments? I mean other than that metaphorical, ‘we know it’s not really an experiment’ sense.
Experimental design is critical, in terms of what boundary conditions/forcing is used, how many simulations are conducted (varying initial conditions, parameters, etc.)
Judith,
I am used to creating experimental designs using the tool of Design of Experiments (DOE). I have wondered for a while whether this type of experimental design could be used in modeling, but have not brought it up with anyone here because the discussions never seemed appropriate. Your comment on the criticality of experimental design is an opportunity for me to ask, though I don’t know if you are familiar with DOE as described by Taguchi.
DOE requires the response variables to be measured accurately using a reliable method in order to gain any valuable information, but models only make predictions as outputs. Would this end up just being an exercise in GIGO if one were to use a DOE for climate modeling? Also, would it be practical to use DOE from a time-constraint perspective, because my limited understanding of climate modeling is that a single run (experiment) can take a long computational time to complete, and a DOE matrix of 8, 16, or 32 experiments, depending on the number of input variables used, is typical?
The idea of seeing how much the various input variables interact with one another and their main factor contributions to the system, i.e. the type of information one can get from statistical experimental design, seems like it would be potentially enlightening. Any ideas on this… and I guess I don’t leave this as a question for JC only… anyone else who may have an idea I would be interested in hearing from.
DOE with GCM is complicated by the huge parameter space and by the sheer time it takes to do a run. A typical GCM might have 30 parameters that need to be set. So, you simply don’t have the time to do a factorial design or a fractional factorial. One approach I’ve seen is to select a few parameters to vary. Then a statistical emulation is constructed from the results. This emulation is run to predict the output as if you ran the real model. Then the emulation can be checked. This is one of the ways they find bugs in the GCM.
“A typical GCM might have 30 parameters that need to be set. So, you simply dont have the time to do a factorial design or a fractional factorial.”
That’s what I thought.
“one approach i’ve seen is to select a few parameters to vary.”
To keep the number of experiments down to a manageable number, I do this frequently. It helps sort out the variables that are not strong contributors to the system and find the ones that are. I gather this is what the emulation runs do?
Emulations are used to fill in the gaps. So sample the parameter space. build a regression of that. make a prediction. Then test that by running the full model. do another regression. like so..
Wish I had a link to a paper. It was a kool thing I saw at AGU.
Oh, I’m pretty sure it’s the basis of some of the Met’s work on regional prediction.
maybe peter webster knows what the hell I’m trying to remember.. peter, you around?
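A minimal sketch of the emulate-then-verify workflow Mosher is describing, with `expensive_gcm` as a hypothetical cheap stand-in for the full model (a real run takes weeks of computer time, which is the whole point of building an emulator):

```python
# Sketch of the emulation workflow: sample a few parameters, fit a cheap
# statistical surrogate, use it to predict, then spot-check against the full
# model. expensive_gcm is a hypothetical stand-in, not a real GCM.

import numpy as np

rng = np.random.default_rng(0)

def expensive_gcm(params):
    """Hypothetical stand-in for a full GCM run: returns one scalar diagnostic."""
    rh, entrainment, albedo = params
    return 2.5 * rh - 1.2 * entrainment + 0.8 * albedo + 0.05 * rng.standard_normal()

# 1. Sample a small design over the chosen parameters (here: uniform random).
design = rng.uniform(0.0, 1.0, size=(20, 3))
response = np.array([expensive_gcm(p) for p in design])

# 2. Fit a simple linear-regression emulator.
X = np.column_stack([np.ones(len(design)), design])
coefs, *_ = np.linalg.lstsq(X, response, rcond=None)

# 3. Use the emulator to predict at an untried parameter setting...
new_point = np.array([0.5, 0.2, 0.9])
emulated = coefs[0] + coefs[1:] @ new_point

# 4. ...then verify by actually running the full model at that point.
actual = expensive_gcm(new_point)
print(f"emulator: {emulated:.3f}, full model: {actual:.3f}")
```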
Judith,
Do you really want to know what is completely wrong with this model and all models?
Scientists treat this planet like a round pizza, with no concern that this is a globe and that, through rotation, the parameters are quite different from the equator to the poles, because the biggest rotating mass is at the equator and the rest of the planet is smaller in size.
Also, the heights of the land mass are not taken into consideration, as the land mass sits higher in the atmosphere than the oceans.
Currently, I am finding that we have two magnetic fields coming from this planet: one at the core and a much weaker one from the planet’s surface, due to the density of these mass objects.
Given the temperature records etc. can the new model tell us what the atmospheric CO2 concentration was in 1850? Compared with
http://www.warwickhughes.com/icecore/
The model uses CO2 concentration as an INPUT.
GIYF
look for AR5 forcing data and you’ll find the figures you seek.
There is an interactive carbon cycle that will predict atmospheric CO2, but that is incorporated into the next version. As mentioned in the Gent et al paper, results will be released in the near future.
Does it postdict CO2 as well? BTW using the NSF URL does not work.
BTW there are major problems with Jaworowski’s papers at the site I quoted.
So major as to invalidate his conclusions.
Thanks Judith for raising pertinent questions.
Obviously, the science is not settled!
Is it possible that the models exaggerate the effect of human emission of CO2 on global mean temperature?
At least I am grateful for the acknowledgement: “the differences between the models and reality”
Climate etc bloggers, no more arguing with you on this point any more.
It is a fact: THE MODELS AND REALITY ARE DIFFERENT!
The models exaggerate the observed warming.
models and reality are ALWAYS different. You probably think that Planck’s constant is known exactly. More precisely:
the models of physics and the models of observation are always different at some fundamental level. Or, reality is a model.
In my engineering work, for simple problems with closed-form solutions, the difference between the theoretical and numerical differential equation solutions and the experimental results is ALWAYS less than 0.1%. And with mesh refinement, you can usually have a difference of 0.00% between the theoretical and the numerical solution.
When your theory matches the observations perfectly in every case, let us know. That way, we will know you are cheating.
steven mosher
You wrote:
That’s also true, Steven, “when your theory consistently gives higher values than the observations” (as appears to have been the case here). IOW we know they were “cheating”.
Right?
Max
I would add “significantly” to higher to get past Steve’s nit picking about “exact vs some difference”
It seems at least at the interface between the science and policy that the scientists shift from model results to the real world too easily. Surely when somebody predicts the future global temperature that is what is happening?
Is CCSM3, which is mentioned here as the precursor to CCSM4, the same as the COLA CCSM3 that is referred to here: http://iri.columbia.edu/climate/ENSO/currentinfo/SST_table.html. If so, I have gathered a data set which compares the model predictions of NINO3.4 temperature anomaly with the later measured values. There are 30 samples for each prediction lead between 2 and 10 months. Predictions for 3 months forward and beyond have an SD of error of 0.89, 0.94, 0.95, 1.01 degrees C. When the expected range of NINO3.4 temp is only about ±2 deg, this doesn’t seem to be a very good prediction.
The mean error of CCSM3 predictions is also considerable while many of the other models have quite small mean errors.
My own P5MM (PJB’s 5 minute model), which assumes that the NINO3.4 anomaly will reduce linearly to 0 degrees over the next 6 months and then remain at 0 forever, does considerably better. (Would have liked a bit more time to check these results.)
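For anyone who wants to reproduce the kind of comparison PJB describes, here is a minimal sketch: the standard deviation and mean of forecast error for some model versus the decay-to-zero "P5MM" baseline. The arrays are placeholders, not the actual IRI or NINO3.4 data.

```python
# Sketch of the comparison described above: SD and mean of forecast error for
# a model versus a "decay linearly to 0 over 6 months, then stay at 0" baseline.
# The arrays are placeholders, not the actual IRI/NINO3.4 data.

import numpy as np

def p5mm_forecast(current_anomaly, lead_months):
    """'5-minute model': linear decay to zero over 6 months, zero thereafter."""
    return current_anomaly * max(0.0, 1.0 - lead_months / 6.0)

# Placeholder data: observed NINO3.4 anomalies and some model's 3-month-lead forecasts.
observed      = np.array([ 1.2,  0.8,  0.3, -0.2, -0.6, -1.0, -0.9, -0.4])
model_3month  = np.array([ 1.5,  1.3,  0.9,  0.4, -0.1, -0.3, -0.2,  0.1])
initial_state = np.array([ 1.6,  1.2,  0.8,  0.3, -0.2, -0.6, -1.0, -0.9])  # anomaly at forecast time

baseline_3month = np.array([p5mm_forecast(a, 3) for a in initial_state])

def error_sd(forecast, obs):
    err = forecast - obs
    return err.std(ddof=1), err.mean()

m_sd, m_mean = error_sd(model_3month, observed)
b_sd, b_mean = error_sd(baseline_3month, observed)
print(f"model 3-month lead: SD of error {m_sd:.2f} C, mean error {m_mean:.2f} C")
print(f"P5MM  3-month lead: SD of error {b_sd:.2f} C, mean error {b_mean:.2f} C")
```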
no, the cola model is for seasonal to interannual predictions, a different model
“The CCSM4 sea ice component uses much more realistic albedos than CCSM3, and for several reasons the Arctic sea ice concentration is improved in CCSM4. An ensemble of 20th century simulations produces a pretty good match to the observed September Arctic sea ice extent from 1979 to 2005.”
They say the new model albedos are much more realistic, yet they only tout September Arctic sea ice extent as being a good match. What about the other 11 months of the year and Antarctic sea ice?
If sea ice affects albedo, and 11 months of the year and the southern hemisphere have a poor match to sea ice, doesn’t that indicate their “much more realistic albedos” are still a pretty poor representation?
There’s a special test of a good climate model.
This test separates the wheat from the chaff, the sheep from the goats, the genuine climate scientists from the wannabees who have wandered in from less-worthy disciplines like physics or mathematics. Or – perish the thought – people who have ever worked in the mining industry.
The test applies to the output of the model and it’s a simple binary test with the answer of true or false.
Is the output “worse than we thought” – yes or no ?
No
striking 0.4C overestimate of early 21st century temperature anomalies
http://bit.ly/gaA9kS
Is the output “worse than we thought” – yes or no ?
No: http://bit.ly/cIeBz0
Oh, so now the new line is:
“It’s actually much better than we thought.” (NCAR ver. 4)
Although I’m not sure exactly how best to determine the individual forcings, it appears that the total model climate response was similar to that simulated by utilizing only greenhouse gas forcings – see Simulations, whereas one might have expected the ghg response to have been significantly offset by other, negative forcings. Although total aerosol forcings were slightly negative, sulfate forcing appeared to be close to zero if I interpreted the figure correctly. These results suggest that the warm bias in the simulations may have been due mainly to failure to adequately simulate the full effects of aerosol negative forcings. This assumes that the individual simulations could simply have been added together, which is probably an oversimplification, but I would be interested in the perspective of Dr. Curry and of climate model designers on these points.
“These results suggest that the warm bias in the simulations may have been due mainly to failure to adequately simulate the full effects of aerosol negative forcings.”
Fred, what other negative forcings would you have also expected to be possible contributors?
Is it possible that the model is totally useless? After running the model and finding it misses “reality” or some statistic of it, by almost 50%, we are looking for extra model reasons to explain the difference. This is truly pathetic.
Getting my eyes checked tomorrow.
I just found myself agreeing with everything maxwell wrote, and almost everything Chief did.
Maybe it’s hayfever?
JC, do I have this correct?
Let’s see if I have this correct.
The model consists of 4 modules:
1. An ocean model
2. A land model
3. An atmosphere model
4. An ice model
These 4 models are coupled or integrated together.
"Once the components are coupled, then the only parameter settings that are usually allowed to change are the sea ice albedos and a single parameter in the atmosphere component. This is the relative humidity threshold above which low clouds are formed, and it is used to balance the coupled model at the TOA. A few 100 year coupled runs are required to find the best values for these parameters based on the Arctic sea ice thickness and a good TOA heat balance."
Then the models are TUNED. People should note that there are only two KNOBS that are turned. No curve fitting to the temperature, no curve fitting to the SENSITIVITY.
KNOB 1. "This is the relative humidity threshold above which low clouds are formed"
KNOB 2. sea ice albedos
So, there are no knobs turned in the land model and no knobs turned in the Ocean model.
The knob setting works like this. A few 100-year runs are completed.
Knob 1 is set to get the balance at the TOA set correctly. It’s not set to get the SSTs to match or to get the air temp to match. Knob 2 is set to get the ice thickness (not extent) to match.
Since balance at the TOA does not determine the exact trajectory of the global average temp, setting a knob to balance THAT doesn’t determine the temperature in a deterministic fashion. Since sea ice thickness does not determine sea ice extent, setting a knob for albedo does not force a fit with area or extent.
So the question is: why is knob 1 uncertain and why is knob 2 uncertain.
In other words, why are we forced to set these values by experiment with the model?
But, Steven, isn’t this the essence of the “It’s a boundary value problem” and “The imbalance at the TOA is the error signal that drives the system back to a (new) equilibrium state” guiding principles? And then it is critically important relative to energy content, and thus to the some-kind-of, temporal-range-not-specified average temperature.
Thanks for any corrections.
Also interesting that the balance at TOA is the metric for fitting the free parameters.
a little history (warning: big vid), ~ 600s in, assume TOA balance in 1850, unfortunately not much high quality data to verify this assumption
Is the TOA balance with pre-industrial forcings what is used as the cost-function for fitting the two parameters?
incorrecto; the components are tuned individually and then the two knobs you mention are tweaked for the coupled set.
Input forcings. Don’t forget these. They appear to be rather pliable based on need. Hansen was babbling on about 40% errors in aerosol forcings recently. Input forcings are best guesses, errr….estimates.
Anyone believing a message that these models are “simply physics” turned loose is naive. Turbulence is “simply physics”, orbital mechanics of more than 2 bodies is “simply physics”. Guess what? We don’t know the initial boundary conditions, and we aren’t going to find out. It’s not even clear we even know what initial conditions we really need to know.
There appears to be a built-in chaos factor that makes these systems inherently unpredictable to start with; the sooner this cat is let out of the bag, the better. The future of climate is “not knowable” kind of dilutes the political message though, doesn’t it?
As an absolute statement, this is trivial though; the question is to what accuracy you can predict the climate given its inherent unknowability (TM)?
Given the current quick divergence from reality, it is either very POOR, or the modelers are unfathomably unlucky regarding the chaotic components.
From a higher level perspective I see this drive to further complexity of the models directly at odds with my intuition. One should not further complexiate (TM) the models until adequate performance has been demonstrated at higher levels. We now have better resolution of very inaccurate models. Tax dollars not well spent.
Kudos to the paper for mentioning the obvious: “…or the model climate sensitivity is too large”. I was beginning to believe this was not allowed to be spoken out loud. Glad I was wrong about that.
Thanks, Professor Curry, for bringing us this news.
Yes, “NCAR is to be commended for the open access to its . . .” revised climate model, but lack of confidence in climate predictions by NCAR will probably not be restored by a revised climate model.
It is hard to take seriously revisions made by a scientific organization that first claimed predictions based on their former climate model were beyond debate.
Well, this is a bit puzzling: I would have thought that you know that better representation of aerosol processes in models is hindered by a lack of observational data that can constrain and validate key parameters. Besides ongoing observational fieldwork discussed in the current literature, the new Glory satellite was to have been a U.S. contribution to the collection of information that could study different types of aerosols (using polarization of light) for future inclusion of these sensitivity features in modeling. Not to be. :-(
As you suggest, current models can already calculate first-order aerosol climate feedbacks; but further development is obviously a focus not just for AR5 but at least the next decade, given the demands on resources.
“…given the striking 0.4C overestimate of early 21st century temperature anomalies, perhaps it is time for the climate modelers to start wondering whether their sensitivity is too high and whether the multi-decadal ocean oscillations are playing a dominant role in determining global average temperature anomalies”
Judith, one would have to be completely unable to understand the concept of a ‘trend’ and an ‘average’ of model simulations to buy your not-so-subtle rhetoric. Instead of more of the same, forever and ever, why not simply state that your primary policy concern for Americans is and always has been adaptation (especially for vulnerable coastal communities) and clarify/update that you used to support emissions reductions/mitigation in addition to adaptation, but now you don’t — and why. That or something similarly frank, could be refreshing.
No, “Martha”, it would be refreshing to read anything from you that, just once, would not be full of bile and hate. Perhaps, next time.
Martha.
Like Judith I too think it’s time that modelers consider that they may have the sensitivity too high and further that they may underestimate the amplitude and variability and coupling of oceanic cycles.
So, how are you going to take that scientific concern and twist it into something that it’s NOT in my case?
Steve
I may have misunderstood, but in a previous thread/comment of yours I thought you said that climate sensitivity was a model output rather than a parameter? Your latest remark seems to suggest otherwise. Thanks for any clarification.
Based on the description reproduced in the opening post it appears clear that the models are not tuned to agree with temperature trends, but the trends are an output of the model. It’s, however, obvious that the development of the models has in many ways been influenced by observed temperature trends over various historical periods. The final tuning is restricted to two parameters, but very many parameters and other details are adjusted during the development process to reach better agreement with various data sets.
The people developing the models have certainly also learned by experience how certain choices made on these other parameters and details will finally influence temperature trends and climate sensitivity. These are not really forced by the model from fundamental inputs that cannot be adjusted, but result from a combination of these fundamental inputs and subjective choices made by the model developers, who by now know roughly where these choices are going to lead.
All the above is unavoidable for a system that is as complex and as badly understood in many details, as the Earth system is. All projections of the models must be interpreted keeping this in mind. I cannot estimate at all the quantitative level of uncertainties, but the list of caveats presented by the modelers tells clearly that the issue is important.
This thread contains several comments on the role of decadal and multidecadal oscillations. The non-periodic climate shifts belong to the same class of issues. Based on the model description, the model is not capable of telling much, if anything, on these issues, which are generally recognized as important by the climate science community. That leaves two possibilities for reaching satisfactory agreement with empirical results on time scales for which reasonably detailed data exists. Either the models must be developed to describe these processes, or the unmodeled variations must be introduced as external influences. Both approaches appear to be far from a satisfactory solution. That means that we cannot test well the ability of models to describe the real dynamics of the Earth system over periods of decades (meaning 10-100 years leaving longer periods aside).
All the above conclusions are based on generic knowledge on modeling and information that I have been able to digest on the dynamics of climate and the whole Earth system. I would be very happy to read comments by climate modelers. Perhaps the situation is better than I have been led to think, and somebody can present good arguments to support that view. I know that there are other sites, like Isaac Held’s Blog, but so far I’m unaware of any site that would improve essentially my understanding. That may be too much to ask, as there may be too few people with requirements similar to mine (of a former theoretical physicist from another field with extensive modeling experience, but not involved at research level with atmospheric or other Earth sciences).
Pekka, it seems that version 3.0 was tuned to give the correct 20th century variability. Version 4.0 tuning did not do that; most of the tuning was done on the preindustrial simulations. So version 4.0 should avoid some of the circular reasoning that plagued the attribution of 20th century temperature (but there are still issues with the forcing data).
JC,
But if V4.0 is a modification of V3.0 then it means somewhere in its history it was tuned to the 20th century. From your description in the post I don’t get any sense that V4.0 does any better job at matching the 20th than V3.0, so the changes between the two versions don’t seem to impact this aspect of the output so much. I’m not completely convinced this point you make is strictly accurate or relevant unless they scrapped everything and started again with V4.0.
well, if you dig into the docs/diagnostics you’ll see that subsequent models actually fit some things worse than their parents; these improvements are not simple model fitting exercises
If it is going to cost trillions, why try to solve a now likely non-existent problem?
Better make sure before you spend trillions while the world has billions living in poverty now.
Martha – we are all aware of the problems of sulphate measurement or lack thereof. The problems of space based observations do not appear to start or end there.
But you completely miss the importance of decadal ocean variability as the source of ‘internal climate variability’.
This is a PNAS study that uses the PDO as a proxy for Pacific Ocean states. It follows a trend in peer-reviewed literature identifying this specific mechanism. ‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’ http://www.pnas.org/content/107/5/1833.short
The realisation of the potential for a continued lack of warming for another decade at least does not necessarily translate into a lack of support for a policy to reduce emissions.
‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’
Given sensitive dependence and dynamical complexity – caution is warranted. But I expect that the ‘science’ is a mere rationale for a limits to growth fanatic – and no alternative to a radical destruction of industrial society can be contemplated.
There are actually quite a few measurements of atmospheric aerosol, from satellites, in situ ground measurements, and aircraft campaigns. At this point, the main issue is the details of how aerosols interact in clouds, and then how to scale up this understanding to the coarse resolution of the climate models. There have been a number of major efforts over the past several decades to address these issues. Frankly, several of the major modeling groups are far ahead of NCAR in dealing with this issue.
This seems about typical – http://www.doas-bremen.de/paper/jgr_11_lee.pdf
Table 1. Annual Global Sulfur Emissions in the GEOS-Chem Model for the year 2006 (emission rate, Tg S yr−1):
Fossil fuel on land: 51.55
Ships: 4.72
Biomass burning: 1.22
Biofuel burning: 0.12
Aircraft: 0.07
Volcano: 6.55
Dimethyl sulfide (DMS): 21.05
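For scale, simply summing the table gives roughly 85 Tg S per year, of which volcanoes plus DMS are about a third; a quick sketch of that arithmetic (the natural-versus-other grouping below is my rough illustration, not the paper's):

```python
# Quick arithmetic on Table 1 above (Tg S per year). The natural vs. other
# split below is a rough illustrative grouping, not the paper's.

emissions = {
    "Fossil fuel on land": 51.55,
    "Ships": 4.72,
    "Biomass burning": 1.22,
    "Biofuel burning": 0.12,
    "Aircraft": 0.07,
    "Volcano": 6.55,
    "Dimethyl sulfide (DMS)": 21.05,
}

total = sum(emissions.values())
natural = emissions["Volcano"] + emissions["Dimethyl sulfide (DMS)"]

print(f"total: {total:.2f} Tg S/yr")
print(f"volcano + DMS: {natural:.2f} Tg S/yr ({100 * natural / total:.0f}% of total)")
```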
‘This study could benefit from future work in several areas. Differences between SCIAMACHY and OMI SO2 columns need to be reconciled. An a posteriori estimate that combines bottom-up and top-down emissions, weighted by their uncertainties, would better represent the true emission distribution. Inversion at higher spatial resolution should better account for spatial variance in the SO2 lifetime. A more sophisticated inversion could improve its accuracy, could better account for smearing, and could enable extension to open ocean. A better understanding of SO2 loss processes in winter is needed.’
‘Top-down emissions for China which has higher SO2 emissions are also highly spatially correlated (r = 0.90 for SCIAMACHY and r = 0.93 for OMI) with bottom-up emissions, but they are lower by 50% for SCIAMACHY and 30% for OMI, except for near Beijing where they are higher by 30%.’ http://www.doas-bremen.de/paper/jgr_11_lee.pdf
Is there a different problem that emerges especially in places where black carbon is well mixed in a high proportion with sulphates? What implications does this have for the effects of dimethyl sulphide over oceans – i.e. minimum black carbon? Do we know how the natural emissions change over time with less or greater upwelling of nutrients – with resultant impacts on phytoplankton populations?
‘Compiling all the data, we show that solar-absorption efficiency was positively correlated with the ratio of black carbon to sulphate. Furthermore, we show that fossil-fuel-dominated black-carbon plumes were approximately 100% more efficient warming agents than biomass-burning-dominated plumes. We suggest that climate-change-mitigation policies should aim at reducing fossil-fuel black-carbon emissions, together with the atmospheric ratio of black carbon to sulphate.’
That is before we get to cloud condensation nuclei.
Calculating just a “TREND” is not adequate. If the model has a bias at all times (always too hot for example), this means the radiative physics can’t be right. If the bias is that after 1970 (as in this case) the model warms more than the real globe, this either means climate sensitivity is too high, or internal cycles (oceans) are missed, or aerosols are missed, or something. That is, these errors are diagnostic. If a model has a bias (too hot, too wet), it can’t be used for agricultural forecasting, for example.
“If the model has a bias at all times (always too hot for example), this means the radiative physics can’t be right.”
And what about the thermodynamics? This touches a point I’d like to see discussed. If the ensemble of models all produce the same temperature trend in response to the same CO2 forcing, but each runs at a different absolute temperature, then doesn’t that mean that all of them, except perhaps the ones which run at the actual temperatures of the planet (which perforce must be thermodynamically correct), are thermodynamically flawed? If that is so, shouldn’t projections be received only from those models that are running at absolute temperatures found within the range of errors of the historical temperature record?
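The two diagnostics being discussed here, a constant offset in absolute temperature versus an error in the trend, are easy to separate; a small sketch with invented series (not model or observational data) shows the distinction:

```python
# Sketch separating the two diagnostics discussed above: a constant offset
# (mean bias in absolute temperature) versus a trend error. Invented numbers,
# purely for illustration.

import numpy as np

years = np.arange(1970, 2011)
observed = 14.0 + 0.015 * (years - 1970)          # pretend obs: 0.15 C/decade
model    = 15.1 + 0.025 * (years - 1970)          # pretend model: warm bias + faster trend

mean_bias   = (model - observed).mean()
obs_trend   = np.polyfit(years, observed, 1)[0] * 10   # C per decade
model_trend = np.polyfit(years, model, 1)[0] * 10

print(f"mean bias: {mean_bias:.2f} C")
print(f"trend (obs): {obs_trend:.3f} C/decade, trend (model): {model_trend:.3f} C/decade")
```

A model can have a small trend error and still carry a large absolute bias, or vice versa, which is why the two are worth reporting separately.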
"All four components are finalized independently by the respective working groups using stand-alone runs, such as AMIP integrations and runs of the individual ocean, land or sea ice components forced by atmospheric observations. Once the components are coupled, then the only parameter settings that are usually allowed to change are the sea ice albedos and a single parameter in the atmosphere component. This is the relative humidity threshold above which low clouds are formed, and it is used to balance the coupled model at the TOA. A few 100 year coupled runs are required to find the best values for these parameters based on the Arctic sea ice thickness and a good TOA heat balance"
This seems odd to me; I am very uncomfortable with the ‘tuning’ method as I can’t see how it can give anything but an unwarranted sense of security. Specifically, even if you do manage to ‘tune’ the results to match current and historical data, you have next to no confidence that the model will have predictive power.
The very fact that ‘tuning’ is required suggests that the fundamental aspects of the climate are not understood, or at least, not modelled well.
I find this whole approach to be very confusing.
Labmunkey- in a standard engineering situation once you’ve done your best and it still strikingly differs from reality you recognize your best is not good enough and go back to square one. That cannot happen here as they have the IPCC deadline to meet. So they proceed by “improvement” without having to admit that if you missed yesterday by a mile and today by 1600 meters then yes, you have improved, but it still ain’t any good.
Expect the AR5 to mention all these problems at page 537 of 3,489, with sadly no space for them in the SPM.
Yeah, that’s my whole issue in a nutshell (cue Austin Powers sketch..).
I’m from a cGMP background and have flirted with engineering through my job. I’d have abandoned these models ages ago (these specific ones, not to say I’d abandon modelling). Hell, my director would have FORCED me to after making this kind of progress.
I agree, ‘they’ seem to be in a situation where they cannot possibly be allowed to fail. You can refine garbage until the point of insanity, but in the end it still remains, as ever, garbage.
I keep thinking I MUST be missing something fundamental in the models or their validation/processes as I simply cannot accept that this whole field is so fundamentally borked.
I agree, although I don’t work directly in an engineering environment (I’m more cGMP), my work has involved engineering projects and collaborations and I’m familiar with the type of processes involved (in some respects cGMP science and engineering are very similar, at least procedurally so).
I just cannot ‘square this circle’ re: the climate models. You can adjust and improve a model all you like, but if it’s fundamentally junk, then even after the adjustments, it’s still liable to be junk. Or to use a popular phrase:
You can polish a [insert noun of choice] all you want, but in the end it is still a [insert same noun].
I keep coming back to the thought that I MUST be missing something. Surely, those in this particular subset of the climate field cannot be SO incompetent as to allow this sort of thing to be continually used.
damn – double post – for some reason the first didn’t appear in my browser, despite refreshing… hmm. Disregard either, as they both say roughly the same thing!
See my previous post for why tuning is required.
http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/
The way they did tuning this time around is much better than for version 3, which was explicitly tuned to give a good 20th century simulation.
Dr Curry,
I understand WHY tuning is required; however my sticking point is that it is required and (seems to me to be) so arbitrarily applied. Unless I’m understanding this wrong, the tuning is different depending on the model, betraying a fundamental knowledge gap.
I suppose it boils down to what the tuning is for; specifically, are they just ‘fine tuning’, i.e. the model answer is within 10%, so the ‘fine tune’ is required to tighten it up, in which case I’m all for it (provided it’s well documented), or are they, as is my understanding, using the tuning as a blunt instrument to bring inaccurate models into some sort of accuracy/agreement?
Tuning is indeed an important part of model development, however I have never been given the impression that this is how climate science uses it. I could be completely wrong on this mind, but in this case it would seem more work is required on getting the model to more accurately reflect reality in the FIRST place, rather than back-fit it in hindsight.
the problem is the complexity of linking four different sub models: atmosphere, ocean, land, sea ice. You can have each working “perfectly” in stand alone mode (i.e. with the other components as boundary conditions), but once you couple them, all sorts of nonlinear feedbacks kick in and the model needs to be retuned. NCAR only tuned the sea ice albedo (probably the melt pond parameterization) and low cloud microphysics, which are two areas of substantial uncertainty anyways.
I see.
Do the individual models work perfectly though? I.e. can you accurately predict/hindcast using these models if limiting them and the predictions to their discrete areas?
For example, do the land models accurately model land temps (I’m assuming that this is the main ‘target’ parameter due to the IPCC’s methodology) over time?
Further, if the individual models are working well, then that’s sufficient to make good predictions; however, if they’re not and the coupling is used in an attempt to improve the predictive aspect (be it hindcasting or future states) then I don’t see how you can argue that the individual aspects work, which then brings me back to my point on the adjustments.
Additionally, it is entirely possible that the tuning factors are masking a natural process that is NOT being modelled, again giving false confidence in the results.
Sorry if I’m being a simpleton here, I’m really trying to get my head around this!
The tuning might somehow make the model too “stiff” so that the multidecadal ocean oscillations are overly damped. However, it seems that the new ocean parameterizations should be better in this regard. Tuning Arctic sea ice or low level clouds wouldn’t affect this much.
I, personally, wouldn’t be too worried about damping; as long as the multidecadal oscillation trends ARE there you can justify the mismatch as a model by-product, just so long as it IS represented and it CAN be tied in.
I’m still getting hung up on the adjustments though; I could allow one adjustment factor, but two separate adjustment factors is surely akin to changing two variables in an experiment and hoping to get meaningful results.
I guess my worry is that regardless of how well they manage to adjust the models to give results that match recent trends, if the basic assumptions are wrong/off slightly (which would be suggested by the NEED for adjustments), then we have no guarantee that the future state predictions will be right, regardless of any success via hindcasting.
I can see the inherent difficulties here and I fully realise that they (the modellers) are wrestling with a particularly unruly multiheaded beast, but I was always trained to work from the bottom up (i.e. the model fundamentals) rather than from the top down (post model adjustments).
Perhaps my experience in more practical aspects is muddling my thought processes on this one; I just can’t get past these adjustments. I’m sure I’m missing something.
Thanks for your patience Dr Curry.
In addition to the tuning done once all the components are brought together, there are also resolution dependent parameter changes for some of the components (e.g. pp 179). I think this is part of the source for the “kludge” or “unprincipled” criticisms of GCMs.
It would be interesting to see a guest post with more details on this:
How do you know when you are tuning the parts of the models to work together that you are not building your own expectations into the model? History shows time and time again that the unconscious expectations of the experimenters work their way into the result, often in very subtle and unexpected fashion, unless appropriate controls are incorporated into the experimental design.
This is one of my concerns.
The fundamental question is:
Who decides when the tuning is “right”? This is clearly where human bias gets into the models.
One can easily imagine that a model written in the 1990’s that output a flattening of temperatures for a decade in the 2000’s would have been judged as having poor tuning.
This is where paths diverge between skeptics and warmists. One side believes an honest broker is involved in the judgment of the tuning, the other side does not.
As Fred says, unconscious expectations of the experimenters work their way into the result (akin to the Law of Expected Results) but this is not necessarily or even usually dishonest.
that’s ferd…
It is clear from the writeup that they are not doing science; they are doing engineering. This is the big confusion. They are not exploring or testing hypotheses, except at the detailed operational level. Uncertainty is irrelevant; they are trying to get the thing to work, using the specific set of mechanisms they selected long ago. The design is locked in.
If you look at it as engineering it all makes sense. They are trying to build a climate forecasting model, a direct analog to the weather forecasting models. They are building it for the IPCC, who wants to use it to test various emissions scenarios, from which flow various policy options.
The only scientific work here lies in the formulation of the detailed sub-process algorithms. Most of this is tweaking, which may or may not elucidate our understanding of the physical processes. They are not exploring the basic alternative hypotheses, nor the speculative hypotheses, or much of the science at all. They are not exploring uncertainty.
They are not testing anything for its own sake, for the sake of understanding. They are just trying to get the model to work. This is not science, it is engineering.
You obviously do not know or understand what engineers do. Any engineer worth his salt would be insulted by such comparisons.
I think an apology is in order to the people who built everything from pyramids to the International space station.
what springs to mind is not the ISS, but Apollo 1
They are not even that far along, by far. This is still applied research. An analog might be chem e’s trying to scale up from a bench process to a prototype. They are a long way from production, and may never get there. To pursue the space analog they are a Goddard rocket.
Actually Vukcevic, I am an engineer, in addition to being a scientist and philosopher. I am a registered professional civil engineer and have worked on some world class problems. I have also done research on what engineers do. So how precisely am I wrong?
..and philosopher.
Read your post again, and if still not clear read:
http://en.wikipedia.org/wiki/Engineering
You have ducked my question. Try a real answer. Here is my background: http://www.stemed.info/engineer_tackles_confusion.html
I did 40 years of practical engineering so I’ll put it as plainly as I can:
Engineers design things, they build things, they take things apart and put them together again, they measure and often test to destruction. Ultimately their ‘thing’ has to work. That is what engineers do. That has nothing in common with the climate models. Anyone can ‘engineer’ a repeated failure.
Well, the modelers are taking the climate apart and failing to put it back together again. It’s engineering, in the sandbox. And it’s as scientific as absently draining sand through your fingers.
=====================
vukcevik,
You are correct: engineers make things work.
Dr. Pielke, Sr. years ago pointed out that the models used in the climate work are engineering models, and are not modeling basic physics any more than an airplane or chemical processing model is basic physics.
My interpretation of what David is saying is that he is on the same track as Pielke on this.
My take is that you are actually saying something similar to this as well.
Engineers also do research. It is called applied science, and I have done quite a bit myself. Applied research has its own standards and that is what these folks are doing. The US basic research budget is about $60 billion but the applied research budget is $400 billion or more, mostly defense and NASA. Building weather models is applied research at NOAA.
Applied is bigger because they are actually trying to build things, as these modelers are. But they are hiding in the basic research budget, claiming to do science, which they are not. It would be very useful to get them into the applied research budget, where performance counts.
Nor have the climate models failed, much less repeatedly. In fact the modelers have claimed that these models are “pretty good” for at least 15 years. In applied research we have ways to measure performance, as with any engineering. This is what the climate models need.
vukcevic,
Why the antagonism?
Probably because engineers do honorable, useful work, and the IPCC scientists do not. They are practicing “cargo cult” pseudoscience.
Huffman, some of us are trying to have a serious, learned discussion here.
If the modelers were doing basic science there would be a bunch of different models, each exploring different hypotheses regarding mechanisms of climate change. For example, I remember a paper in JGR around 1993 that showed how a simple non-linear model of ocean upwelling could explain the 20th century temperature profile. Where is the model exploring that hypothesis, or other ocean oscillations and subsystems? Where is the indirect solar forcing model, or models? Or the Little Ice Age model? Or my personal favorite, the chaotic climate models, which push the chaos to see how much it can explain.
These scientific models would each try to see how much their respective hypotheses can explain. This is what research models do throughout science, they explore and test specific hypotheses. Instead we have a clone of a universal weather forecasting model, with a fixed basket of mechanisms. Tweaking these mechanisms is not basic research, it is applied research and engineering.
A humbling experience for any model builder is to apply Monte Carlo simulation to the model and assess the outcome. Monte Carlo simulation requires that the modeller identify the most important input variables, and apply known probability ranges to each input factor. The program (for example @Risk from Palisades) then runs, say, 1000 iterations, selecting from the input probability ranges for each factor.
Unless the situation is unusually clear, this analysis reveals the intrinsic uncertainty or lack of reliability of the model. Typically the result is expressed in terms of the spread of the outcome distribution. For example, in my experience, a financial model that gives an NPV with a 1 std deviation of 200% is clearly not a reliable model, whereas another that gives an NPV with a 1 std deviation of 10% is a model that can be relied upon. Modellers will tell you that the latter outcome is very rare.
It is surprising to me that discussions on models, such as this, can proceed without Monte Carlo simulation as a measure of model reliability being raised.
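For anyone who has not seen such an exercise, here is a minimal sketch of the kind of Monte Carlo run described above, using a toy financial model. The input factors, distributions and numbers are invented for illustration; they are not taken from any real model or from the @Risk workflow itself.

```python
# Minimal Monte Carlo sketch: propagate assumed input ranges through a toy
# NPV model and look at the spread of the outcomes.  All numbers are made up.
import numpy as np

rng = np.random.default_rng(42)
n_iter = 1000

# Hypothetical input factors with assumed probability distributions
discount_rate = rng.normal(loc=0.08, scale=0.02, size=n_iter)              # mean 8%, sd 2%
annual_cash   = rng.triangular(left=80, mode=100, right=130, size=n_iter)  # $k per year
capex         = rng.uniform(low=400, high=600, size=n_iter)                # $k up-front

years = np.arange(1, 11)  # 10-year horizon

# Toy "model": NPV of ten years of cash flows minus the up-front cost
npv = np.array([
    np.sum(c / (1.0 + r) ** years) - k
    for c, r, k in zip(annual_cash, discount_rate, capex)
])

print(f"mean NPV : {npv.mean():7.1f} $k")
print(f"std dev  : {npv.std():7.1f} $k ({100 * npv.std() / abs(npv.mean()):.0f}% of the mean)")
```

The same pattern applies whatever the model is: identify the important inputs, assign them distributions, run the iterations, and report the spread of the outputs rather than a single value.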
Climate models are large pieces of code. CCSM4 is nearly a million lines of code. They are expensive to run, and computer time is limited. Still, there are some efforts toward “uncertainty quantification” which will hopefully someday make the tuning procedure automatic and optimal. Example: http://journals.ametsoc.org/doi/full/10.1175/2008JCLI2112.1
First of all, the size of the program is no reason for ignoring its uncertainties as too hard to calculate. It is a reason for admitting them, or rather for admitting that they are unknown. Second, automating the tuner merely makes it invisible. The concept of uncertainty does not seem to be in play here.
I asked about Monte Carlo in another blog and was told that, other than with individual small modules, running the climate model just takes way too much processing power and time to practically run a plateful of Monte Carlo variances. Any truth in this?
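There is probably some truth to it. A back-of-envelope sketch shows the scale of the problem; every number below is an assumption picked for illustration, not a measured CCSM4 cost.

```python
# Back-of-envelope only: all figures are assumptions for illustration,
# not measured CCSM4 costs.
core_hours_per_sim_year = 2_000      # assumed cost of one simulated model year
years_per_run           = 150        # e.g. an 1850-2000 style historical run
ensemble_members        = 1_000      # a Monte Carlo-sized ensemble
machine_cores           = 100_000    # a large dedicated allocation

total_core_hours = core_hours_per_sim_year * years_per_run * ensemble_members
wall_clock_days  = total_core_hours / machine_cores / 24

print(f"total cost : {total_core_hours:.2e} core-hours")
print(f"wall clock : {wall_clock_days:.0f} days at full machine utilisation")
```

On assumptions of that order, a thousand-member ensemble ties up a very large machine for months, which is why uncertainty studies in practice tend to use much smaller perturbed-parameter ensembles or cheaper surrogate models.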
Judith,
The fallacy of climate models is the belief that all actions are from pole to pole.
Changes on this planet can take a few days to decades to materialize just from the equator to the pole. Add in cloud cover, and storms never cross the equatorial region because of planetary motion.
This then means that new gases or material in the air may take hundreds of years to go from pole to pole.
Oops, forgot to add in the density changes of warm and cold air gases with pressure in the atmosphere.
There is also a bias that most of the world measurements are from the northern hemisphere. This means a great deal of smoothing has to cover vastly more areas in the southern hemisphere.
Could someone with access to the article describe the dissipative parameters of this model? From a thermodynamic perspective, sensitivity and feedbacks are directly related to the rate of free energy loss and this free energy flux defines tropospheric dissipation.
The NCAR model has extensive online documentation. The atmosphere model has a pdf that describes the choice of governing equations, operator splitting and discretization schemes.
Thanks for the link to cam4_desc.pdf. A hasty perusal tends to support my skepticism, e.g:
3.1.5 A mass, momentum, and total energy conserving mapping algorithm
The frictional heating is a physical process that maintains the conservation of the total energy in a closed system.
4.6 Dry Adiabatic Adjustment
If there are any unstable layers in the top three model layers, the temperature is adjusted so that (4.166) is satisfied everywhere in the column.
4.11.1 Free atmosphere turbulent diffusivities
Since the lowest model level is always greater than 30 m in depth, lc is simply set to 30 m in CAM 4.0.
The atmospheric model is already in version 5. Thus the correct documentation is cam5_desc.pdf.
The version numbers of the full model and the atmospheric submodel are not the same.
well, the post is about CCSM4 (which ships with CAM4); the name of the entire system’s been subsequently changed to CESM, and CESM1 works with CAM4 or CAM5; so yes, the version numbers are not the same, and neither are the names : – )
Thanks for putting the matter right. I was misled by the following sentence from the CAM4 document:
“The CAM 4.0 provides a model to the CCSM Biogeochemistry Working Group (BGCWG) for interactive coupled biogeochemistry science requiring a significant period of time for spin-up activities. The final stage of this development cycle will be the release of CAM5 scheduled for June 1st, 2010.”
That made me think that the version must by now be outdated.
It seems like a contradiction to me that they would argue that the cooling effects of aerosols from volcanoes were probably overestimated and then go on later to argue the cooling effects from aerosols may have been underestimated thus explaining the lack of warming. Is there a free version of the paper out there for us tightwads? :)
steven,
Aerosols are the get out of jail card they acknowledge. Clouds are the factor they don’t seem to understand, and most acknowledge.
The idea that CO2 is a forcing but H2O is not seems to be as big a get out of jail card as aerosols.
I still wonder about the impact that having triple point in the atmosphere has on the system.
They argue that cooling from aerosols is underestimated because they do not use aerosols as input.
Craig, they don’t include the indirect effects. I am commenting under the assumption they also don’t include the indirect effects of volcanoes, but not having access to the paper I am limited to assuming, which we all know is dangerous.
The important volcanoes produce stratospheric dust which is long-lived because it is above cloud levels. It also has no indirect effect there because of the lack of clouds in the stratosphere.
Yes, and that which did go in the troposphere would be gone too fast to make much of a difference. I see the error of my ways, thanks.
Judith,
Have you ever wondered why lightning in storms is not pulled to the core?
How about 99% of the world’s lightning strikes being within 40 degrees north and south latitude of the equator?
These all have to do with a second magnetic field that is separate from the planet’s core.
Saturn’s rings are held at the equator the way the sun holds the planets at the sun’s equator. It is all the same process of magnetic fields and the repulsion to move and hold circular motion.
Flip a planet and the rotation is backwards. So are the magnetic fields.
These are interesting areas of study considering the vastly different sizes and densities of the planets that are in sequence with the sun’s rotation, with all the positive and negative alignments to the sun’s magnetic influence. Especially after 4.5 billion years, there should be a vast difference in rotational speeds if the sun did NOT influence these planets.
Joe,
Please post some links and citations.
I think there are better explanations about the lightning strikes than an unknown magnetic field.
Also check on Saturn again- the rings are gravity from everything I have read. If you have something showing a magnetic factor, please share it.
Hunter,
That is just it.
You will NOT find any links or citations on these factors.
I have many calculations and drawings that do not fit into current science criteria, so I will not look to publish until the system changes.
As for what you read on Saturn, how can gravity hold the rings exactly at the equator? Another theory? If it had anything to do with gravity, it would pull them towards the planet.
No, it is the magnetic field in a vacuum.
Judith Curry
You point out:
This observation is so obvious that it hurts.
But I’m afraid it lies “outside the paradigm box” (and hence in the “blind spot”) for the true “believers” in CAGW.
“2xCO2 = 3C” has become the paradigm or “dogma”, regardless of what the actual record has shown.
But, looking at the positive side, maybe the new model version 4 will provide better information than its predecessors (provided GIGO mistakes are avoided).
But I see that the emphasis here is still mostly on projected changes from the North Pole to the South Pole and global impacts, which really do not mean very much. And I suspect that the “impact” reports will concentrate on those that are negative (unless IPCC makes basic changes in its agenda, which appears unlikely).
You have touched on the point of regional “winner and losers” from global warming, but this is a subject that has not been researched very much (except, of course, the “losers” side, which has been the main thrust of IPCC reports and the studies cited there).
Der Spiegel had a chapter on this last year
http://www.spiegel.de/international/world/0,1518,686697-7,00.html
Let’s assume we are talking about an increase in GMTA of 1.5°C above today’s average. This is what we would expect by year 2100 if:
– Atmospheric CO2 continues to increase at a CAGR of 0.4% per year (to an estimated level of 556 ppmv)
– The IPCC estimate of 2xCO2 climate sensitivity of 3°C is correct*
– All other anthropogenic factors cancel one another out (as IPCC has assumed for the past)
– There are no changes resulting from natural factors (a BIG assumption – but also essentially what IPCC has assumed)
*Note: If the 2xCO2 sensitivity is only 1.5°C, then we would see this rise in 180 years, or by year 2190, instead of 2100 (all other things being equal).
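A quick arithmetic check of those assumptions is below; the only figure of my own added here is a present-day CO2 level of roughly 390 ppmv.

```python
# Check the numbers in the assumptions above (illustrative only):
# logarithmic CO2 forcing scaled by an assumed equilibrium sensitivity.
import math

c0         = 390.0     # assumed present-day CO2, ppmv
growth     = 0.004     # 0.4% per year CAGR
years_2100 = 89        # roughly 2011 to 2100

c_2100 = c0 * (1.0 + growth) ** years_2100
print(f"CO2 in 2100: {c_2100:.0f} ppmv")                 # ~556 ppmv

for sensitivity in (3.0, 1.5):                           # deg C per CO2 doubling
    dT = sensitivity * math.log(c_2100 / c0) / math.log(2.0)
    print(f"warming by 2100 at S = {sensitivity}: {dT:.1f} C")

# At S = 1.5 C, a 1.5 C rise needs a full doubling of CO2, which at 0.4%/yr
# takes ln(2)/ln(1.004) years, consistent with "by about 2190".
print(f"years to double CO2 at 0.4%/yr: {math.log(2.0) / math.log(1.0 + growth):.0f}")
```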
Obvious winners would be Canada, Russia, Mongolia, Northern China, Alaska and Scandinavia (longer growing season, higher crop yields, milder winters with fewer cold weather-related deaths).
Northern Europe (Germany, Poland, etc.) would also gain from a milder climate, but could face greater risk of floods.
The Mediterranean countries of Europe might have greater risk of droughts, as could the southern USA, Australia and South Africa (and possibly the temperate regions of South America).
The Sahara and Sahel regions might get more rain and become greener and more fertile (as they were in warmer times of the past).
Central African nations would apparently face no major changes, nor would the northern part of South America.
Several types of crops have been shown to grow more quickly with slightly increased atmospheric CO2 levels, so this could also be a plus.
It is apparent that more work needs to be done to explore the “winners and losers” from global warming.
Climate models have a pretty poor track record for simulating global climate changes. In addition to poor parameterization (of clouds, for example) they suffer from questionable input assumptions on natural forcing factors, feedbacks, etc. Maybe the new versions will be a bit better.
Not being a climate modeler, I can’t answer this question, but could climate models do a better job predicting local or regional changes (both “positive” and “negative”) resulting from a slightly warmer world with slightly higher atmospheric CO2 concentrations? [I know that the UK MetOffice record has been dismal in this regard with their “BBQ summers”, “milder-than-normal snow-free winters”, etc., but was this due to the models or over-eager prognosticators trying to sell a story?]
“Winners and losers” might be an interesting thread to run, if you can find some worthwhile articles on this subject as a catalyst.
Just a thought.
Max
Your attitude of ‘winners and losers’ is very telling.
e.g. Canada. Canada is a Northern country with northern indigenous peoples experiencing melting permafrost – on which all infrastructure is built. Both your ignorance and racism are on display. Try some ‘articles’ from a national Inuit association. The sooner the better. Don’t wait for a thread, to educate yourself about people around the world.
If climate and biological niches did not evolve, where would we be?
========
Martha,
In what way is his comment “racist”? He seems to have failed to take the Inuit into account when putting together his assessment, but I don’t see any indication of malice behind it.
Martha….
Could you please point out the “racism” in Manacker’s comment above? I missed it and read it over twice. Are you seeing something I’m not? Also, if all CO2 is stopped now, right now, how long will it take to get back to normal (please define your normal) conditions? I am just interested in what it would take to reverse the awful conditions we have now and go back to permanent ice everywhere.
Martha,
Why don’t you link the article(s) that you’d like us to read?
Andrew
What is ‘telling” is Martha’s continual disregard for actual facts and data and her propensity to make unsupported dramatic claims.
Max made a reasonably accurate assessment of nations which would likely generally benefit (be winners) as a result of potential climate change vs. those nations which might generally expect negative consequences (losers) as a result of climate change.
Max’s evaluation at this point is doomed to not be very accurate as climate models do not have the ability to forecast reliably at a regional level. Max did not state that all people in these countries would benefit, but that the country would be a “winner” overall.
Martha throws out the “prejudice” term to be dramatic, but her use is inaccurate in this case as Max is not acting in a prejudicial manner in his summary.
Rob –
What is ‘telling” is Martha’s continual disregard for actual facts and data and her propensity to make unsupported dramatic claims.
What is more “telling” is that she used the word “racism” so easily. When used derogatorily it nearly always indicates a high level of racism on the part of the one who uses it. I’ve found the most racist persons to be those who accuse others of it without specific and verifiable reason.
Progressives come from a long line of racism, from fighting against emancipation in the Civil War, to instituting Jim Crow, to turning fire hoses, dogs, and nooses on civil rights demonstrators. The latest form of racism is to portray minorities as permanent victims, unable to survive without the white progressives’ form of government support. From permanent affirmative action, to excusing barbarity from certain cultures (disguised as multi-culturalism), to the bizarre suggestion that “indigenous peoples experiencing melting permafrost – on which all infrastructure is built” are incapable of adapting, progressives’ inherent belief in the inferiority of others is projected on to those who do not share it.
It’s not surprising though. Racists always think everyone else is racist too. Reminds me of Supreme Court Justice Ruth Bader Ginsburg being quoted in the New York Times Magazine as saying: “Frankly I had thought that at the time [Roe v. Wade] was decided, there was concern about population growth and particularly growth in populations that we don’t want to have too many of.”
I’m guessing Martha does not consider Inuits “incapable of adapting” – I’m sure she’s aware of the rapid spread of snow-mobiles and cell phones – but that doesn’t mean they want their ecosystems transformed in their lifetimes.
It also does not make it racist to go against what a minority wants if the alternative is better for society overall.
I’m guessing that Martha thinks that everybody, Inuits included, needs the government to solve their problems. Her racism is independent of her lust for centralization. But it’s still there. Her reflexive projection of racism on those who disagree with her, and her seeing non-whites as victims first, are just evidence of it.
Oh, and Paul, I’m wondering why you had nothing to say about Martha’s unsupported accusation of racism against Manacker, but felt the need to come to the defense of Martha?
Martha,
Well, I did as you suggested and read the material from the Inuit Tapiriit Kanatami (ITK), the national Inuit organization you mention. They do not seem at all to be a helpless bunch of aboriginals, with the ground melting under them. That organization appears to be a very capable and forward-thinking political agency.
http://lawinquebec.wordpress.com/2011/01/10/inuit-using-land-claims-agreements-to-address-the-environmental-challenges/
“As underscored by a report penned by Inuit Tapiriit Kanatami, a national Inuit organization in Canada, the Inuit are “going to have to find new ways to make a living from the land, and whatever form that takes, it will not be what Inuit would have wished for, it will not be ideal.” Land claims agreements, however, can be the “building blocks” to develop and implement strategies to address the challenges posed by climate change”
Nope, not a helpless bunch of abo’s, those folks. They are people with a plan. Do not underestimate them.
Gary –
Nope, not a helpless bunch of abo’s, those folks. They are people with a plan. Do not underestimate them.
Ten years ago I was offered a job by one of the several Alaskan Native American corporations. They had opened a branch at Goddard Space Flight Center and had started expanding into the “space” business – and very successfully, at that. Afterward, I found that their operations bore a great resemblance to an octopus – many branches, many different business areas. And if they were all run half as competently as the one at GSFC, I’d suggest that Martha has her head buried in that melting permafrost. As you said – “Do not underestimate them.”
No, I didn’t take the job, but I got to deal with them for the next 5 years. They were one of the better parts of what was a very good job (as such things go).
Martha
Sorry, I did not realize that you are a disgruntled Inuit, and I certainly did not intend to offend you (or anyone else). As a matter of fact, the data I cited did not single out the Inuit as “winners” or “losers”.
I simply listed the geographical regions that would likely be winners and losers with a modest warming of 1.5°C, as one might expect late this century or in the next century, based on the very sketchy information that exists on this subject to date.
You write:
Since I have no racism, it is more likely that your “ignorance is on display”. But let’s leave the “ad homs” in the garbage can, where they belong.
The territories listed as likely to be “winners” contain a wide variety of ethnic groups (my estimates in millions, based on various sources):
110 Han Chinese
51 Mongols, Manchus, Uighurs, Kazakhs, Turkic groups, other non-European
207 European origin (Scandinavia, Russia, Canada, etc.)
So roughly 44% of the total is not of European ethnicity.
In addition to Canada, it is likely that some of the northern states of the USA (Minnesota, the Dakotas, Wisconsin, etc.) might benefit from a slightly warmer climate, but these have not been included.
I think it would be very helpful for a serious study to be made of the “winners and losers”, but so far I have seen nothing very convincing.
Have you?
Max
A huge chunk of Canada is NOT underlain by permafrost and would not “collapse”. Melting permafrost will be a temporary problem for structures built there, but not necessarily a long-term problem.
Just suppose for a minute that after all the research and modeling and actual experience of the next decade or few decades, it is determined that, contrary to the alarmists, the global warming or climate change that has occurred, to whatever extent, is “natural” and not human-caused.
What then are the Inuits or any other adversely-affected population supposed to do? Human history says “adapt.” It sounds as though the Inuits are already working on that.
Then suppose instead that it doesn’t become clear in the next decades or century whether what is happening to the various climates around the world is natural or human-caused. Again, what will affected populations need to do? Adaptation seems a better solution in the face of uncertainty than taking actions that may not solve the climate problem, but which will cause other, more serious problems.
I know that the catastrophic, anthropogenic global warming crowd will claim “tipping points” and such so that their proposed de-industrialization solutions are hailed as the “only” solution. But I suggest that adaptation is the more logical solution unless there is so much certainty for CAGW that every human being sees no other alternative. Humans are used to adapting. Unfortunately some human beings are also used to deciding that they are qualified to make decisions for every other human being, and not at all ashamed of using whatever means necessary to carry out their grandiose plans for all.
Martha, you are a troll. Read more about yourself here: http://chickgeekgames.blogspot.com/2011/04/know-how-internet-trolls-and-how-to.html.
Max
‘Obvious’ is a funny word.
It apparently means, “overlooking all the downsides,” when you use it.
I think the term “Pollyannaish” is more conventional.
Look at the temperature range of Edmonton, Alberta. Over 84C from -46C to 38C, it’s unlikely to ‘benefit’ from ‘only’ -44.5C as a trade-off for hitting 39.5C. Granted, these are record, not typical Edmonton ranges, but is the prediction for more moderate weather or more extreme weather?
Heat wave deaths, smog deaths related to high temperature days, extended pollen-allergy seasons, these do not spell so much winning as #WINNING.
And what makes Canada, Russia, Mongolia, Northern China, Alaska and Scandinavia less susceptible to floods than Poland in this #WINNING future? Canada’s had widespread record floods this spring, across an area comparable to half of Europe, from the prairies to Quebec. The others, too, have had plentiful reports of extreme — though ‘unattributable’ — weather events recently.
And growing seasons; if you accept warming is linked to AGW, then you accept the growing seasons are increasing in length (overall a good thing), but the last frost of spring and first frost of autumn are generally not moving as rapidly, leaving more likelihood of killer frost ruining plantings and harvests, on top of the increased risk of extremes of precipitation and plant heat death, and conditions generally more favorable to pests.
And, though Martha simplifies the permafrost issue greatly, as well as oversimplifies Inuit attitudes toward AGW so far as I can tell, loss of permafrost means (probably dramatic) increase in methane emission, and much poorer and riskier overland transport across the subarctic and arctic.
Perhaps there are deserts that will stop expanding as it gets warmer, or perhaps they will become deserts surrounded by hotter, less hospitable, but slightly greener semi-desert margins.
Let’s visit “an estimated level of 556 ppmv”.
Growers are cautioned not to let their greenhouses go above 500 ppmv CO2 for planting many valuable seedlings. Tomatoes are one example. Perhaps some geneticist will perfect a CO2-tolerant GMO by 2100. Perhaps tomatoes will simply cease to germinate. Do we know? No.
CO2 is a powerful hormone-suppressant or hormone-analog in plants. Would you promote administering uncontrolled levels of steroids to schoolchildren? Would you be surprised if some parents objected?
Where does anyone derive the consent to go forward with such a plan, to continue on such a course?
Where do they derive the arrogance?
CO2 is a powerful hormone-suppressant or hormone-analog in plants.
Reference?
Would you promote administering uncontrolled levels of steroids to schoolchildren? Would you be surprised if some parents objected?
You’re presenting a “false choice”. And in any case, your “horror” scenario has been played out in the schools for years, being promoted and administered by people of your mindset. Does the word “Ritalin” mean anything to you?
Jim Owen
My mindset?
I have a mindset?
Please, pray tell, enlighten me.
What’s my mindset?
References below, btw. Not exhaustive on the subject, but I’ve done this subject elsethread with more references, and it’s only a Google away.
Though I’m not sure where ‘false choice’ comes in. Am I presenting a choice at all?
Yes, Bart – you have a mindset. Go read your own stuff – it comes through loud and clear.
Your references are insufficient to support your contentions. AND IIRC, they are contradicted by other sources in previous threads.
Would you promote administering uncontrolled levels of steroids to schoolchildren?
Your choice is “do you or don’t you?” with the implication that one’s choice here is equivalent to either agreement or disagreement with your viewpoint on CO2. And that equivalence is false.
My own stuff talks about Ritalin?
News to me.
My references are insufficient?
They do say so themselves.
I acknowledge that.
But then, I’m not the one trying to support doing a thing without the consent of those involved, then, am I?
Why do I have to prove the harm of the trespass?
Where have any ever had to before on any other issue?
And how do you propose to study multiple generations of plants under the conditions of higher CO2 in the wild, when there’s so little agreement on what those conditions will be?
My own stuff talks about Ritalin?
And what else would you have been talking about?
But then, I’m not the one trying to support doing a thing without the consent of those involved, then, am I?
Of course you are. Do you believe you get consent from me – or manacker – or hunter – or…….?
Why do I have to prove the harm of the trespass?
Why not? BTW – why are you breathing my air? :-)
And how do you propose to study multiple generations of plants under the conditions of higher CO2 in the wild,
It’s done all the time – talk to a wildlife biologist.
Jim Owen
When I said steroids, I meant steroids.
See? Simple.
As for your wildlife biologist.. Cite?
Jim Owen
“Your references are insufficient to support your contentions. AND IIRC, they are contradicted by other sources in previous threads.”
I was remiss and overhasty, I think, to merely wave at the other thread (mainly with Ferdinand Engelbeen, but you were in the thread too, Jim) from February-March:
http://judithcurry.com/2011/02/26/agreeing/#comment-49163 through http://judithcurry.com/2011/02/26/agreeing/#comment-51976
To demonstrate both sides, and to fairly give people a chance to read positions on both sides.
This all brings to mind for me, where is the Steve McIntyre of CO2 Science? Some skeptical auditor to pierce and deflate the antiskeptical, rabidly biased position that CO2 is purely beneficial, or so beneficial at any rate that any downside ought be ignored?
Where is the site that treats Idsos like WUWT treats the IPCC?
Aren’t any of us skeptics?
Bart wrote: Growers are cautioned not to let their greenhouses go above 500 ppmv CO2 for planting many valuable seedlings. Tomatoes are one example. Perhaps some geneticist will perfect a CO2-tolerant GMO by 2100. Perhaps tomatoes will simply cease to germinate. Do we know? No
According to this study, the tomatoes will be ok at higher concentrations:
R. M. Wheeler, C. L. Mackowiak, G. W. Stutte, N. C. Yorio, W. L. Berry, Effect of elevated carbon dioxide on nutritional quality of tomato, Advances in Space Research, 1997
oneuniverse
I don’t for a moment propose that tomatoes will be ‘not ok’ at higher CO2 concentrations in all ways.
However, tomatoes and other seeds may not be ‘more ok’ at higher CO2 in all ways. But it’s a complex question, and not easy to study.
“Carbon dioxide, the end product of respiration, also has marked effects on seed viability. If it accumulates inside the seed or in the soil environment surrounding the seed, injury may result.
The role of carbon dioxide is difficult to study, because gas concentrations inside and outside the seed may differ widely and the effects vary with the temperature. Research has shown, however, that the activity of most oxidative, energy-releasing enzymes is reduced by high levels of carbon dioxide. “
(http://www.healthguidance.org/entry/6422/1/Life-Processes-of-the-Living-Seed.html)
So in some ways tomatoes, and many plants, will be ‘less ok’ at higher CO2 levels through their seeds (whether from CO2 concentration, or temperature, or earlier germination and frost kills, or precipitation changes, or heat waves) regardless of how that first generation thrives on big muscular tough leaves and huge tomatoes imbued by the steroid injections you’re pushing.
More fibre and calcium per tomato, more mass of tomatoes, more efficient water use, all sounds good from the point of view of a nutritional study on a single generation of plants.
Shriveled seeds, and smaller, less vital plants in subsequent generations? Not so much.
This is said to be true for tomatoes. Fruit does well. Seeds shrivel. Second generation is less vital than the first. Which your study doesn’t look at.
I have one relevant for soy:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.6373&rep=rep1&type=pdf
The nutritional quality of the seeds (not fruit) of plants also generally decreases with increased CO2, despite increased yields.
30% more wheat, 28% less nutritious, how does that sound like a good thing?
And what about effects on plants of higher NxO emissions of microbes in soil at higher CO2E levels?
So, sure, it’s ‘ok’ to give your little tomatoes steroids, if you aren’t expecting them to have kids.
Hi Bart, from the conclusion of that soybean study : “While increased [CO2] increased seed number and seed yield, it had no effect on seed composition.”. The shrivelled seeds were from increased temperatures (over 4C) not increased CO2.
Could you please cite for the 28% reduction in wheat nutrition?
oneuniverse
28%? Same source, buried deep inside:
“Both barley and wheat had a 30% increase in yield accompanied by a 28% decrease in grain N concentration. Such a decrease in grain N concentration would make a significant impact on flour quality available for breadmaking because of its negative effect on seed protein concentration (Thompson and Woodward, 1992).”
The conclusion isn’t the entire study.
It concludes in this case the shrivelling was due to the higher temperature (a putative result of higher CO2), not the CO2 itself.
This wouldn’t be a universal result.
Some plants thrive at higher temperatures, as would some of the seeds of some of these plants. As would some of these with a higher NxO level. And late and early frosts compared to the length of the growing season. And that needing less water thing, a great boon — when the expectation is for much more precipitation.
I’m not asserting proof of the harm. I’m asserting the unknown risk, with references to authorities that claim the risk, and that establish reasons for the difficulty of studying the risk.
My bad – it was a 200 page document on soybeans, I didn’t expect it to come from there.
A 28% reduction in N (nitrogen) concentration doesn’t mean the plant is 28% less nutritious (although N is a nutrient for plants).
In your quote, you also omitted the other study referred to in the previous sentence : “Sanhewe et al. (1996) observed no change in the seed germination quality of wheat grown at relatively cool temperatures in two levels of [CO2] (380 and 684 μmol mol-1).”
The following review for wheat (and barley) found that the results of different studies vary, and that for some species, N content did not change significantly. “Elevated CO2, drought and soil nitrogen effects on wheat grain quality”, Kimbal ea 2000:
The following found that elevated CO2 improves N-use efficiency for elevated CO2 conditions:
Estimating the Excess Investment in Ribulose-1,5-Bisphosphate Carboxylase/Oxygenase in Leaves of Spring Wheat Grown under Elevated CO2, Theobald ea 1998, Plant Physiology
Kim et al. 2001 found an increase in N uptake for rice, even under low N conditions: Growth and nitrogen uptake of CO2-enriched rice under field conditions, Kim ea 2001
Agreed
It’s a big topic, one I’m not expert in, and one I’m not claiming proof for.
I’m claiming potential of harm, therefore Risk.
Maybe crops might get big and the world will be lollypop land with unlimited coal burning. It could be.
But (a) there’s some indication there could be substantial downsides that haven’t been well studied and (b) the question of consent still goes unanswered.
You could feed ~500 New Yorkers with just one of these Alaskan cucumbers.
Sorry, Bart
The estimates are not mine.
Although a 1.5C increase in average temperature in Edmonton might not seem to be a big deal, the added growing season and crop yields might be, especially since the northern regions are supposed to warm more than the global average, and a good part of this should supposedly occur in the winter months.
Rather than “wringing our hands” about the catastrophic impact that a slight warming of our planet might have with the assumption that all will be negative, let’s get some serious studies made regarding likely “winners and losers”.
And, let’s not count on IPCC to do this. It’s not part of its brief or agenda.
Max
Max
That’s just it.
The IPCC has only the relatively simple topic of climate on its plate.
It’s a tiny little relatively easy to manage field of scientific inquiry.
And yet, for all that, the IPCC — which we must take to be equal in ability to any like group one could compose in the world, given the vagaries of recruitment and resources — admits severe limits on what it can discover and what it can predict.
Plants in the wild, the biosphere, are by comparison many orders of magnitude more complex. For one thing, the biosphere largely derives its behaviors from the climate, so no understanding of how plants and animals will respond can do better than approach the level of understanding of the climate, which we agree will be a long time in coming.
And even then, plants still will always be in a complex relationship with microbes, with each other, with us, too.
There’s no predicting with plants.
There’s just the knowledge that something worked for 10-15 million years up until 1750, and that now it will be working differently. That thing that worked is complex, and may simply fall apart into new and unrecognized, unstable forms.
Or not.
We can’t know beforehand.
Why would we risk that?
And who obtained our consent that we ought?
So, yes, we may reach 550.
It’s possible that’s inevitable.
It may even be likely it’s inevitable.
But do we have to gallop toward it, and on whose say-so?
Bart R- It is really simple. In the world today, it is more beneficial for humans overall to continue releasing CO2 into the atmosphere than it is to discontinue said emissions. The US is already decreasing per capita CO2 emissions. This trend will continue. Currently undeveloped or less developed countries are, and will continue to increase their per capita CO2 emissions. Both of these trends will continue for decades. Significantly steepening the curve to speed the US reduction would have virtually no impact on the worldwide climate; but would be very costly to US taxpayers.
What is it about these very simple concepts that you do not agree with?
Rob Starkey
“What is it about these very simple concepts that you do not agree with?”
1. “In the world today, it is more beneficial for humans overall to continue releasing CO2 into the atmosphere than it is to discontinue said emissions.”
This is an assertion.
It can’t be proven; it’s only a belief. Not all the bookkeeping or studies in the world can change the nature of the assertion.
True believers may think they can establish a proof, but the nature of the claim makes this simply not logically possible.
How is it more beneficial?
To which humans?
For how long?
What prescience can foretell such things?
The tenets of the democratic principle are not that one may by fiat make a rule because you believe it of benefit to all humans; that’s the principal tenet of tyranny.
In a democracy, one obtains consent.
When did you do this?
2. “Currently undeveloped or less developed countries are, and will continue to increase their per capita CO2 emissions.”
That’s an assertion that paints with a very wide brush, most especially in the ‘will continue to’ part. Policy changes in the generally oligarchic regimes in control of many development-focused nations could reverse this trend overnight. Taste or technology or demographic changes or regime changes, likewise, may happen.
If America promotes the best examples within itself, it will influence these regimes, these tastes, technologies and societies. Just as it does now, not always for the best.
3. “Significantly steepening the curve to speed the US reduction..”
a) “..would have virtually no impact on the worldwide climate;”
Whether it has direct dramatic impact on global temperature or not, there’s very little information on the direct and indirect impact on all non-temperature influences of CO2E reduction and the consequent particulate emission reductions combined of the US, which is still the #2 emitter worldwide. Yours is a premature and too narrow claim.
b) “.. but would be very costly to US taxpayers.”
*squint*
This cost thing again.
Suppose my tired old, careworn scheme were put in place: carbon rent charged by CO2E emission for every fossil resource at point of sale, set at the price that gives the maximum revenue, and all revenues distributed per capita to every American.
This can be done at minimal marginal increase to the cost of the current sales tax and income tax systems; indeed through reduced churn, will actually cost these systems less than the current arrangement.
There you have it, absolutely no net cost to America.
Will it influence behavior? Maybe, I can’t say; the Market would have to determine this, wouldn’t it?
But then, at least we’re all being compensated for the use of our common shared resource, and if we choose to spend that compensation continuing to use that common shared resource the same way — as opposed to buying a better, more fuel efficient car and/or a home in a better neighborhood closer to work and school.. well, that’s our right.
And it won’t cost America a penny more.
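As a rough illustration of the arithmetic of such a per-capita carbon rent (all figures below are assumptions for illustration, not a costing of this or any actual proposal):

```python
# Illustrative arithmetic only: made-up figures for a per-capita carbon
# "rent", not a costing of any real policy.
us_population      = 310e6     # assumed population, ~2011
annual_emissions_t = 5.5e9     # assumed US CO2-equivalent emissions, tonnes/yr
fee_per_tonne      = 60.0      # assumed $ per tonne CO2E at point of sale

revenue  = annual_emissions_t * fee_per_tonne
dividend = revenue / us_population

print(f"revenue  : ${revenue / 1e9:.0f} billion per year")
print(f"dividend : ${dividend:,.0f} per person per year")
```

Whether a fee at any particular level actually changes behaviour is, of course, the separate question argued over in the comments that follow.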
Bart R,
Speaking of assertions, yours about the Hansen-esque chimera of a revenue-neutral carbon tax is a pretty darn big one.
And your conclusion is downright fanciful.
With more money in their pockets people will consume and emit more, thus zeroing out the emission reduction. So government will extract more taxes, with the consequence of belying the tax-neutral myth.
I wrote- “In the world today, it is more beneficial for humans overall to continue releasing CO2 into the atmosphere than it is to discontinue said emissions.”
Bart- you are correct that this is an assertion, but it is one that is supported by numerous facts. Without CO2 emissions food production and distribution would be drastically reduced. Any reasonable analysis will determine that to be true today. This fact has nothing to do with democracy.
I wrote- “Currently undeveloped or less developed countries are, and will continue to increase their per capita CO2 emissions.” You are correct that this is also an assertion, but again it is one based upon the realities of the world.
People generally wish to have adequate amounts of food to eat, they want electricity and the devices that are powered by electricity. They wish to be able to get from point “A” to point “B” more efficiently than by walking or relying on an animal. All of these things currently require CO2 emissions to be increased in their countries for them to have these benefits.
America’s best examples will not stop the 250 poor countries on planet earth from wanting the benefits that come from electricity, or medicine, etc. They want these things as quickly as possible and at the lowest possible cost. The only reason they do not have these things is that they have not been able to afford them.
By your words “your tired old scheme” is completely flawed from an economic standpoint. The gist of your scheme (which I will summarize) is to implement a fuel or gas tax in order to discourage consumption. Since it has been pointed out to you that many of the “US fuel users” are relatively poor and that it would be a hardship if prices were to increase, you have recommended a “rebate” to the poorer US taxpayers. What you did not seem to realize is that such a rebate would also negate that consequence of the higher cost of the fuel and thereby virtually eliminate the entire benefit of your proposed fuel tax. There would be little or no motivation for people to reduce consumption. Rich people don’t really get affected, and under your plan, poorer people wouldn’t care either, so why would they reduce consumption?
Yes Bart, that money thing comes into play again and again. Having just returned from a week in East Asia, I assure you it is the issue driving much of the rhetoric on this topic there.
Rob Starkey
Skipping past the religious debate portion of what you say to the substantive question:
“What you did not seem to realize is that such a rebate would also negate that consequence of the higher cost of the fuel and thereby virtually eliminate the entire benefit of your proposed fuel tax. There would be little or no motivation for people to reduce consumption. Rich people don’t really get affected, and under your plan, poorer people wouldn’t care either, so why would they reduce consumption?”
Why _might_ people reduce consumption?
Truthfully, I don’t know that people would reduce consumption with a $300/ton price on CO2E and an extra $1,000/yr+ in their pockets.
They might democratically decide to spend all $1000 on emitting exactly the same amount of CO2E.
Or, being over $1,000/yr wealthier, with the cost of emitting CO2E finally being charged instead of being given away, they might sit down, look at their options, decide they’re happier to have a new 60+ mpg car for the same price and pretty much the same features as a 15 mpg car, and save 75% of the part of that $1,000 they might have used on driving to instead buy something that improves the quality of their life that isn’t CO2E-intensive.
They might insulate their house, get the drafty windows and doors fixed, go for solar water heating or a heat pump, do any of the many things that they could do now to save themselves money in the long run, except that the switching cost at this time keeps them from taking the plunge.
Might happen.
We don’t know.
What we do know is that right now, fossil is subsidized, and CO2E budget, a scarce resource we all own a share of, is being squandered without return to us, its owners.
We know plans like this one are feasible, because we’ve seen probably the strongest economy on the planet (British Columbia) start along that path in 2008, and it only got stronger relative to the rest of the world through the economic downturn.
How will the rhetoric in East Asia turn when people in neighboring countries get $1,000/yr payment for CO2E that they themselves are denied?
Yes Bart, that money thing comes into play again and again. Having just returned from a week in East Asia, I assure you it is the issue driving much of the rhetoric on this topic there.
Rob, I agree with that.
Cannot governments, without raising tax, simply legislate all new power plants to be powered by non-coal sources?
They would not do that, as they are after the revenue.
Bart R—Once again you seem to ramble without making clear concise points. In summary:
Your proposed tax would not do anything (at least not anything significant) to reduce consumption of hydrocarbons or the emissions of CO2. If the proposed tax will not achieve the stated goal, it is a bad idea to implement that tax/scheme.
Girma asked- couldn’t government legislate that all new power plants be powered by non-coal sources?
Answer- certainly it could. The question is why should this be done, and would the constituents of that government support that policy decision. The answer to that question lies with what the alternative forms of electricity production are and what they will cost.
Rob Starkey
“Bart R—Once again you seem to ramble without making clear concise points.”
This is news?
“In summary:
Your proposed tax would not do anything (at least not anything significant) to reduce consumption of hydrocarbons or the emissions of CO2.”
See, you call it a tax.
For historical reasons, it’s going to be called a tax. But this is the same in every way as calling the price of any good in the Market a tax; the government enforces your paying for goods (i.e. laws against theft), and the government collects payment from many revenue streams that it then redistributes to owners (license fees for cell phone bandwidth going to the general revenues of the owners of that common shared bandwidth – all of us, e.g.). My proposal is _less_ like a tax than those, in that integral to it is that we get paid per capita. What sort of tax puts more money in everyone’s pocket?
You wrongly identify it as a fuel tax, when it isn’t. It’s a CO2E tax. Hydrogen is a fuel; it wouldn’t be tagged with a penny of carbon ‘tax’, and while coal would be charged at the highest levels, methane would have a pretty low rate.
By the way, I’m all for having a similar rent or fee on particulates (i.e. the particulate ceilings), if it were practical to administer.
“If the proposed tax will not achieve the stated goal, it is a bad idea to implement that tax/scheme.”
Right, right. If it were a behavior-altering proposal, it might have the stated goal of behavior-altering, and I might discuss the evidence for some behavior-altering assertion. You can look to Ross McKitrick for discussions of this, if you like. Or Mark Jaccard. Or Pigou. Not my issue.
My issue is fairness.
I’m being stolen from by free riders.
I own a share in a common resource. It’s limited, and it’s being used up. I want compensation.
So tax the parasites who waste my CO2E budget, and pay me my fair share.
That cash in my pocket is the stated goal.
Let me then freely decide how to spend that cash, whether on $20/gallon gasoline (or whatever its price will be by then, though long run a revenue-neutral CO2E ‘tax’ ought drive the price of gasoline lower) or on something I’d rather have.
“Girma asked- couldn’t government legislate that all new power plants be powered by non-coal sources?”
Answer- certainly it could. The question is why should this be done, and would the constituents of that government support that policy decision. The answer to that question lies with what the alternative forms of electricity production are and what they will cost.”
The more fundamental answer is why would we want to endorse a bigger government and Girma’s big anti-democratic command-and-control measures?
I mean, I’d still be being robbed by free riders using up my CO2E resource, but I’d be paying public servants to tell me, and them, how I couldn’t spend my money.
If it turns out there _isn’t_ a feasible way to administer a “particulates tax” and internalize the costs of particulate emissions into each individual’s coal purchase decision, then the particulates Externality might need such a measure to stop a harm that is greater than the good provided but cannot be priced.
I’d regret that.
I oppose larger than minimally necessary government.
Which is why I support a revenue-neutral carbon tax, all revenues to all of us per capita, and not to government.
Bart R
It’s more restricted than that, Bart, and yet also a bit broader.
The role of the IPCC, as defined at its inception:
So IPCC is concerned with the “risk of human-induced climate change, its potential impacts and options for adaptation and mitigation”.
It is NOT concerned with naturally induced climate change. [This is why AR4 WG1 is so weak in this area.]
It IS concerned with “potential impacts and options for adaptation and mitigation” resulting from “human-induced climate change”. [See AR4 WG2 and WG3]
Max
Bart R
Back to your post.
Believe it or not, we will reach a level of 550 ppmv CO2 some day in the future, no matter what we try to do today to stop this.
I have seen no actionable proposals to date, which will stop this from happening, so I have to conclude that it will happen. Have you seen any actionable proposals to avoid reaching 550 ppmv CO2 in the atmosphere, Bart? If so, what are they?
So let’s figure out what 550 ppmv CO2 REALLY means for this world. Who will be the winners and losers, what adaptation measures will be required, etc.?
That’s all I’m asking for.
Max
What makes something an “actionable proposal”?
Anything that’s actually workable.
That is to say, something you think will work?
The workability of any policy proposal is a matter of judgment, not least about what political actors will come to find desirable.
It seems rather transparent to me that if a relatively short list of political actors decided that stabilization below 550 was a worthy goal, policies could be found to achieve it. Lots of different analysts and organizations have provided scenarios that are technologically plausible.
That is to say, something you think will work?
Nobody’s asked me – but since you asked —
The workability of any policy proposal is a matter of judgment, not least about what political actors will come to find desirable.
Maybe, but I doubt it.
1) How much do you think people will give up for an ephemeral non-crisis? Force-feeding another piece of legislation like the healthcare bill in the US would be political suicide. Do you think Cap & Trade – or Bart’s carbon tax – would fly? Really? What do you imagine the backlash might be?
2) Alternative energy? Have you run the numbers? Solar, wind, geothermal? Do you understand just how far short they come of replacing present energy sources? Or are you into brownouts and blackouts like they’re headed for in the UK? How politically popular will that be? And what do you imagine the backlash might be?
if a relatively short list of political actors decided that stabilization below 550 was a worthy goal, policies could be found to achieve it.
I seriously doubt that. Have you talked to the Chinese – or the Indians – or the Brazilians? Or are you into force as a motivator?
Lots of different analysts and organizations have provided scenarios that are technologically plausible.
I seriously doubt that, too. The only proposals I’ve seen have been laughable from a practical engineering standpoint. But then I probably haven’t seen everything yet. Why don’t you provide some links? I’m not averse to new ideas. But I am skeptical.
re: alternative energy.
There is only one source of non-carbon energy with a hope of competing – nuclear. Nuclear energy becomes price competitive with a small carbon tax. So the politicians will go for the smallest possible carbon tax and nuclear energy is a goer and Al Gore will be happy and so will the pro-nuclear warmist crowd. And the green movement will one day figure out they have been outmanoeuvred strategically.
Jim Owen
“What do you imagine the backlash might be? “
What a good question.
What would the backlash be of an extra $1,000.00 a year in your pocket?
What would the backlash be of free riders having to pay for the benefits they obtain at our expense?
Your healthcare straw man, how is that helpful? This is not only not healthcare, it is also not Cap & Trade. And it’s also something that has really started, is really popular, and is really working in a strong, healthy economy much like the US economy, except much smaller in scale, and so far only ten percent of the way there.
So while you may have your doubts, they don’t have much of a leg to stand on.
If it does work and take off in one place, can you imagine the pressure and the backlash on any government whose people look across the border and see everyone in that neighbor getting paid $1,000 a year.. only to see their own government somehow make that cash disappear?
If you lived in the state of Washington, would you accept from your state assembly that British Columbia could do it, but Washington can’t?
From Washington to Oregon, California, Idaho, Wyoming.. How would Texas resist the backlash if surrounded by states with such a program?
How would the Eastern states resist the backlash, if the Western states did this?
How long before a Federal party jumped on board and bribed voters with the promise to stop stealing $1,000 a year from them? And what would be the backlash to the party that didn’t?
And once the USA has this program, how long before Brazil and China, India and the rest of the world start to feel the heat?
Of course, this program has a limited shelf life. In time, it will eventually almost equalize, and become an income at a much lower level.
The early adopters will gain the most benefit. If it isn’t the USA second (or third, or thirtieth), then US citizens will get less than they might if the country jumps in early.
But then, it’s been a long time since the US represented world leadership.
Paul–You are simply wrong when looking at a world view. There is no plan that will stop worldwide CO2 emissions from rising for decades.
Paul Baer
See below for my answer to your question:
Max
There is no scientific foundation for your assertions. Some time spent at co2science is in your interest.
I’m glad to hear they are releasing full documentation. I’m certain the skeptics will enjoy reading all of it.
The pride over improved parameterization is misplaced. Better hindcasting because of better tuning does not lead to better projections or forecasts. This model has zero predictive ability as long as parameters can be tuned. No model will ever be able to avoid tuning completely.
“This model has zero predictive ability as long as parameters can be tuned. ”
With correct experimental design this problem can be overcome to the point where the models perform as well as chance.
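For what it’s worth, the minimal experimental design that separates tuned hindcast skill from genuine predictive skill is an out-of-sample test: calibrate on one period, score on a withheld period. Here is a toy sketch with a synthetic series and a trivial statistical “model”; it has nothing to do with the CCSM tuning procedure and only illustrates the design.

```python
# Toy out-of-sample test: fit ("tune") on 1900-1969, score on withheld 1970-2005.
# The data and the model are synthetic; this only illustrates the design.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1900, 2006)
obs = (0.007 * (t - 1900)                             # synthetic trend
       + 0.1 * np.sin(2 * np.pi * (t - 1900) / 60.0)  # synthetic multidecadal wiggle
       + rng.normal(0.0, 0.08, t.size))               # noise

calib = t < 1970                                      # calibration period only
coef = np.polyfit(t[calib], obs[calib], deg=1)        # "tune" a linear model
pred = np.polyval(coef, t)

rmse_in  = np.sqrt(np.mean((pred[calib] - obs[calib]) ** 2))
rmse_out = np.sqrt(np.mean((pred[~calib] - obs[~calib]) ** 2))
print(f"RMSE on the tuned period   : {rmse_in:.3f}")
print(f"RMSE on the withheld period: {rmse_out:.3f}")
```

If the withheld-period error is much worse than the tuned-period error, the tuning has bought hindcast fit rather than forecast skill.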
By the way…Steve Easterbrook has a couple of interesting posts about an NCAR workshop he recently attended.
I searched the Climate model text for the word “Albedo”. It does not show up a single time.
If Albedo is not in the model, the model is a worthless curve fit and not a real model.
Herman, via the documentation page, have a look here and here.
(It looks like the CCSM is also referred to as the CESM).
Herman, search the CESM UCAR site (linked by JC in main post) for “CESM CCSM4.0 CICE Documentation”.
Judith,
I wonder what kind of output they would get if they input that the CO2 level stayed constant at say 350 ppm. If they did and got a far better temperature tracking, including past 1999, it would clearly show that the implied “sensitivity” issue is all wrong, since the models actually assume rising CO2 and resulting temperature rise.
It would not show that CO2 sensitivity is all wrong; it could be that sensitivity is right and that other unmodeled features of the system are affecting real world temperatures.
But of course it could indeed be that the sensitivity is wrong. That’s why climate scientists generally have a wide subjective probability estimate for the climate sensitivity.
Is it correct to understand that no one has published a run with CCSM4 using the best accepted drivers (forcings) used with CCSM3? Yes, CCSM4 has more forcings than CCSM3, but there ought to be one CCSM4 run that starts from commonly accepted CCSM3 inputs. Where is it? There must be one, because changing the model and the inputs at the same time is a carnival game, not science.
The apparent fact that no sensitivity runs varying Climate Sensitivity have been published (yet) is more than disturbing. Climate Sensitivity in the real world is not an INPUT. It is an abstraction of the partial derivative of temperature with respect to a change in [CO2], after millions of regional feedback loops we know only in the sketchiest of detail. Climate Sensitivity is a grossly averaged output, not an input forcing in the way CO2 concentration or Solar inputs are.
While I can accept using Climate Sensitivity as a simplifying input short cut for some work, I cannot accept that we know its value with any precision. Use of Climate Sensitivity of 3.0 +/- 0.0 isn’t science, it isn’t engineering, it is dogma.
There is no “climate sensitivity” knob on which to do such a study. Properly and efficiently sampling the high-dimensional parameter / model-structure space to estimate the uncertainty in model output (of which “climate sensitivity” is a derived quantity) is an open area of research.
The sensitivity is not an input to this class of climate model. The climate sensitivity is derived, usually by taking the difference in global mean surface temperature between pre-industrial conditions and an equilibrated double-CO2 experiment. The model predicts surface temperature, and then it is just averaged and the difference is taken. Climate sensitivity is not an input.
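To illustrate what “derived, not an input” means in practice, here is a minimal sketch in Python (made-up array names and a stand-in temperature field; not NCAR’s actual post-processing code) of how an equilibrium sensitivity is diagnosed from the output of a control run and an equilibrated 2xCO2 run:

    # Minimal sketch: equilibrium climate sensitivity diagnosed from model OUTPUT.
    # (Hypothetical arrays; not NCAR's actual diagnostic code.)
    import numpy as np

    def global_mean(tas, lat):
        """Area-weighted global mean of a (lat, lon) surface temperature field."""
        weights = np.cos(np.deg2rad(lat))      # weight each latitude band by its area
        return np.average(tas.mean(axis=-1), weights=weights)

    lat = np.linspace(-89.0, 89.0, 180)
    # Stand-in fields; in practice these are time-averaged surface temperatures (K)
    # from an equilibrated pre-industrial control run and an equilibrated 2xCO2 run.
    tas_control = 287.0 + 15.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((180, 360))
    tas_2xco2 = tas_control + 3.0

    ecs = global_mean(tas_2xco2, lat) - global_mean(tas_control, lat)
    print(f"Diagnosed equilibrium sensitivity: {ecs:.2f} K per CO2 doubling")

Nothing in that calculation is fed back into the model; the sensitivity simply falls out of the two simulated temperature fields.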
If Climate Sensitivity is derived, then please explain this statement in the top post:
However, there are other possibilities for model errors, such as a poor representation of the direct effect of aerosols, or the climate sensitivity is incorrect.
Instead of an input it is hard coded into the model? Or is it something else like not coding the feedbacks correctly?
Either way, we are left with the impression that it is not a valid climate model unless it is tuned to deliver a Climate Sensitivity of 2.5 to 3.5.
What am I missing?
Climate sensitivity is not an input. It is calculated from the outputs. Then, if the figure does not meet the expectations of the model builders, they will modify the input parameters until it does. It would be much quicker if climate sensitivity were an input knob that one could dial in, but then the fiddle would be too obvious. By keeping the results of the tuning runs hidden, no one can prove there is a knob.
Well, that raises the question of how many parameters (knobs) are visible, how many are hidden in hard-coded assumptions, and how many are “coupled.”
To be complete, we should always ask ourselves which knobs are completely missing because the model creators assume them to be either insignificant or intractable.
Rhetorical question: how many knobs need to be reset to make the feedback of clouds a net zero effect on temperature?
It is not often noticed that these models already deal with 30-degree swings over the seasons, or equally large variations between the equator and poles, so a 3 degree climate change is not outside the parameters of the current climate, meaning that clouds are not going to suddenly do something unexpected for a gradual warming of this size.
Do something unexpected to whom?
Unfortunately, these biases do not get smaller when higher horizontal resolution of 0.5◦ is used in the atmosphere component, because the cloud distribution in CCSM4 is not sufficiently accurate compared to observations.
A negative feedback would be unexpected, given we don’t see evidence of it yet, and that future climate is within the range of current annual variation. It would be very hard to engineer a negative cloud feedback to 3 degree warming that doesn’t mess up your annual cycle.
Skimming the descriptions (e.g. here’s the atmosphere one) will give you a good idea of the analytical assumptions that lead to the governing equations that are modeled (I would say “solved”, but in practice they don’t actually converge solutions to even this simplified equation set).
Thanks, but the link did not come through.
cam4_desc.pdf
if that doesn’t work, you can google “cesm documentation”, the first result is a description of the atmosphere component
Steven Rasey,
“Is it correct to understand that no one has published a run with CCSM4 using the best accepted drivers (forcings) used with CCSM3?”
The arrival of CCSM4 means CCSM3 has been declared inaccurate, and when CCSM5 arrives, CCSM4 will in turn be said to have led to wrong results. Why run CCSM3, which led to wrong results?
Of all recent posts, this one would, in my view, be best enhanced by interaction with one of the model designers as a means of clarifying issues regarding parametrization choices and tuning, as well as comparison with real world observations. Some of the comments suggest continuing confusion on these points, but few if any of us who have commented are qualified to offer expert opinions.
The central role of accurate quantitation of aerosol forcing has been raised in accounting for the disparity between the long term global temperature trend as modeled and as observed. Possibly relevant are the following observations:
1) The observed and modeled trends (see Figure 12 in the CCSM4 paper) match fairly closely over the century-long run until 2000, both showing about a 0.8 C rise, with the model slope slightly greater than the observations. The main deviation occurs during the subsequent 5 years.
2) The sulfate aerosol forcing was based on Lamarque et al, which provided estimates only up to 2000. At that point, earlier negative aerosol forcing had been declining, reducing the cooling effect as an offset to anthropogenic greenhouse gases or other warming influences.
3) Subsequent evidence by Smith et al indicates that sulfate aerosols reversed their declining trajectory and rose from 2000 through 2005, probably due mainly to increased industrial activity in China. The evidence is surrounded by considerable uncertainty, but implies that recent aerosol cooling may account for some of the flatness of global temperature anomalies since the year 2000.
4) The apparent neglect of the 2000-2005 data by the CCSM4 simulation may be an important contributor to the greater mismatch with observations since 2000 than with the earlier observations, which were not greatly deviant. This is probably not the entire explanation, but is likely to be part of it. To the extent this is correct, it implies a forcing issue rather than a climate sensitivity or parametrization issue, although parametrization would presumably be involved in addressing indirect aerosol effects.
I should qualify my comment above by noting that the model deviation became more abrupt slightly earlier than 2000, because the 2000 value was already high, but we don’t know precisely when the putative reversal of the downward sulfate trend started, since it was based mainly on 5-year intervals and involved regional trends that often ran contrary to one another. In the case of some of the data, 10-year interval data from 1990 to 2000 were used, and the timing of a reversal would be difficult to pinpoint.
Fred, I would think that if sulfates from Chinese industrial production were the cause of the lack of warming that this would be evident in global temperature anomalies, yet the temperature anomalies over China and the ocean east of China seem quite consistent with other areas of the world at similar latitudes for the years 2000 – 2010. Do you not see this as contradictory evidence?
Instead of the anomaly temperature, you need to look at the anomaly in the warming rate between that in 70’s-90’s and that in the 00’s. That would be an interesting map, but I haven’t seen one yet.
Thanks for admitting the recent cooling, as we thankfully will not hear “worse than we thought” anymore.
However, how can you be sure that the recent cooling is caused by increased aerosol?
Could it be because of ocean cycles?
Could it be that climate sensitivity was overestimated?
Why does the slight global cooling occur every 60 years?
http://bit.ly/ePQnJj
Fred– or, put another way, we currently do not have any good data to quantify aerosol forcing accurately. This possibility has been raised by some in accounting for the disparity between the long-term global temperature trend as modeled and as observed, but there is really no definitive data to support that position.
Bottom line– GCMs are not yet reliable for more than very short term forecasts and are unlikely to be reliable for long-term regional forecasting for decades.
Rob –
we currently do not have any good data to define accurately the quantitation of aerosol forcing.
We have data. Whether it’s good data or not is another question. But aerosol data was one of the objectives of the Terra (EOS) spacecraft program and was collected by several of the Terra and Aqua instruments. Aerosol data has been collected since 1999 by at least 3 spacecraft in the series. Gotta wonder what happened to it.
In fact, aerosol data was being collected in the late 1960’s by the Nimbus program, but that was short-lived and stopped when the instrument malfunctioned.
Don’t make me start about who’s going to debug a million lines of code. And how.
Yeah. If these models are anything like Harry readme, no telling what they’re actually doing and why. What kind of QC is built in? We all know that you have to build quality in from the beginning, and manage it continuously, not frost it on at the end.
Is that ok to talk about, mosh?
Don’t be stupid. As far back as 2007 I have been reading and commenting on the ModelE code, which is public. The MIT code is also public. It’s beautiful code, written by professional software engineers. The NCAR code is also public. I’ve just started looking at that. (porting project)
Here is the difference: the Harry readme code WAS NEVER USED.
The modelE code you can look at right now is used. Same with the other GCM code.
Use better arguments.
Did I say they were junk? Read again. I was asking the question, because that is ONE of the key questions that need to be asked up front.
How the guts of the program work is another question, but if debugging is as trivial as you claim, then the program itself must be trivially simple. And if that’s the case, how can it possibly have anything to do with reality? Despite what you may have been told, this fluid mechanics stuff is kind of complicated. If it’s that simple, it’s a spherical cow, which may or may not be good enough to have any hope of producing accurate results. Spherical cows are more of an art than a science.
How the guts of the program works is another question, but if debugging is as trivial as you claim, then the program itself must be trivially simple. And if that’s the case, how can it possibly have anything to do with reality?
Complex software (when done right) is built by bringing together simpler pieces…you cannot reduce complexity, but you can compartmentalize it. The individual pieces can be unit tested in isolation to remove a lot of trivial errors that would be harder to catch in an integration test.
as if that made any difference to the need for integration testing
I don’t believe I said that it removed the need for integration testing. It does, however, allow that integration testing to concentrate on integration issues rather than getting side tracked on lower level bugs.
but integration is everything … also because that’s where the hardest bugs pop out.
but integration is everything … also because that’s where the hardest bugs pop out.
I’m not quite sure what you think I’m saying, but I’m fairly certain that it isn’t what I’m actually saying.
Gene –
Component (or subsystem) testing is necessary, as you say. But omnologos is right in that the really tough inconsistencies/problems/bugs will bite you when you start to integrate those subsystems. And then when you “fix” those bugs, you’ll have changed some of the subsystem code (or hardware) so you’ll have to go back and re-test the subsystems. And then reintegrate.
Worse – sometimes the first integration cycle will work wonderfully well, apparently everything will be kosher – and the output will be garbage.
A simple example – the UARS spacecraft ephemeris data was generated in one coordinate system – but the spacecraft was coded to a different coordinate system. There were no subsystem problems and none in ground I&T, because there was no way to dynamically test the response of a constrained spacecraft. But if it had been launched like that, we’d have lost the spacecraft on the first orbit. That particular bug was found 3 days before launch when we got a single anomalous reading from a yaw gyro – while on the pad.
Bottom line – your subsystems can all be 100% – and still not play well together. That’s why integration testing takes a large percentage of the time/money/energy in large spacecraft programs.
Jim,
The thing is, I’m not disputing any of that. What I was pointing out is that unit testing, particularly when you can automate it, can enhance your integration testing. Intra-module bugs are found easier and quicker when testing in isolation. That means the time spent on integration testing should be dealing with actual integration issues. I’ve spent the last 15 years of my life designing systems and systems of systems. The majority of that has been devoted to not only improving the building techniques but also getting the best V&V possible.
Speaking of which, anyone concentrating on the quality of the model code is very likely barking up the wrong tree. Not that any group’s code is bug free (based on what I’ve read the space shuttle group’s code is pretty phenomenal in terms of quality, but still not zero defects), but I doubt the discrepancies with observation are due to verification issues. Rather the second “V”, validation, is more likely to be the problem. Even if you could build a bug free product, if the requirements don’t match reality then you won’t get the right answer.
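For readers unfamiliar with the jargon in this sub-thread, here is roughly what “testing a piece in isolation” looks like, sketched in Python with a deliberately trivial, made-up component (real GCM unit tests target Fortran modules, but the idea is the same):

    # Sketch of unit testing one small component in isolation.
    # (Toy helper function; purely illustrative, not climate-model code.)
    import unittest

    def saturation_mixing_ratio(es, p):
        """Mixing ratio from saturation vapour pressure es and pressure p (same units)."""
        return 0.622 * es / (p - es)

    class TestSaturationMixingRatio(unittest.TestCase):
        def test_positive_for_physical_inputs(self):
            self.assertGreater(saturation_mixing_ratio(10.0, 1000.0), 0.0)

        def test_known_value(self):
            # 0.622 * 10 / (1000 - 10) = 0.0062828...
            self.assertAlmostEqual(saturation_mixing_ratio(10.0, 1000.0), 0.0062828, places=6)

    if __name__ == "__main__":
        unittest.main()

Tests like these catch intra-module mistakes cheaply; they do not, as the exchange above makes clear, replace integration testing of the coupled system.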
What makes you think it’s millions of LOC?
modelE was about 100K LOC last I looked.
Some of the modules in NCAR are descendants of core models written in the ’60s. Plus, a million LOC is nothing, peanuts really. It gets debugged exactly how other code gets debugged. Steve Easterbrook has some good work detailing this. When you finish reading all of modelE, let me know.
I started in 2007; I’ll give you some time.
PS. I have found bugs for a living. GCM is a piece of cake compared to real time code.
The million-line mention was by somebody else. But I will keep my disbelief intact if people delude themselves into thinking that even the most beautiful, perfectly-written pieces of computer code will always work together “just like that”. That is not the way computer applications work: the computer is always a pedantic ass, software developers cannot possibly consider every possible initial state of their code, and millions of software engineers make a living just because of that. Including you, in the past at least.
So rather than pontificating, why don’t you explain how they can tell if anything goes wrong, apart from the most egregious cases? What do they compare their output against, using what trial input data?
How did those people in the ’60s foresee the way their code was going to be used fifty years later?
PS: no, I do not want to “destroy” anybody’s code. I am sure it works fine 99.99% of the time (that’s an expected 100 lines of wrong code in a million). But as usual what we need is the humility of accepting reality, instead of yet more manifestations of lèse-majesté.
Did you read Easterbrook or just flap your lips?
http://www.easterbrook.ca/steve/?p=974
Your objections to code and the impossibility of testing it completely should keep you from flying in airplanes or using a credit card.
Steven – yours is a double straw-man. First I admit, I do NOT fly airplanes or use credit cards UNLESS they have been properly tested in the real world in their final configuration. Boarding a plane just built out of a computer model of a plane would be suicidal for all but the ablest test pilot. Now what real world is there for GCMs to test against?
Secondly, I am not saying that those lines of code are impossible to test. I can envisage ways to test them myself, and the airplane industry could very much help. What worries me is instead the cavalier attitude to testing, which should occupy in excess of 80% of the time and of the space in the published papers. I guess that most coding scientists have little idea of the amazing ability of a million-line beast to do the unexpected.
PS. I have found bugs for a living. GCM is a piece of cake compared to real time code.
Agreed
modelE was about 100K LOC last I looked.
This isn’t Model E and if it is, then it’s not worth what we paid for it.
a million LOC is nothing, peanuts really. It gets debugged exactly how other code gets debugged.
Horse puckey. BTDT – and after 3 years, 1.6 mil LOC of HST and shuttle I/F software was finally functional. If this puppy is 98% right they did a good job but if you’re gonna bet on any more than that I want to play poker with you. Keep in mind that I was still finding holes in the Shuttle software after it had been flying for 7 years.
According to statsvn: Total Lines of Code 1,653,920, of which only ~12% is Fortran (173,859 in *.f90 + 18,865 in *.f).
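For what it’s worth, a breakdown like that is easy to reproduce on a checked-out source tree; here is an illustrative Python sketch (statsvn itself works differently, and the path below is hypothetical):

    # Rough lines-of-code count by file extension for a source tree.
    # (Illustrative only; the path is hypothetical.)
    import os
    from collections import Counter

    def loc_by_extension(root):
        counts = Counter()
        for dirpath, _, files in os.walk(root):
            for name in files:
                ext = os.path.splitext(name)[1].lower()
                try:
                    with open(os.path.join(dirpath, name), errors="ignore") as f:
                        counts[ext] += sum(1 for _ in f)
                except OSError:
                    pass    # skip unreadable files
        return counts

    # counts = loc_by_extension("ccsm4_source/")      # hypothetical checkout path
    # fortran_loc = counts[".f90"] + counts[".f"]
    # print(fortran_loc, sum(counts.values()))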
Every parameterization in the model is an admission the computer cannot simulate nature properly. How many parameterizations are they using?
Actually, no. It’s not that you cannot simulate nature properly. It’s typically that your algorithm relies on a factor that you have limited knowledge of, or on a process that takes too long to simulate, so you replace the actual simulation with a functional form.
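A concrete, if simplified, illustration of “replacing the actual simulation with a functional form”: the bulk aerodynamic formula commonly used for surface sensible heat flux stands in for turbulent exchange that no global grid can resolve explicitly. A toy Python sketch of the idea (not the CCSM4 surface-flux code):

    # Toy parameterization example: a bulk formula replaces unresolved turbulence.
    # (Illustrative only; not the CCSM4 code.)
    RHO_AIR = 1.2      # air density, kg m^-3
    CP_AIR = 1004.0    # specific heat of air, J kg^-1 K^-1

    def sensible_heat_flux(wind_speed, t_surface, t_air, c_h=1.2e-3):
        """Bulk aerodynamic sensible heat flux (W m^-2).

        c_h is an empirical exchange coefficient -- the 'parameter' in this
        parameterization, estimated from field measurements rather than derived
        from first principles at the grid scale.
        """
        return RHO_AIR * CP_AIR * c_h * wind_speed * (t_surface - t_air)

    # e.g. a 5 m/s wind over a surface 2 K warmer than the air:
    print(sensible_heat_flux(5.0, 290.0, 288.0))     # roughly 14 W m^-2

The functional form is physically motivated, but the coefficient is empirical; that combination is what “parameterization” usually means here.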
Not true of nonlinear diff eq. Everything in fluid mechanics is a fudge, it’s just that some fudge is more reliable than other fudge, so modelling is an art that, with some finesse, can produce usable results sometimes. But it’s flatly untrue (or at least mathematically unprovable) that such problems are always calculable. But you knew that, right?
Well they must Che – but let’s call it sensitive dependence and structural instability rather than fudging. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ http://www.pnas.org/content/104/21/8709.full.pdf+html
There are 2 tests for model plausibility: the realism of the representation of nature – not terribly good – and the behaviour of the solution, which is not obvious.
Are they delusional, lying or simply clueless? Amounts to the same thing in the end.
Chief,
I find it quite stunning that climate science will only play with a few hundred years out of 4.5 billion years of climate.
There are other areas to look at besides the proxy of tree rings and other nonsense.
I find ocean salt deposits have a story of their own to tell and can be dated by the dead oceanic life within them. Consider that our oldest recorded salt is only a billion years old, deposited in high regions.
Does that not imply that the oceans were much higher?
It would be utterly insane to think this planet has not lost a drop of moisture into space.
Finally something we agree on!
Chief
Have you been banned again?
Which of the lessons of Mod-dodging did you forget this time?
I stand by my statement. Every parameterization is there to hold the simulation to some semblance of reality. Without the parameterizations, the simulation would shortly look entirely otherworldly. It is an absolute pretense to put forward a computer simulation as having any predictive value as long as it has even one parameterization.
Back to my question – how many is the current version using?
Ron – I’m not quite sure what you conceive parametrizations to be. Are you confusing them with “adjustments” or “corrections” and/or changes in model structure made so that model hindcasts would match observed temperature trends as a function of changing CO2? Perhaps you could define climate model parametrization in your own words, describe what you believe they are used for, including what observations they attempt to match, and then give a specific example of what you had in mind by a model “without the parametrization”.
In truth, parametrizations increase rather than reduce the predictive skill of models, although it would be impossible practically, and probably theoretically as well, to improve models to the point of 100 percent accuracy. That doesn’t preclude model improvement to render them more accurate. Just as weather models have improved over recent decades, it is likely that experiences with the CCSM4 model and others in the CMIP project will guide climate model improvement over the coming years.
Fred,
You are mistaken. Parameterizations increase the ability of a GCM to hindcast and have nothing at all to do with improving forecasting skill. For some reason, some climate scientists do not understand that.
The claim is that GCMs accurately simulate fluid dynamics and other physical properties of the global climate system. But without parameterizations, the GCMs would produce results which bright elementary students could recognize as unphysical. The parameterizations further the pretense that the models are somehow representing the physical global climate system. It is a fiction. Every parameterization is an admission the computer simulation is not working.
Ron – You didn’t answer my questions. I believe you have a mistaken impression about what parametrizations are and what they are used for – for example, you seem to believe that they are used to make a hindcast of a long term temperature trend associated with rising CO2 better match the observations (unless I misinterpret your comments). Until you are more familiar with their actual nature, I don’t believe you are in a position to judge their value.
Could I ask you again to describe an example of a climate model “without parametrizations”. Please provide details of exactly what is not parametrized and what the model does instead? I believe that if you start to investigate the concept of parametrization, you may come to a different set of conclusions than the ones you state, including a recognition that parametrizations have the potential to increase forecasting skill in climate models, just as they do in weather models.
Fred, I think my other comments here answered your questions. Did you see the analogy of parameterizations to bumpers at the bowling alley? That is how they work. The parameterizations are designed to get rid of things like the ITCZ. If the models actually simulated solar energy properly, it would never happen in the first place. But this is just one example. There are others. So, back to my question – how many parameterizations are used?
“I think my other comments here answered your questions.”
Ron, I have to disagree, because you still appear to misconceive the nature and purpose of parametrizations. My questions addressed those items. What is parametrization? Is it a “correction” or “adjustment” designed to remedy a flaw in model design? Is it tuned in response to hindcasts involving temperature trends in correlation with CO2 so as to make the hindcast match the observed trends? If a model were not parametrized, what would it look like?
I’ve interpreted your various comments to imply that you might answer these questions incorrectly, but if my impression is wrong, your answers will allow me to correct it.
Fred,
Yes, I am a firm believer that parameterizations are used by climate modelers to tune the model. I have even seen them admit it. Do you honestly believe parameterizations are always done based on physical measurements? Did you forget these people think model measurements are just as much reality as physical measurements?
Ron – With respect, of course parametrizations are involved in tuning, but that wasn’t one of the questions I asked. I am sincere in believing that you don’t understand what parametrizations are, how they are used , what relationship if any they have with hindcasting, what it would mean for a model to have no parametrization, why parametrization improves model predictive skill, and how that ability to improve predictive skill is borne out by the improvement in weather models. I’ll leave it for now, but if you care to answer my questions specifically, I believe I can show you where some of your misunderstandings reside. Someone who does climate modeling for a living could do a better job, but that person isn’t participating in this thread. The same would apply to someone who does weather modeling, which also uses parametrization to improve predictive skill.
Ron – Let me try one more thought on you. The currently described CCSM4 model has been parametrized more accurately than CCSM3 according to the modelers, and confirmed by various tests against observations of unforced climate. However, the CCSM3 hindcast matches the observed centennial temperature trend more closely than CCSM4. Isn’t it obvious that the parametrization and the tuning of CCSM4 were not done to make it match the hindcast? What you should ask yourself is “what is the purpose of parametrization” and “how is it tested”? The answers may help you understand why parametrization is useful, why it can’t be done simply to make a model “come out right”, and why model mismatches with reality may not necessarily have anything to do with good or bad parameters. I’ve mentioned elsewhere in this thread the evidence suggesting that the CCSM4 mismatch might at least partly reflect neglect of recently reported changes in aerosol forcing that have nothing to do with model tuning or parametrization.
Fred,
You might also be interested in reading this paper by the Pilkeys.
http://onlinelibrary.wiley.com/doi/10.1111/j.1540-6210.2008.00883_2.x/full
I was already familiar with this paper.
Fred, saying you are familiar with the paper does not give me much to go on. The pretense the modelers know all of the forcings or are able to simulate them accurately is arrogance or a conscious intent to deceive.
Have you seen this paper? http://www.aanda.org/index.php?option=com_article&access=doi&doi=10.1051/0004-6361/201016173&Itemid=129
Fred Moolten
You asked about “climate model parameterization”.
Here is an example I can think of.
As you know, IPCC acknowledged in AR4 that cloud feedbacks remain the largest source of uncertainty in its climate sensitivity estimates.
Since we know that the net natural effect of clouds is an order of magnitude greater than that of anthropogenic greenhouse gases, it is clear that this “largest source of uncertainty” could have a major impact on IPCC’s model-based climate sensitivity.
A study by Wyant et al. entitled “Climate sensitivity and cloud response of a GCM with a superparameterization” addresses this “large source of uncertainty” in the IPCC models.
ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf
The authors go on to describe potential alternatives to conventional GCMs, which allow a much finer resolution of the atmosphere and permit “explicit simulation of smaller-scale vertical convective motions associated with clouds”, including a “recent approach, called superparameterization or Multi-Scale Modeling Framework (MMF)”, which “uses a two or three-dimensional CRM embedded in each column of a GCM to simulate small-scale convective circulations and associated clouds”.
On the above basis, using superparameterization for clouds, the study finds a result corresponding to a net cloud feedback of –0.86 W/m^2 °K, as compared to the IPCC model-derived average of +0.69 W/m^2 °K.
The strongly negative net global cloud feedback derived above was subsequently confirmed over the tropics by the physical CERES satellite observations of Spencer & Braswell 2007.
Maybe someone else, who is more familiar with GCMs than you or I, would like to comment on the benefits of superparameterization for simulating the behavior of clouds.
Max
I have no comment on super-p, but the claim that clouds are the largest uncertainty is a good example of science confined by AGW. Large scale natural variability is a far larger uncertainty.
David
I think we come back to understanding “WHY” IPCC has had this myopic fixation on anthropogenic climate change while ignoring naturally forced climate change.
This has been their charter from the start:
http://www.ipcc.ch/pdf/ipcc-principles/ipcc-principles.pdf
It’s all about “human-induced climate change” (plus impacts, adaptation and mitigation from these).
The basic problem is that if one does not understand natural climate forcing and variability, one cannot understand “human-induced climate change”.
A real dilemma for the IPCC (who concedes that its “level of scientific understanding” of “natural forcing” (including solar) is “low”).
Max
Max – You have cited this paper previously. It is an interesting study, and appropriately cautious about its conclusions. However, it addresses cloud forcing but not cloud feedback. Almost all GCMs estimate cloud feedback as positive, but many estimate cloud forcing as negative.
Regarding the more general issue of cloud feedback as a component of climate sensitivity, there are dozens of papers on this subject, and a rational analysis would attempt to review and synthesize the results of all of them rather than selecting one for its particular set of conclusions.
Fred Moolten
You make a good point on clouds, i.e. there have been many studies, even if these have not been conclusive.
The Wyant et al. paper does confirm that with super-parameterization of clouds in GCMs, their behavior can be simulated better than with the more primitive parameterizations of the GCMs cited by IPCC.
These studies show a net cloud feedback of -0.86 W/m^2 °K, rather than +0.69 W/m^2 °K as estimated by IPCC, so the impact on the 2xCO2 climate sensitivity is enormous.
The fact that these more meaningful GCM simulations were subsequently confirmed by actual physical observations (Spencer and Braswell) gives even higher credence to the general conclusion of net negative cloud feedback with warming, thereby helping to clear up IPCC’s “largest source of uncertainty” in its earlier AR4 report, and raising serious questions regarding the 2xCO2 climate sensitivity range estimated by the model simulations cited by IPCC.
Of course, these are just two more pieces in the jigsaw puzzle as you say.
Even more interesting IMO is the concept (unfortunately ignored by IPCC up until now) of clouds as a natural forcing factor in themselves (rather than simply a feedback to anthropogenic forcing). Spencer has written on this, and it appears that this may become the new direction of inquiry.
Suffice it to say, the impact of clouds on a warming climate is not well understood, although it is clear to one and all that this impact can be enormous, particularly in changing our planet’s net overall albedo and hence the amount of solar warming that will actually warm our planet.
Max
Fred Moolten
Just a small comment to your post.
You wrote of the Wyant et al. paper that it addresses cloud forcing but not cloud feedback.
This is not quite correct.
Wyant et al. addresses the enhanced “change in radiative forcing” from clouds resulting from an increase in temperature of 2K, in other words the added forcing one would expect as a feedback if temperature increases by 2K, from some primary forcing (solar, GHG or whatever).
The reported overall value was -1.77 W/m^2 for 2K dT.
Just to clear this point up.
Max
Cloud forcing and its changes are not the same as cloud feedback.
Fred Moolten
You opined that cloud forcing and its changes are not the same as cloud feedback.
Wyant et al. tell us that models using super-parameterization to estimate the behavior of clouds better than the AR4 models show that as temperature rises by 2K (for whatever reason) the net overall feedback from clouds will be negative:
The net cloud feedback is -1.77 W/m^2 divided by 2K, or -0.89 W/m^2 K, while AR4 models without super-parameterization estimated this at +0.69 W/m^2 K.
AR4 estimates, based on this strongly positive cloud feedback, that the 2xCO2 CS = 3.2C (of which 1.3C can be attributed to cloud feedback).
If we correct the AR4 estimate for the negative cloud feedback found by super-parameterization, we end up with a much lower 2xCO2 CS, close to that estimated by Spencer and Lindzen.
A follow-up report by Bretherton (a co-author of the Wyant et al. study) confirms this finding:
http://www.usclivar.org/Newsletter/VariationsV4N1/BrethertonCPT.pdf
Now, you can argue that Wyant et al. got it all wrong, that this is only one study out of many, or any other rationalization you choose, but it does not change the fact that Wyant et al. found a net negative cloud feedback with warming.
Seems pretty straightforward to me, Fred.
Max
Max – A number of authors have confused cloud forcing changes with cloud feedbacks, but they are not the same. As I stated, the peer-reviewed published article by Wyant addressed cloud forcings but not feedbacks. In fact, a number of models estimate forcings as negative, but almost all estimate cloud feedbacks as positive. Whether Wyant or coauthors have elsewhere been confused is speculative, but in the published paper, only forcings were addressed.
For more on the difference, including mechanisms that lead to negative forcing changes but positive feedbacks, see Cloud Forcings and Feedbacks.
The paper of Wyant et al. never mentions cloud feedback as a concept, but always discusses specifically cloud forcing, and uses the word ‘feedback’ in a more generic way at two locations. The change in cloud forcing is, however, a feedback, as it is a reaction to the higher SST. The numbers that they give for cloud forcing cannot be directly compared with numbers given by other groups for cloud feedback, as these related concepts are not identical.
What is true in any case is that the model of Wyant et al makes an attempt to model cloud feedbacks better than earlier models and that the model results in a lower climate sensitivity than most of them, which is equivalent to a smaller total feedback. As the changes concern mostly clouds, the cloud feedback must also be lower than in most other models.
The paper of Soden et al on the difference between the two approaches of determining cloud feedbacks tells us that the change in cloud forcing is about 0.3 W/m^2 lower than the feedback expressed as a partial radiative perturbation. The value that Wyant et al give for the net change in cloud forcing is -1.77 W/m^2. Thus it appears certain that the PRP feedback is also significantly negative according to their model.
Correction to the above. Taking into account the 2K change in the SST, the value to which the approx. 0.3 W/m^2 correction should be applied is -0.885 W/m^2, but even that leaves the outcome significantly negative, even if 0.3 happens to be an underestimate of the correction for this particular model.
The correction needed to translate cloud forcing changes into cloud feedback will vary according to model estimates and will differ from model to model. Therefore, calculated delta forcing values can’t be translated into specific feedback values, except that one can say that cloud feedback will generally be more positive (or less negative) than forcing changes. As Pekka indicates, it appears that the negative forcing changes from Wyant et al are large enough to signify a negative feedback, but this comes from their particularly large negative SW forcing from low cloud cover, which the authors acknowledge is a source of significant uncertainty. I think their paper is interesting – my comments were designed to point out the distinction between delta forcing and cloud feedback. The paper did not confuse the two.
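For readers following the arithmetic in this exchange, the back-of-envelope conversion being discussed, using only the numbers quoted in this thread, is simply:

    # Back-of-envelope conversion of the quoted cloud-forcing change to a rough
    # feedback estimate, using only figures cited in this thread.
    delta_cloud_forcing = -1.77    # W m^-2, Wyant et al., for a +2 K SST perturbation
    delta_t = 2.0                  # K
    soden_correction = 0.3         # W m^-2 K^-1 (approximate offset, per the discussion above)

    forcing_change_per_k = delta_cloud_forcing / delta_t            # about -0.885 W m^-2 K^-1
    rough_feedback = forcing_change_per_k + soden_correction        # about -0.59 W m^-2 K^-1
    print(forcing_change_per_k, rough_feedback)

As both Pekka and Fred note, the 0.3 W/m^2 per K offset is only approximate and model-dependent, so this is a rough indication rather than a proper feedback calculation.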
Fred and Pekka
We have beaten the dog to death, but let me summarize:
Wyant et al show a strong negative cloud feedback, as co-author Bretherton confirms in a later paper.
Whether (as Pekka points out) this is -0.89 W/m^2 K as reported or a somewhat smaller negative number, it remains a strongly negative cloud feedback as compared to the IPCC model estimate without super-parameterization of +0.69 W/m^2 K.
The impact of cloud feedback on the 2xCO2 climate sensitivity as estimated by IPCC was +1.3C.
With the strong negative cloud feedback it is obviously much lower.
With no cloud feedback, IPCC figures the 2xCO2 CS at 1.9C on average.
So with a strong negative cloud feedback it is probably well below 1C on average, as was separately confirmed by the physical observations over the tropics by Spencer and Lindzen.
Seems pretty straightforward to me. Don’t you both agree?
Max
Max – Your statement is incorrect. We can’t state that Wyant et al showed a strong negative feedback. We don’t know what the feedback value is from their forcing calculations since the correction that makes feedback more positive, or less negative, than the forcing changes depends on individual models and may be substantial for large changes in cloud cover. In agreement with you, we can say that their modeled feedback is probably negative but less so than the figures you have quoted, and so its “strength” is a subjective judgment.
Perhaps more importantly, the Wyant values depend on estimates of a substantial increase in low cloud cover as a consequence of climate warming, but the HIRS and ISCCP data on multidecadal low cloud trends don’t show that, but rather fairly stable or slightly declining low clouds and a reduction in low/high cloud ratios (a warming and thus positive feedback phenomenon). These observational data are a better match with the majority of models that estimate cloud feedback as positive.
The ISCCP URL is ISCCP
A later paper by the same authors – J. Adv. Model. Earth Syst., 2009 – re-evaluates some of the earlier conclusions about strong negative forcing changes:
” A cloud resolving model (CRM) is used to investigate the low-cloud increase due to a 2 K SST increase in the SP-CAM superparameterized climate model. Of particular interest is the sensitivity of cloud changes to the CRM resolution (4 km horizontal, 30 vertical levels) in SP-CAM. The CRM is run in column-modeling framework using SP-CAM composite cloud regimes. The experiments are run to steady state using composite advective tendencies, winds, and sea-surface temperature from the control and +2 K climates of SP-CAM. A new weak temperature gradient algorithm based on an idealized form of gravity wave adjustment is used to adjust vertical motion in the column to keep the simulated virtual temperature profile consistent with the corresponding SP-CAM composite profile. Humidity is also slowly relaxed toward the SP-CAM composite above the boundary layer. With SP-CAM grid resolution, the CRM shows +2 K low cloud increases similar to SP-CAM. With fine grid resolution, the CRM-simulated low cloud fraction and its increase in a warmer climate are much smaller. Hence, the negative low cloud feedbacks in SP-CAM may be exaggerated by under-resolution of trade cumulus boundary layers.”
In this summary, the word “feedbacks” is used, apparently without acknowledging the distinction between delta forcing and feedbacks described by Soden et al, but since the two are correlated even if not equal, the more important point is the unreliability or at least uncertainty of the earlier estimates. In fairness to the earlier Wyant et al paper, the relevant uncertainties were fully acknowledged at the end of the text.
No, every parameterization is there because it would be a bit difficult to simulate the earth by modeling the interaction of every quark, proton, neutron, atom, molecule, crystal, etc.
Paul,
Not every quark or proton impacts the climate system. If you cannot simulate the forcings that do impact the climate system, you shouldn’t pretend a simulation with parameterizations is doing something it is not.
Parameterizations can be good and allow for highly predictive models or they can be seriously inaccurate and make an otherwise accurate model produce erroneous results.
Often the parameterizations represent a small scale process, whose properties are well known based on process specific empirical and theoretical studies, which might take advantage of more detailed models. In other cases such knowledge is missing. Then the parameters may be adjusted based on fitting the full model to a set of historical data or to some other requirements set for the results.
Using parameterizations is not automatically a proof of unreliability or inaccuracy of the full model as a tool for projections for the future. The question is rather: how well we know the validity of the parameterizations, and how well we understand their influence on the behavior of the full model.
As far as I have understood, everybody agrees that the parameterizations needed in modeling clouds and aerosols are deficient and understood less well than they should be. This is one major source of uncertainty, but these are certainly not the only parameterizations not backed by accurate knowledge of the underlying processes.
Parameterization works well in models where you are trying to optimize the interaction between variables, such as maximizing sales revenue. While it may appear that these models are predicting future sales, they are doing no such thing. They are simply uncovering patterns in the historical data, and assuming that history will repeat itself going forward.
The problem with parameterization is that it gives the illusion that the model is able to predict the future. The more parameters you add, the better fit you are able to achieve when hindcasting. (A polynomial of degree n-1 exactly fits n points.) The better the fit, the more you are tempted to assume that this will continue into the future.
Nothing could be further from the truth. An overfit model rarely performs well going forward, because it is modelling the noise and uncertainty.
Any process that can be modelled going forward generally has a relatively simple pattern. By reducing the parameterization of the model you hope to filter out the noise and uncover the pattern.
For example, there is a fairly obvious increasing linear trend in the past 150 years of temperature data, with a 60 year cycle laid over top. A simple model with a few parameters is all that is required to discover this pattern and predict future climate on this basis. If the past pattern repeats, we should have 30 years of little or no warming starting around 2000-2005.
In contrast the mainstream climate models missed this in their forecasts. One possible explanation is that they have too many parameters and have overfit their hindcasts. As a result they missed the underlying pattern in the data and are projecting the noise and uncertainty going forward.
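Ferd’s point about overfitting is easy to demonstrate with synthetic data: a high-order polynomial fits the calibration period beautifully and then extrapolates nonsense, while a simple trend does not. A purely illustrative Python sketch (synthetic data, not real temperatures):

    # Overfitting demonstration on synthetic "temperature" data.
    # (Purely illustrative; not real observations.)
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1880, 2001)
    x = years - 1940.0                       # centre the predictor for better conditioning
    truth = 0.007 * (years - 1880) + 0.1 * np.sin(2 * np.pi * (years - 1880) / 60.0)
    obs = truth + rng.normal(0.0, 0.1, years.size)

    overfit = np.polyfit(x, obs, deg=10)     # many "parameters": fits the noise as well
    simple = np.polyfit(x, obs, deg=1)       # trend only

    x_2030 = 2030 - 1940.0
    print("degree-10 fit extrapolated to 2030:", np.polyval(overfit, x_2030))
    print("linear trend extrapolated to 2030: ", np.polyval(simple, x_2030))

Whether the real climate system is as simple as a trend plus a 60-year cycle is, of course, exactly what is in dispute in this thread; the sketch only illustrates the statistical point about parameter count and extrapolation.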
Ferd,
You discuss one type of parameterization. There are, however, others that are very relevant to climate models. One well-known example is turbulent flow near surfaces in aerodynamic calculations. Modeling turbulent flows in detail is not possible, as the turbulence is chaotic, but it is possible to use parameterizations that describe with some accuracy the statistical parameters that tell how the turbulence influences the flow further from the surface. Often a simple parameterization, the k-epsilon model, is good enough, while alternative parameterizations are used when better accuracy is required and the k-epsilon model fails to describe what is needed. The parameterizations are based on extensive earlier studies of turbulence, but they are still just parameterizations, not physical models based on fundamental equations.
Similar uses of parameterizations are very common in engineering and they are often highly predictive.
What little I know about atmospheric modeling tells me that this same approach is applied in atmospheric models as well. Many subprocesses have been studied extensively, and parameterizations have been developed based on that. The models are still deficient, but they are not like parameterizations in technical analysis of the stock market or some other applications, where your message is directly applicable. Your message is certainly also applicable to extrapolations of future temperature from historical data without the help of models of the Earth system.
The AOGCMs may be good or bad, but they are a major tool in learning about the Earth system. They provide a platform for integrating knowledge from different sources. It is difficult to imagine what would provide a better tool, but this observation doesn’t tell us how good they are right now.
Or pictorially: http://bit.ly/cO94in
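Returning to Pekka’s turbulence example: the heart of the k-epsilon closure he mentions is that turbulence is never simulated eddy by eddy; instead an effective (“eddy”) viscosity is computed from two bulk quantities, the turbulent kinetic energy k and its dissipation rate epsilon. A minimal sketch of that standard formula in Python:

    # Eddy viscosity of the standard k-epsilon closure: nu_t = C_mu * k^2 / epsilon.
    C_MU = 0.09    # standard empirical constant of the k-epsilon model

    def eddy_viscosity(k, epsilon):
        """Turbulent (eddy) viscosity in m^2 s^-1 from k (m^2 s^-2) and epsilon (m^2 s^-3)."""
        return C_MU * k ** 2 / epsilon

    # e.g. k = 0.5 m^2 s^-2 and epsilon = 0.1 m^2 s^-3 give nu_t = 0.225 m^2 s^-1
    print(eddy_viscosity(0.5, 0.1))

The constant C_mu is empirical, which is Pekka’s point: the closure is a parameterization grounded in extensive turbulence studies, not a first-principles simulation.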
Pekka,
Ferd Berple is right. Parameterizations only give the appearance of improving forecasting. Parameterizations, even superparameterizations, are crude attempts to hold the simulations in check so they do not embarrass the modelers. Somewhat like taking small children to the bowling alley and asking them to put the bumpers up so your kid’s ball doesn’t go into the next lane. Without parameterizations, the models would boil away the oceans or create sea ice the size of Australia in the tropics.
People talk about clouds as if they are the only thing GCMs cannot model. Not true. Clouds are hugely important but there are probably 20 other aspects of the climate the models do not model well. The Sun, galactic cosmic rays, oceanic oscillations (PDO, ENSO), oscillating polar warming (when the arctic is warming, Antarctica is cooling and vice versa) and under ocean volcanic eruptions are some examples of things which are poorly modeled in GCMs. It is a pet peeve of mine to hear scientists say “We know the basic physics” or “We know all of the forcings.” What arrogance!
Pekka is also right in that simple statistical models typically outperform simulations in chaotic situations.
Their purpose is not to predict the future, but to allow us to optimize our designs. Be it plane wings or boat hulls, or sales figures.
The reason is inherent in the nature of the future. A great many different futures are possible. Some are perhaps more likely than others, but that doesn’t mean we will arrive at the most likely future.
Climate science seems to have ignored this in its models. A 0.4C temperature rise after 2000 may have been completely within the range of possible futures, so the model has not made a mistake in predicting it. We simply ended up in a different future than the one the model simulated.
Ferd Berple writes:
“Pekka is also right in that simple statistical models typically outperform simulations in chaotic situations.”
Yes, I agree with this also.
For somebody with all the answers, you seem to have a lot of questions.
There are many different types of parameterizations. Nothing wrong with that; we do it all the time to make useful models.
But I’ll look up some stuff for you
Another +0.4◦C? Mother Nature must be sick, for she’s getting really cold. Get her another blanket, Steven Mosher; it’s just a little code here, an adjustment there.
The fallacy is: when are the errors EVER going to be on the downside? Where is the symmetry in randomness? That very fact is why this is not even close to real science, I know; simple and crystal clear to most.
So really we play the eigenvectors game and square 0 on 0, and then we come up with .5 over the true temp? I just know math, and that is a fail for the paper!!!
If the model has not taken account of convection of air, it’s another garbage model.
Climate models (and the numerical weather prediction models upon which the atmospheric code is based) have taken account of convection since about 1960.
Dr. Curry,
So where are the likely areas of weakness from which the lack of useful prediction is coming?
How can you be sure of the magnitude and direction of wind speed at every grid point on the surface of the earth?
“The fallacy is: when are the errors EVER going to be on the downside? ”
Ever gone to the store where they have scanners instead of prices marked on the package? Ever noticed which way the scanner errors fall? I’ve found plenty of scanner errors, but to this date I don’t recall a single one that priced the product lower than it was marked on the shelf.
Paul Baer
You ask what I mean by an “actionable proposal”.
It must be a proposal to implement a specific action with a specified timeline, e.g. “shut down all US coal-fired power plants by 2030, replacing them with renewable sources where feasible and nuclear plants where not feasible”, “install no new coal-fired plants after 2015, replacing them to cover new demand as above”, or the “carbon capture and storage schemes” cited on an earlier thread here.
The key is that each specific proposal can be directly linked to an action plan, i.e. “who needs to do what by when?” and a specific cost/benefit analysis, i.e. “how many degrees C global warming can we avert through this plan and what will it cost to implement?”
For example, the first proposal above was made by James E. Hansen et al. (without any cost/benefit analysis). An estimate shows that replacement with nuclear plants would involve a new investment of around $1.5 trillion. The action would result in a theoretical reduction of global temperature (from reduced CO2 content) of around 0.08C. The slightly lower running fuel cost (uranium versus coal) would be essentially offset by higher costs for spent fuel handling/disposal and capital-related costs.
One “actionable proposal” that has been made is a (direct or indirect) carbon tax, but (as we all know) this will have no impact on our planet’s climate (no tax ever did).
What are NOT “actionable proposals” are the political pledges to “reduce CO2 emissions by year X to Y% below those of past year Z” or (even sillier) to “hold global warming to no more than 2C”. These are mere political posturing, as there is no action tied to the pledges, and hence they are empty.
Hope this clears up what I meant by an “actionable proposal”. The concept is probably something that comes out of my background as an engineer and businessman.
Max
Max
I didn’t realize you were a Big Government type.
Your definition of actionable proposal requires the politburo style of central planning committee command and control legislation and government ownership and subsidy that got us into this mess in the first place, no?
Shouldn’t the cost/benefit analyses be internalized to the individual in the Market so much as possible?
Rather than, “shut down coal-burning energy plants because of CO2E, particulate and land use externalities as determined by a panel of experts,” isn’t it better to charge the coal-burning energy plant owner what the market will bear for CO2E, charge what the market will bear for particulate emissions, pay the common shareholders in CO2E and particulate budgets (all of us, per capita) all of that revenue, charge a fair, not state-controlled price for the land used in coal (highly subsidized in coal-miners’ favor) and let the freakishly high resulting price of coal-based energy drive the distorted levels of coal out of the Market?
Bart–your proposals are generally laughable. Though you try to perfume the concept, all you advocate is a government tax on energy that releases CO2. It has already been pointed out to you that this would have a minimal impact on US CO2 emissions, but you continue to suggest the idea. It would also have ZERO impact on CO2 emissions in the rest of the world. The US is already reducing its per capita output. It is the rest of the world that will be increasing emissions, and your tax scheme will do nothing about that. You want to tax CO2 with almost no evidence that it is harming US taxpayers.
Rob Starkey
We appear to disagree.
1.) If there is to be any “actionable proposal” it will fail if it does not impact individuals directly at the decision level.
We’ve seen this in the case of reduced emission policies for automobiles in the USA in the 1990’s.
The auto manufacturers found a loophole, promoted it through merchandising campaigns to buyers, exploited tax subsidies shamelessly, and lobbied to have the ineffective “actionable proposal” scrapped as it didn’t work.
Why didn’t it work?
Because the auto industry destroyed it by subverting individual buying decisions.
You are hopelessly naive if you believe you can have any impact without reaching directly to the individual level and having individual impacts — both benefits and harms — reflecting the actual, real impacts — both benefits and harms — of the individuals’ decisions.
Or you think we are that naive.
All I advocate is taking the decision out of the hands of the government — which is inept at such decisions — and putting it into the hands of individual Americans.
That you don’t trust individual Americans tells us all we need to know.
Bart–you are absolutely not proposing to take it out of the hands of the government. Who would determine the amount of the tax to levy??? Answer–the various world governments! Government policies did make automobiles both more efficient and cleaner. You continue to make silly statements referencing “democracy” and not wanting “big government”, but these are not the issues in play to support what you are advocating. In order to force lower CO2 emissions, many governments would have to force actions. That is a simple fact.
Rob Starkey
Who would determine the amount of revenue to collect?
Who would fix the price of CO2E and particulates?
How?
The government would set this, of course, as trustee of its citizens, just like any CEO.
It would use the same formula as any CEO is expected to, in a publicly traded company: the price should be set to maximize revenues to owners.
That is, at the point where the Supply and Demand curves meet, Volume Sold times Price per Volume is at its highest.
Which the Market decides.
No number of governments enforcing any level of command and control regulations will have anything like meaningful impact, until government policy and individual decisions align.
You have this Big Daddy view of governments, this Paternalist approach of someone who only thinks in terms of tyrannies and nanny states, Rob.
It won’t fly.
It can’t work.
Individuals would resist this approach of yours, and it would fail.
As, apparently, you want it to.
2.) This will have minimal impact on CO2?
Really?
It’s been “pointed out” to me, as in asserted, but the evidence one way or the other is impossible to prejudge.
This would be a decision of the Market, not of committees or governments, panels or corporations.
Perhaps the Market, as the realization of the democratic buying decisions of all individual Americans, won’t crush coal and snuff out petroleum, reduce overhyped and oversubsidized ethanol to fumes, and foster a decently healthy economy of conservation, efficiency, non-energy-intensive goods and activities, and stable nuclear and ‘alternative’ energy generation options. But at least the Market will be restored to fairness, the pricing distortions of the current state of affairs will be removed, and I won’t feel like I’m being robbed by free riders.
Or perhaps it will deliver on the CO2E goals of people who want a lower CO2 level, and no further “actionable plan” of the Big Government sort will be needed, and we can all pay attention again to things that really matter, like baseball season.
3) It will have zero impact on the rest of the world?
Invent a better mousetrap, and the world will beat a path to your door.
The rest of the world follows American tastes in many things. It’s inconceivable that if the USA undertook a successful shift to a fair and balanced delivery system for CO2E revenues to citizens per capita, the entire rest of the world would roll over and ignore it.
And if they do, so what?
I still wouldn’t be robbed of my money.
4) Almost no evidence..
If I own a store, and I see someone walk in skinny with a bulky coat and walk out fat.. I have the right to check my inventory and suspect a shoplifter.
If I tighten up my security system so shoplifters can’t just walk away with my inventory — in this case the limited CO2 budget, or limited particulate budget either — then I see no legitimate cause for the shoplifter to complain that I don’t have enough evidence.
Bart writes:
“This would be a decision of the Market, not of committees or governments, panels or corporations.”
My response—that is a silly, unrealistic concept. If the electric company charges higher prices due to adding a CO2 tax, who do you believe will buy from them vs. an alternative company??? My answer is almost nobody. The government would be the only potential source of the proposed tax. Otherwise it is simply additional profit for the seller.
You have not suggested a better mousetrap or a better anything. All you have suggested is a magically appearing tax. Your concept has absolutely ZERO merit in the real world. If the US was to implement a CO2 tax, what it would do is make US goods more expensive as compared to goods produced in other countries, and make it easier for those countries to compete against the US in the world marketplace. A CO2 tax in the US would not reduce US CO2 emissions by an amount that would be meaningful to the climate.
Rob Starkey
“My response—that is a silly, unrealistic concept.”
Yes, it’s called Capitalism, look it up if you need help with the definition.
Your anti-capitalist views and rhetoric aside, it seems to work just fine where it is followed, so I’ll stand by it.
“If the electric company charges higher prices due to adding a CO2 tax, who do you believe will buy from them vs. an alternative company??? My answer is almost nobody. The government would be the only potential source of the proposed tax.”
Wait, did you think I wasn’t proposing that the government govern for a change?
What do we pay them for, if not this?
The government governs standards of weights and measures, broadcasting, telephony and telegraphy standards, and all sorts of other parts of the Market. Why not administer the already known and established tables of CO2E emissions by fossil source that we’ve already paid for to identify what is using up our CO2E budget, use the sales tax infrastructure that we’ve already paid for to collect our money from those who use up our CO2E budget, and use the income tax infrastructure to deliver our money back to us per capita?
Sweet justice, for a minarchist to see the government recycle and reuse these mechanisms of raiding the pockets of private citizens to, for a change, pay out.
“… Your concept has absolutely ZERO merit in the real world.”
Except the real world has some two dozen jurisdictions with some measures in some ways similar, and one that has made progress some 10% of the way to meaningful CO2E prices on almost exactly this scheme.
Your claims about the real world appear to show relatively little familiarity with that real world.
“If the US was to implement a CO2 tax, what it would do is make US goods more expensive as compared to goods produced in other countries, and make it easier for those countries to compete against the US in the world marketplace.”
Oh, puhleeze.
You’re saying, “Hold every US citizen hostage to $1,000.00+ a year blackmail so someone in some other country can enjoy $5.00 a year cheaper US-made goods in some industry the US isn’t naturally competitive in, but only remains active because we subsidize it and let it dump its uncompetitive, subsidized goods into world markets, distorting prices and enriching corporate communists at home and abroad.”
How am I to take this argument, which amounts to extorting money from my pocket so someone can keep doing a lax and lazy job inefficiently?
“A CO2 tax in the US would not reduce US CO2 emissions by an amount that would be meaningful to the climate.”
If it doesn’t — and you haven’t shown it would not — then why not cross that bridge after we’ve come to it?
If it does or if it doesn’t, then I and everyone else still get our money, and our democratic choice of how to spend it undistorted by subsidy and bad governance.
What is it about Capitalism and Democracy that you don’t like?
Bart –
What is it about Capitalism and Democracy that you don’t like?
That’s the question you need to answer.
But first you need to learn what those words mean cause you’ve got them wrong.
Jim Owen
Enlighten us.
Where do I go wrong?
Jim,
Anyone who opposes a tax on energy that is completely divorced from any limits, wholly unrelated to climate issues, and is enacted for the sole purpose of redistributing wealth, is a communist. ‘Cause Bart says so.
In the US, we will certainly soon have higher taxes combined with a reduction in services. This is 100% unavoidable based on our current budget situation. I would just hope that we implement these measures based upon sound principles and not silly hopes.
GaryM
“Anyone who opposes a tax on energy that is completely divorced from any limits, wholly unrelated to climate issues, and is enacted for the sole purpose of redistributing wealth, is a communist. ‘Cause Bart says so.”
I see you’ve run out of objections and so, kimlike, turn to parody and babble.
You’re #WINNING. Big time.
You should launch a Torpedo of Truth Tour.
Bart
Your concepts/messages have absolutely ZERO merit, and I see no value in replying to you any longer.
Bart R
You apparently have missed my point.
An “actionable proposal” can be as large or small as desired.
Changing out light bulbs is an excellent example.
This is actionable at the individual level. Its net cost/benefit can be quickly calculated (in the climate debate the “benefit” is the global temperature increase averted by the resulting reduction in atmospheric CO2 concentration).
GE/WalMart are working on a program to replace 1.5 billion lightbulbs in the USA with compact fluorescent lightbulbs (CFL).
http://www.fastcompany.com/magazine/108/open_lightbulbs.html?page=0%2C4
The power saved is estimated to be 80%. At an average power saved per bulb of 60 watts and an average on-time of 3 hours/day, this equals 270 GWh per day, or 98,600 GWh per year, of energy saved.
Assuming this energy comes from a coal-fired plant, these generate on average 1 metric ton CO2 per 1,000 kWh generated, so we are talking about a CO2 reduction of 0.099 GtCO2 per year. If these 1.5 billion CFLs were installed by 2015 this means we would have had a cumulative reduction of around 8.4 GtCO2 by year 2100. The premium cost of replacing 1.5 billion light bulbs with CFLs is around $1.20 per bulb, so we have a total investment cost of $1.8 billion.
The 8.4 GtCO2 not emitted would be equal to an atmospheric reduction of around 1 ppmv by year 2100. Using the 2xCO2 climate sensitivity of 3°C, this would result in a calculated reduction of warming of 0.008°C.
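For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the figures above from the stated assumptions; the only number I have added is an assumed business-as-usual CO2 concentration of roughly 600 ppmv in 2100, which is needed for the final logarithmic step.

```python
import math

# Stated assumptions from the comment above
bulbs         = 1.5e9   # bulbs replaced
watts_saved   = 60.0    # W saved per bulb
hours_per_day = 3.0     # average on-time per day

kwh_per_year = bulbs * (watts_saved / 1000.0) * hours_per_day * 365   # ~9.86e10 kWh

gt_co2_per_year = kwh_per_year * (1.0 / 1000.0) / 1e9   # 1 t CO2 per 1000 kWh (coal)
gt_co2_by_2100  = gt_co2_per_year * (2100 - 2015)       # cumulative, installed by 2015

ppmv_avoided = gt_co2_by_2100 / 7.8                     # ~7.8 GtCO2 per ppmv of CO2

baseline_2100 = 600.0   # ppmv, assumed 2100 concentration (my assumption, not the comment's)
sensitivity   = 3.0     # deg C per doubling of CO2
dT_avoided = sensitivity * math.log(baseline_2100 / (baseline_2100 - ppmv_avoided)) / math.log(2.0)

print(round(gt_co2_per_year, 3), round(gt_co2_by_2100, 1), round(dT_avoided, 3))
# -> 0.099 8.4 0.008
```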
Replacing a gas guzzling SUV with a more fuel-efficient hybrid car is also directly actionable at an individual level. Again its cost/benefit can be calculated.
The Hansen et al. proposal to shut down all coal-fired power plants in the USA by 2030, replacing them with non-fossil fuel plants, is also a directly actionable proposal, although it can no longer be implemented at an individual level. A closer look at this proposal shows that it would involve a capital cost of around $1.5 trillion and result in a reduction of global warming by year 2100 of 0.08°C.
On an earlier thread here, Rutt Bridges commented on a carbon capture and sequestration (CCS) proposal, which he concluded did not provide a cost effective solution. The total cost of this proposal was apparently equal to $33 per metric ton of CO2 captured and stored, and the proposal was to keep 25 GtCO2 from entering our atmosphere over a 50-year period. Again, such a proposal, while actionable, cannot be implemented at individual level. A quick calculation shows that it would result in a reduction of global warming by 2100 of around 0.03°C at an estimated total cost of $833 billion.
These two specific large-scale actionable proposals show that between 0.036 and 0.053°C of global warming can be averted by investing $1 trillion. Whether this cost/benefit relationship is valid for other actionable proposals, I cannot say, since I have not seen any such proposals. (The GE/WalMart US lightbulb proposal is limited in scope, but shows a better theoretical cost/benefit result than either the Hansen or CCS proposals.)
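A quick sketch of those cost/benefit ratios, taking the dollar and temperature figures exactly as quoted above (they are the comment's numbers, not independent estimates):

```python
# Degrees C of warming averted per $1 trillion, using the figures quoted above
proposals = {
    "Hansen coal phase-out": (1500.0, 0.08),   # ($ billion, deg C averted by 2100)
    "CCS proposal":          (833.0,  0.03),
    "GE/WalMart CFL swap":   (1.8,    0.008),
}
for name, (cost_billion, averted) in proposals.items():
    print(f"{name}: {averted / (cost_billion / 1000.0):.3f} deg C per $1 trillion")
# Hansen ~0.053, CCS ~0.036, CFLs ~4.4 (though the CFL program tops out at 0.008 deg C total)
```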
Carbon taxes (direct or indirect) will not result in any change of our planet’s climate (no tax ever did), so we can forget about these as a solution to our projected future climate problem.
All the plans of “charging the coal-burning energy plant owners for CO2 emissions” will achieve nothing but higher energy costs, which will be passed on to the consumers. If the “tax revenues” are re-distributed to the consumers, they will recover a portion of the added cost (except for the administrative costs). If (a more likely scenario) the tax revenues are used by the government to fund pet projects or plug budget holes, the consumer loses.
Yet no single ton of CO2 emission is avoided and our climate is not changed one iota.
What is needed are specific actionable proposals to actually reduce CO2 emissions, which can be evaluated one-by-one on a cost/benefit basis and then implemented, if they pass this evaluation.
Any suggestions?
Max
manaker writes “GE/WalMart are working on a program to replace 1.5 billion lightbulbs in the USA with compact fluorescent lightbulbs ”
Let me just point out, once again, that the amount of energy saved by using CFLs is a function of latitude. The higher the latitude, the less the savings. At higher latitudes, the energy “wasted” by incandescent bulbs merely goes to reduce heating bills. At lower latitudes there could be a double whammy. The energy wasted by incandescent bulbs may have to be removed by air conditioning.
Max
I appreciate your considerate reply.
A reminder of the precept I begin with: higher CO2 in the common shared atmosphere, regardless of +/-AGW, increases common shared Risk to all of us.
A mixed approach of the lowest net cost solutions to the problem of Risk from all of your actionable proposals and all of my actionable proposals and all of everyone else’s actionable proposals would be ideal. But the world is not simple.
Actionable proposals have come and gone in the past, outright failures because of design errors.
Look at the compact fluorescent lightbulb example.
It seems like a great deal; however, CFL bulbs contain mercury and are, let’s face it, capital intensive and in some ways inferior. You can’t dim them, can’t safely dispose of them domestically, and they take a while to warm up to full brightness. At the local level, these objections have derailed many CFL-replacement schemes. There are new LED and nanotech bulb designs coming onstream at a fraction of the price of this switch.
Suppose a CO2E surcharge (all revenues returned to all of us, per capita) were applied to all lightbulbs at time of sale, on the argument that they’re going to increase CO2E emission to the end of their lives, and the average life of a bulb is known. This internalizes the choice of whether to emit the CO2E that is now thoughtlessly done every time a bulb is changed, and allows the purchaser to decide the true weight of their objections as represented by the price they pay. For an even better example, you could apply a mercury deposit fee too, returned when the bulb is disposed of safely.
We’re talking about the same solution — replacing conventional lightbulbs with lower CO2E bulbs — but with a CO2E tax returned to all buyers, GE and Walmart don’t need to take special measures: their customers will make this choice without objection as it aligns with their individual interests.
The same with SUVs, which in the 1990s were exempt from the low fuel economy surcharge on automobiles in the USA because they were classed as business utility vehicles, not cars. When the auto industry, principally Ford, recognized this and shifted the market to SUVs to take advantage of the loophole (and the Humvee loophole), by telling young men their SUV made them more manly and telling young mothers their SUV protected their family (obvious lies, but manhood and motherhood aren’t particularly rational states), all thought of fuel economy was gone. The government program failed in the USA because it did not align with individual decisions.
A CO2E tax with revenue returned to all of us would not have this problem. You see someone selling something without that tax, you know that someone is stealing revenue from your pocket. You buy from them, sure, but then you turn them in so you get your money.
Your carbon tax description sounds like the Australian example, which of course is not my plan. I share your objections to that example. It’s too low, and not proposed to rise to effective levels. Compare this with the British Columbia case, which appears designed to raise regularly, and which delivers all revenues back to individuals per capita.
While the British Columbia scheme isn’t mine, either, it’s closer. While it is only 10% of the level one expects to alter CO2E behavior, it could get there. And in British Columbia, it is a popular program, defended successfully through a popular election, and apparently causing no harm or havoc to what many experts consider the healthiest economy on the planet.
I lament that you Australians don’t have this option in front of you, but face an inferior good. On the other hand, think of those in the USA who don’t get the benefit of this approach at all, but must look north of the border and see neighbors with an extra $100 a year in their pockets that citizens of the USA do not get.. and know that it could in a few short years be $1,000 that people in the USA are being denied every year.
And you think this won’t affect CO2E decisions?
Bart R
OK. I think I see where our basic disagreement lies.
Your idea of a top-down enforced CO2E surcharge on all items that require energy seems overly complicated and costly to implement in practice.
Rather than giving CFL buyers a CO2E rebate to encourage them to substitute their conventional incandescent bulbs with CFLs, why not simply make the CFL available at a competitive price, emphasizing the lifetime cost benefit? The CO2E tax/rebate solution simply allows the CFL producers/sellers (i.e. GE/WalMart) to put the equivalent amount in their pockets as added profit. [So I can see why both GE and WalMart would support your CO2E tax/rebate scheme, or some equivalent.]
The same goes for switching from a gas-guzzling SUV to a hybrid. It must be inherently attractive to do so in the eyes of the buyer on an overall cost/benefit basis. Enforcing a top-down CO2E tax/rebate scheme to force the market into one (at that particular time) desired direction seems like a costly intrusion into individual freedom. Face it. Oil prices will rise as reserves become more limited and more difficult reservoirs must be exploited. Automobile manufacturers have already seen the benefit of offering cars with better gasoline mileage, including hybrids. These are gradually becoming more popular and this trend will continue naturally, without a bureaucratic top-down tax/rebate scheme.
Why not simply make hybrids so inexpensive and fuel-efficient that many buyers naturally choose them?
The CO2E tax/rebate scheme simply allows hybrid manufacturers to pocket a bit more profit, equivalent to the CO2E rebate given to the purchaser. [If I were Toyota, I’d love your plan.]
As you can see, I am principally against any sort of carbon tax scheme.
I believe these will all end up being costly, bureaucratic nightmares.
And I am 100% sure that they will not result in any perceptible change in our planet’s climate – no tax ever did.
Max
Bart R
In addition to our difference of opinion in principle on carbon taxes, I do not share your stated precept that higher atmospheric CO2 increases a commonly shared risk.
So our differences appear to be pretty basic.
Max
Max
I’m open to hear out a sincere argument that disproves the formulation that increased CO2 increases Risk.
Bart R
I’m open to hear out a sincere argument that proves the formulation that increased CO2 increases Risk.
Until such formulation is proven, all discussions of how to tax CO2E (directly or indirectly) are meaningless.
Max
Bart R
Aside from the “burden of proof” that CO2E+ = Risk+, let me ask you a basic question.
You talk of a “CO2E tax with revenue returned to all of us” amounting to $100 per year (?) being paid to everyone.
First question: Where is this money coming from?
Max
“Until such formulation is proven..”
For 10-15 million years (probably), and provably for many hundreds of thousands of years, per multiple ice core and isotope, geological and fossil studies, we know CO2 was 230 +/-50 ppmv and life thrived, with CO2 never reaching or exceeding 310 ppmv as a global average in all this period. (Sure, there’s some dispute, but there is much, much less doubt of this claim than of any of the +/-AGW claims, despite how much parties like the Idsos inflate these disputes. Where a prudent and conservative observer could skeptically reserve judgement on +/-AGW, CO2 level is well into the range of confidence that one cannot reject the above hypothesis.)
We’re at 390 ppmv and rising now. That’s over 80 ppmv higher than it’s been for over 10 million years, and 160 ppmv above the average of the past 10-15 million years. (About 25.6% & 70% higher, respectively.)
The fraction of life that evolved under these conditions is enormous; the rest of all species have adapted to these conditions for an order of magnitude longer than our own species has existed.
We are by any measure in unknown territory, and heading higher. There aren’t even computer models to simulate how life might respond to this change, and it’s generally agreed there can’t be any meaningful computer model to draw valid conclusions about this change. CO2 has hormone-like impacts on different plants differently, and will favor some over others in ways we do not well know.
Further, we know there certainly are ceilings past which CO2 will absolutely have effects negative for some. Many agree AGW is one such vector where there may be such ceilings, but there are other such vectors (eg ocean chemistry).
The problem of proving what the Risk will be is beyond the scope of my argument. For the economic principle to be established, all that is necessary is that there is Risk.
All other elements of Risk fall out of the economic arguments from this point, and are not important to know.
QED
The people who have the closest thing to what I’m proposing get over $100 a year revenue per capita (sort of), on their way to $30/ton.
Multiply by 10, to hit the $300/ton generally agreed to be the minimum price that would be required to affect buying decisions.
You can investigate the British Columbia case for yourself and extrapolate.
My estimate is $1000-$2000 revenue per year per capita.
Only the Market can really say.
Max
We do have some differences in definitions.
I don’t believe that ‘all energy’ has a CO2E component, nor that the CO2E components are equal across all fossil or biofuel sources.
I don’t believe that the ‘top down enforced’ characterization is any more valid for CO2E than for cell phones or apples, textiles or music.
And I’ve been unclear, I see. In the case of CFLs, where I referred to revenue returned to ‘all buyers’ I of course meant all citizens per capita, whether they bought CFLs or not, unrelated to CFL purchase entirely.
One ought to pay for their CO2E at the point any purchase commits them to drawing from the CO2E budget. This is what internalizes the decision to the individual. This is what produces freedom, by removing the distorting impact of hiding the cost of the decision.
Do not confuse free ridership with freedom, nor market efficiency with costliness. A measure that increases the efficiency of the market decreases overall costs. Just because you do not acknowledge that this will result from pricing CO2E does not make it less so.
Also, who’s talking about hybrids? Conventional non-hybrid vehicles like the Cruze Eco can easily achieve 60+ mpg; there are four such makes on the market in America now, I think.
In the 1990’s, auto manufacturers saw the benefit of subverting fuel economy command and control measures because the market was not aligned to reward them for providing their customers with efficient products.
If this misalignment returns, we’ll see another SUV fiasco.
The right measure is to pay the owners of resources what the Law of Supply and Demand deems is our due.
I want my money.
Bart wants ‘his’ money. It’s only ‘fair’. He will let you know what ‘fair’ is.
This is very encouraging for all skeptics who stick with the data. I hope this will be the end of “it is worse than we thought.”
Now the following can hopefully be said openly:
The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998.
http://bit.ly/6qYf9a
Comparison of trends when the above statement was made (RED) and now (GREEN): http://bit.ly/lybgDB
Would the scientific community NOT now come down on anyone in no uncertain terms if one said the world had cooled from 1998?
Where is this discrepancy of 0.4 deg C between model and observation coming from?
I can only see 0.2 deg C?
http://bit.ly/ipPCgW
Can some one please explain?
The sun has risen and the British policy cock crows.
=============
The lights have dimmed, yet the wind still blows.
Chocolate teapot comes to mind here. Lines of code, better this and that, improved whatsit, better tuned hows your father and not to mention heightened sensitivity to thingy. Sheese mon, if it dont work right its bloody no good for nothing, period…
The only output I have ever seen from a GCM is temperature anomaly, rather than temperature itself. Also I have never seen an estimate of actual global temperature, we only ever see the anomalies. It must however be much more important that the GCM gets this absolute value correct than that the anomaly trend is approximately right. In fact you could probably say that if the trend is right but the absolute value is wrong, then the GCM has somehow been fiddled. Has anyone seen any comparison of modelled and measured temperatures?
The paper only reports the anomalies, but the simulations themselves give temperature itself as an output. See, for example, Global Temperature Trend. Anomalies are useful in determining how temperature is changing. Global mean temperature is not an easy quantity to define. However, model runs must be initialized to some prevailing condition at the start of a simulation. If it is temperature, observed and modeled temperature will be the same by definition, and anomalies will determine how closely they remain matched. If the model is initialized to something else (e.g., estimated past sea surface temperature and salinity), the modeled and observed absolute temperatures may diverge.
Thanks for this – so the temperature at the start of the run is an input? Isn’t this rather important? I can’t see how we can know the global mean temperature (however defined) in 1880 to closer than a couple of degrees at the very best. If a change of 2 degrees in the future is serious, then surely an error of 2 degrees in the past is at least as important?
You are right. It’s absolutely essential that the initial state is consistent with the equilibrium state produced by the model in absence of additional forcings. If this is not the case the model will result in a trend towards the equilibrium state.
Ok, so how is this initial value chosen?
Pekka – Knowing what the absolute temperature was in the past and knowing how much it deviated from equilibrium seem to me to be two different problems. Even if we had a highly accurate temperature reading for 1850, we might not know whether the climate was near equilibrium. Similarly, if the climate had been stable for a long interval, on average, even an imprecisely known temperature might be interpretable as a near equilibrium temperature. I believe many calculations of forcing by the IPCC and others have assumed relative stability, but that would be hard to prove.
I would also think that evidence that ocean temperatures in particular had not been changing could be taken as an index of relative stability. I’m not sure there’s a more accurate way to determine whether a climate at some past time was near radiative balance. Of course, models going forward have more data at their disposal, although neither ocean changes nor TOA fluxes are yet measurable as accurately as we would like.
Fair enough, but surely the initial temperature still has to be correct? Otherwise either an equilibrium is being assumed that didn’t exist, or the model is wrong.
Possibly, snowrunner, but not necessarily. We might be able to correctly assume near equilibrium conditions by virtue of the fact that climate was not changing, even if we didn’t know exactly what the temperature was. In fact, determining whether or not the climate was changing might be the easier challenge, because actual temperature records were inaccurate, and proxies, as you know, are associated with considerable uncertainty margins. Even back then, it might have been easier to know whether thermometers were reading a changing temperature than whether they were recording an accurate one. I agree, though, that none of these metrics is completely reliable.
Fred – “by virtue of the fact that climate was not changing”. Can you give us an example of such a point in time, and by what means it was determined that the climate “was not changing”?
Tom – “Not changing” is a relative term, signifying that over many decades, up and down fluctuations have tended to leave the average temperature relatively flat, even if not completely so. A slight pre-existing trend would complicate interpretations of a new perturbation if the perturbation is small, but would introduce little error if the new perturbation is large enough to introduce a significant change in the slope of temperature vs time. That type of change in slope occurred early in the twentieth century in comparison with mid to late 19th century temperature, even though the latter may not have been completely stable.
Fred – I asked you how you determined that the “climate” was in equilibrium. You reply by referring to “up and down fluctuations [that] have tended to leave the average temperature relatively flat”. With respect, I was not asking how you determined that TEMPERATURE was (relatively) stable. I can do that for myself. I was asking how you determined that the CLIMATE was in equilibrium. You seem to be saying “stable temperature = climate in equilibrium”. Leaving aside quibbles about what constitutes “flat”, isn’t this a bit simplistic? I say again, how do you determine that at any point in time the CLIMATE is in equilibrium?
Tom – Climate “equilibrium” (more accurately a “steady state”) is defined as a state in which incoming energy and outgoing energy at the top of the atmosphere are balanced. A climate far from equilibrium will inevitably demonstrate that state through changing temperature. If average temperature has been changing little over long intervals (despite short term up and down fluctuations), the climate is sufficiently near equilibrium to permit reasonably accurate estimates of the effects of a strong imposed perturbation. Thread comment space is inadequate for a very detailed discussion of atmospheric thermodynamics, but the foregoing summarizes the principles.
So just to be clear – you regard temperature stability as a proxy for climate stability?
Defining “equilibrium” in a theoretical sense is relatively easy.
But determining whether or not our climate ever was in such a state is more difficult.
Postulating that it was in such a state prior to the Industrial Revolution is pure folly.
Fred,
When the goal is to estimate anything related to the strength of the temperature change, it is much more essential to start from an equilibrium than to have the absolute temperature correct.
Starting from a non-equilibrium state is guaranteed to lead to spurious temporal changes except for the unrealistic case, where we are able to start from a non-equilibrium that deviates from the model equilibrium by the same amount and in the same way as the initial state differs from the equilibrium of the real atmosphere.
This sounds correct; it probably is impossible to start from non-equilibrium and get realistic results. However, the starting temperature must still be very important. Remember that, according to Mosher, the two tuning parameters are:
KNOB 1. “This is the relative humidity threshold above which low clouds are formed”
KNOB 2. Sea ice albedos.
Both of these could be seen as corrections for a wrong initial temperature. Then where are we?
Coupled ocean-atmosphere or ocean-only models have to be run for centuries with steady climate forcing to reach an equilibrium before the forcing is varied in a forced climate run. The ocean circulation in such models takes a long time to spin up.
When the deviation from equilibrium takes a long time to disappear, it affects the dynamics on the same time scale. Deviations that disappear almost fully in a year are insignificant for longer term calculations, and processes that take several hundreds of years may have only a minor influence on results for changes over 10-20 years.
In any case it is more important to start close to the model equilibrium than to the correct absolute value. Large deviations from the correct absolute values naturally mean that the model has serious weaknesses.
snowrunner,
Excellent questions.
It will be interesting to see how our AGW believer friends deal with it.
Fred: I’m not sure what this implies. Ok, we can assume equilibrium in 1850, but how would we test for this, what if we are wrong? And if we are right, then what if we use the wrong initial temperature? The models apparently show a steady state without an increased CO2 forcing, but this must be a steady state at an (almost) arbitrary temperature given the huge uncertainty in the true 1850 temperature. What does this say about the model?
snowrunner – A model that presumes to start from an equilibrium condition will output a temperature response with random fluctuations along with seasonal changes, but the temperature will average out over the long run. This is true regardless of the absolute value of the starting temperature, because the model computes temperatures changes on the basis of departure from equilibrium – i.e., on the basis of changes in the radiative balance at the top of the atmosphere between incoming and outgoing energy. The magnitude of the temperature changes the model computes (e.g., from a change in CO2 or solar irradiance) will vary depending on the starting absolute temperature, but the error is likely to be small if the deviation from absolute temperature is small in percentage terms. For example, a model that starts at 287 K when an actual temperature was 285 K would incorrectly compute the anomalies resulting from increasing CO2 or sunlight, but the inaccuracy would be minor. The greater inaccuracy would come from incorrect assumptions about equilibrium. I should add that the climate will never be in exact equilibrium globally, but “near equilibrium” conditions are a reasonably good starting point for computing anomalies.
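To put a rough number on “the inaccuracy would be minor”: a crude blackbody scaling (not what the models actually do) illustrates how weakly the computed response to a fixed forcing depends on the assumed absolute temperature.

```python
# Crude blackbody scaling only: dT ~ F / (4*sigma*T^3) for a fixed forcing F.
# This is not the GCM calculation; it just illustrates why a ~2 K error in the
# assumed absolute temperature shifts the computed anomaly by only a few percent.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
F     = 3.7       # W m^-2, a commonly quoted forcing for doubled CO2

for T in (285.0, 287.0):          # the two absolute temperatures in Fred's example
    print(T, round(F / (4 * SIGMA * T**3), 3))
# 285.0 0.705
# 287.0 0.69   -> roughly a 2% difference in the computed response
```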
This is how it is done, but there are two basic problems. First the system is far from equilibrium. Second there is no “top of the atmosphere.” The fact that these fundamental assumptions are fundamentally false matters.
the models are spun up to equilibrium.
Then they are turned loose.
So the initial temp after spin up is not likely to match the observed.
snowrunner writes “Has anyone seen any comparison of modelled and measured temperatures?”
Try Science August 2007 Smith et al pp 796 to 799. The main forecast is for 2014. However, the claim is made that after 2009, half the years will be warmer than 1998. We know 2010 was not warmer than 1998, and it looks unlikely that 2011 will be.
Don’t worry about anomalies. Anomalies are just differences from some arbitrary base. It is easy to find out what the base number is for any particular set of temperatures.
I’m not sure what you are saying here – it certainly isn’t possible to find the baseline from the anomalies, and my point is that the baseline for the model may well be different to that for the observations. In which case, I don’t see how the model output can be taken seriously.
Snowrunner, see Lucia’s post Fact 6A: Model Simulations Don’t Match the Average Surface Temperature of the Earth.
snowrunner. I have not explained myself clearly. Supposing there is an anomaly of 0.5 C. The GISS temperature may have a base of 15 C; the NOAA set, say, 14.5 C; the HAD/CRU set 14.8 C. These can be found from the description of the sets. Simply add the anomaly to the base, and get the estimate of the absolute temperature. In this case, GISS is 15.5 C, NOAA 15.0 C and HAD/CRU 15.3 C. There are many of us who do not think that the absolute average temperature means anything anyway. So changes in the anomaly may make much more sense. Suppose you have a house with two rooms; one at -50 C and the other at +50 C. Does it make sense to say that the average temperature of the house is 0 C? But if one room warms to -49 C and the other to +51 C, it might make sense to say that the house has warmed 1 C.
I forgot one other temperature prediction Keenleyside et al Letters to Nature 1 May 2008 pp 84 to 87. Both references were published at a time when global temperatures were not rising as predicted. They postulated a pause, before once again temperatures went to where they would have been had the pause not occurred. “The heat is in the pipeline”.
If the absolute temperature of the models differs from observations of the planet the changes in the phase state of H2O cannot be modeled correctly.
Thanks Jim
Science 10 August 2007:
http://bit.ly/iiHulE
Warmest year currently on record:
GMTA for 1998=>0.55
GMTA for 2009=>0.44=>FAIL
GMTA for 2010=>0.48=>FAIL
It appears it will also fail for 2011 (0.21 for Jan, 0.26 for Feb & 0.32 for Mar; well below 0.55)
100% FAIL!
We also have two data points for April. RSS +0.11 C UAH +0.12 C. IIRC.
Hi Girma,
“However, climate will continue to warm, with at least half of the years after 2009 predicted to exceed the warmest year currently on record.”
So.. half the years after 2009 will NOT exceed the warmest year currently on record.
Warmest year currently on record:
GMTA for 1998=>0.55
GMTA for 2009=>0.44=>SUCCESS
GMTA for 2010=>0.48=>SUCCESS
It appears it will also SUCCEED for 2011 (0.21 for Jan, 0.26 for Feb & 0.32 for Mar; well below 0.55)
100% SUCCESS!
Girma and Nebuchadnezzar, you have both got it wrong. I am reaching back over 30 years since I did this sort of thing, but if p is the probability of an event occurring, then the probability of the event not occurring after n tries is (1-p)^n. I am sure someone will tell me if I am wrong. What the paper says is at least half the years after 2009 are predicted to be above 1998. If we allocate a probability of 0.6 that the model predicts the anomaly will be above 1998 from 2010 onwards, then the probability that both 2010 and 2011 are not above 1998 is 0.16. That is, if 2011 does not exceed 1998, the probability is 84% that the prediction is wrong.
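Jim's arithmetic, spelled out (assuming, as he does, that the years can be treated as independent trials with his illustrative p = 0.6):

```python
# Probability that n consecutive years all fail to exceed 1998, if each year
# independently exceeds it with probability p (Jim's illustrative p = 0.6).
def prob_all_fail(p, n):
    return (1.0 - p) ** n

print(round(prob_all_fail(0.6, 2), 2))   # 0.16 for 2010 and 2011, i.e. an ~84% chance
                                         # that such a prediction is off, on his assumptions
```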
Jim Cripwell
Your formula (1-p)^n is correct.
“At least half,” by convention, can be read as “half” for the purposes of the accept-reject test.
“After 2009,” again by convention, can be read as 2010 or after.
However, you’d need Bayes Theorem to decide whether based on 2010 and part of 2011 being below 1998 you can predict success or failure, given that it’s an open-ended comparison (no last year is specified).
Carry on.
While not an exact analogy, http://en.wikipedia.org/wiki/Checking_whether_a_coin_is_fair is related. Sort of. And at least it’s fun.
And it explains how Girma’s leg has been pulled.
However, climate will continue to warm, with at least half of the years after 2009 predicted to exceed the warmest year currently on record.
The given hypothesis could take thousands of years to establish to a high degree of confidence one way or the other as stated.
It’s all but useless.
Thanks Jim,
I know I got it wrong – that was the point.
I think the wording is ambiguous. I’m not sure if it’s deterministic – half the years will definitely be above 98 – or probabilistic in the sense you suggest.
Any ideas?
87% of statistics used on blog posts are made up.
Nebuchadnezzar
What do you mean please?
Girma, See Jim’s post. But, I’m not sure he’s got it right either.
Snowrunner, yes.
I have posted the actual temps produced by GCMs versus the observed.
The results are horrible. Tim Palmer discusses this in his grand challenge video. I will repost if you like. It’s a link to Palmer’s video.
Thanks, I would be interested to see this. I looked at Lucia’s page (as recommended by Oneuniverse), and the plot there is truly astonishing. I think I understand that some global temperature (or possibly regional temperatures) are used as starting values, then the model runs for a few hundred years until it stabilises, then CO2 forcing is added. Clearly the models can stabilise at a lot of different temperatures. What is astonishing about Lucia’s plot is that the mean temperature varies by 3 degrees between models, but the trend is virtually identical in all of them. Since an important feedback is supposed to be increase of water vapour due to higher temperatures (and there are other temperature related feedbacks) I can’t see how you can get stability+ identical CO2 forcing at such a range of temperatures. The only explanation I can think of is that there are in fact no feedbacks in the models at all. Or perhaps that the feedback is directly related to CO2 forcing, and not temperature. Either way, I can’t see how these models can all use the same physics.
Re: anomalies. The focus is on anomalies as output and the calculation of trends. Ok so far as a diagnostic. But then the model output is used to drive crop models or disease models or human health models. Now the absolute temperature matters (bias), not just the trends. lucia documented many models being up to 4 deg C above GMST. Not so good.
Craig – The deviations described in the references cited by Lucia relate to seasonal averaging. This in turn is a function of differences between the Northern and Southern hemispheres in land mass, as well as differences in sun/Earth relationships at different times during the year. It is one of the reasons why the concept of “mean global surface temperature” is problematic. These differences between models are not unimportant, but have relatively little impact on the computation of long term trends in anomalies. From the perspective of a farmer or a health worker who already knows the current temperature in his or her location, and how it varies by season, the anomalies are a more important variable than the manner in which different models average out seasonality over the entire globe.
They are moving the goal post under our own eyes!
Now it is not global mean temperature that causes climate change!
http://bit.ly/jFoTaI
Jim D, on the GISS home page is a map showing temperature anomalies. It’s a fun toy; you can specify time periods to evaluate and base periods to compare with. Specifying the base period as the decade before the one evaluated gives a rough idea of the rate of warming per decade. I see no evidence of regional influences of sulfates, but this is just eyeballing and hardly meaningful in any way other than to make one wonder where the regional studies supporting the influence of aerosols are, and why regional effects shouldn’t be expected. Anyway, if you want to play around with it here is the link. Global maps is the one I am referring to.
http://data.giss.nasa.gov/gistemp/
OK, interesting. It looks like the 2000’s decade was cooler than the previous decade mainly in the E. Pacific upwelling regions. Warmer overall, of course.
Let’s go back to physics … in the troposphere convection wins. The adiabatic lapse rate is not affected by GHGs. You can study the whole troposphere without knowing the GH effect exists, and Pierrehumbert says as much in his book (radiative effects play a minor role, not zero of course). What mechanism would then increase the surface temps by GH gases? I can only think of one: an increase in tropopause height.
Without greenhouse effect the adiabatic lapse rate would not occur anywhere.
The convection prevents the lapse rate from exceeding the adiabatic lapse rate, but without greenhouse effect the atmosphere would be close to isothermal.
And if pigs had wings, they would fly. Or maybe not. It depends on the wings, and on the pig.
Please don’t just throw statements around, you have to argue your opinions.
My point was to contest your statement “You can study the whole troposphere without knowing the gh effect exists” and to say that without the GH effect we would have a totally different atmosphere.
The issue that the heat flow through the atmosphere is driven by the GHG’s and radiative heat transfer has been discussed also on this site many times. I know that few people have read even most of what has been written here, but that is not a reason to repeat the same points over and over again.
I go into more detail than before in my reply to Fred below.
The radiative heat transfer is the only significant one from the top of troposphere to the space. The radiative heat transfer is also the heat source at the surface, but the convective processes join the radiative processes throughout the troposphere. That increases the “overall heat conductivity” (meaning all ways of transferring heat) of the troposphere and makes the temperature difference between the surface and the tropopause smaller. The temperature difference can then be calculated dividing the heat loss from the top of troposphere to the space by the overall heat conductivity of the troposphere. Without greenhouse gases the heat loss from the top of troposphere would be extremely small. Thus this calculation would lead to zero height for the tropopause or no troposphere at all.
As I discuss below, this is not strictly true for the Earth with its diurnal and latitudinal variations, but this simplistic argument tells why the atmosphere would be totally different and why we cannot understand it at all without considering the greenhouse effect.
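A bare-bones way to see the scaling in Pekka's argument (the numbers below are purely illustrative placeholders, not values he gives): lump every heat-transfer mode into one effective conductance G for the troposphere; the surface-to-tropopause temperature difference is then roughly the heat lost to space from the top divided by G, so if that loss goes to zero, so does the temperature difference.

```python
def surface_tropopause_dT(q_top_loss, g_effective):
    """Surface-to-tropopause temperature difference, K.

    q_top_loss  -- heat lost to space from the upper troposphere, W m^-2
    g_effective -- lumped 'overall heat conductivity' of the troposphere, W m^-2 K^-1
    """
    return q_top_loss / g_effective

print(surface_tropopause_dT(200.0, 3.0))  # ~67 K with illustrative placeholder values
print(surface_tropopause_dT(0.0, 3.0))    # 0 K: no emission aloft, no temperature
                                          # difference, hence 'no troposphere at all'
```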
“Without greenhouse gases the heat loss from the top of troposphere would be extremely small.”
More GHG, less or more heat loss at the TOA?
My statement concerns emission by the gas at the top layers of the atmosphere. If that gas doesn’t emit radiation, it has almost no heat loss (a few energetic molecules escape, therefore “almost”).
The greenhouse effect relates to the total emission from surface and atmosphere as seen at TOA. That is reduced with more GHG.
Pekka – If the atmosphere were not coupled to the surface, it would be almost isothermal. However, even inefficient coupling, such as by conduction alone in the absence of radiatively active molecules, should, I believe, eventually lead to an adiabat in which near surface air temperature is close to surface temperature, but declines with a lapse rate defined by the gas laws and hydrostatic equation. How could an atmosphere be coupled to the surface and still be isothermal? What would be its temperature, for example, if the surface temperature were 280 K or thereabouts?
Fred,
This issue has been studied and several papers published on it. It’s not simple to conclude how close to isothermal the balance would be. For a vertical column without horizontal mixing the answer is that the atmosphere is isothermal even when the coupling with the surface is taken into account. For the spherical Earth some kind of cell structure would be created, but the result would be very far from the adiabatic lapse rate limit.
The present strong temperature gradient is the result of an atmosphere heated from below and cooled from above. This combination is so strong that it would lead to a much stronger greenhouse effect if convection did not increase the rate of heat transfer from the surface to the upper troposphere.
The heating from below is based on solar radiation; the cooling from above is based on the emission of the greenhouse gases of the upper troposphere. Without greenhouse gases there would not be any process that could significantly cool the top of the atmosphere. All radiation to space would be directly from the surface. As the atmosphere would not have any mechanism for losing heat, it would gradually warm by conduction, which is not influenced by the cooling of the rising air parcel.
The vertical mixing would be strongly reduced as it is in the stratosphere, but some mixing would remain as the surface is much warmer during tropical days than at high latitudes. It should be remembered that the heat transfer from the surface to the air would also be much weaker without greenhouse gases as the radiative heat transfer between the surface and the atmosphere is one of the main heat transfer mechanisms.
Without greenhouse gases the tropical day would be hot, but even there nights would be very cold and the Earth surface would be much colder in general. The atmosphere would have very little contribution to the surface temperatures, although it would transfer some heat laterally. We would not have any clear separation between troposphere and stratosphere, but something like the stratosphere would extend to the surface.
Pekka, in concurrence:
cam4_desc.pdf
Description of the NCAR Community Atmosphere Model (CAM 4.0)
3.1.2 The governing equations for the hydrostatic atmosphere
“Note that the last term in (3.6) and (3.7) vanishes if the vertical coordinate zeta is a conservative quantity (e.g., entropy under adiabatic conditions [Hsu and Arakawa, 1990] or an imaginary conservative tracer), and the 3D divergence operator becomes 2D along constant zeta surfaces.”
3.1.4 A vertically Lagrangian and horizontally Eulerian control-volume discretization of the hydrodynamics
“The very idea of using Lagrangian vertical coordinate for formulating governing equations … The ideal gas law, the mass conservation law for air mass, the conservation law for the potential temperature (3.27), together with the modified momentum equations (3.28) and (3.29) close the 2D Lagrangian dynamical system, which are vertically coupled only by the hydrostatic relation (see (3.45), section 3.1.5).”
If the adiabatic profile is not that of thermodynamic equilibrium, it is a steady-state profile and, of necessity, requires a supporting energy flux maintained by external sources. Entropy increases at a steady rate and is not a conservative property. Free energy is also being lost at a steady rate and equals dissipation. The presumption that the adiabat, represented by a ‘virtual temperature’, is non-dissipative appears to be a core feature of GCM dynamics, allowing decoupling of vertical and horizontal coordinate systems. I have yet to discover that section in this CAM4 description file which indicates an awareness of adiabatic dissipation.
Omnologos – I think you’ve partly misinterpreted Pierrehumbert. GHGs profoundly influence tropospheric temperature, and would do so even if there were no tropopause (i.e., even in the absence of stratospheric temperature inversion). GHGs also affect the lapse rate on a planet with a volatile absorber in both liquid and vapor form – e.g., water – by shifting the lapse rate from a dry adiabat toward a moist adiabat. However, I believe you’re correct in implying that the warming effect involves an increase in the mean altitude at which terrestrial radiation escapes to space. This happens because infrared absorption by GHGs redistributes energy isotropically, including downward (an atmospheric and surface heating effect), so that escape to space can only occur at an altitude where the GHGs are sparse enough to allow outgoing radiation to match incoming radiation. The lapse rate then tells us that temperatures at the earlier escape altitude with lower GHG concentration will now be warmer.
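A one-line version of that “raised emission altitude” picture (both numbers below are illustrative assumptions, not values from Fred's comment): if the mean emission level rises by dz while the lapse rate stays roughly fixed, the whole profile, surface included, warms by about lapse_rate * dz.

```python
lapse_rate_K_per_km = 6.5    # typical tropospheric lapse rate (illustrative)
dz_km               = 0.15   # assumed rise in mean emission altitude (illustrative)
print(lapse_rate_K_per_km * dz_km)   # ~0.98 K of warming under these assumed numbers
```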
The convection prevents the lapse rate from exceeding the adiabatic lapse rate, but without greenhouse effect the atmosphere would be close to isothermal.
.
This is not right.
Without GHE the atmosphere would be anything but isothermal, and violently out of equilibrium.
Much more so than the atmosphere we have on the real Earth.
The reason for that would be the huge temperature and density gradients.
Let’s just take the Moon and give it 10 km of a non GHG atmosphere.
The temperature on the day half would be about 370 K and on the night half about 120 K.
Now consider just the heat transfer by convection at the day-night boundary as the planet rotates.
With a typical heat transfer coefficient of 10 W/m²K and a conservative low gradient of only 50 K , this gives a heat flow of 500 W/m² heating the atmosphere.
As it can’t dissipate this energy by radiation, it must do so by kinetic and potential energy.
So a non GHG atmosphere would be violently out of equilibrium and give rise to a very active circulation both horizontally and vertically.
It would develop a system of chaotic storm structures (like on Jupiter) and would be far from isothermal.
We are lucky that we have GHG and the more there is, the better.
I believe that the stabilisation of the atmosphere by the GHGs, which reduce temperature and density gradients, is a necessary condition for life.
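For what it's worth, Tomas's 500 W/m² figure follows directly from the two numbers he quotes; a minimal sketch:

```python
# Newton's-law-of-cooling estimate at the day-night boundary, using Tomas's numbers.
h_conv  = 10.0   # W m^-2 K^-1, assumed convective heat-transfer coefficient
delta_T = 50.0   # K, his 'conservative low' air-surface temperature contrast
print(h_conv * delta_T)   # 500.0 W m^-2 flowing into the atmosphere
```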
Tomas,
I agree on the basic effects and I also mentioned the influence of the large temperature differences in my earlier comments. Perhaps we should refrain from guessing the outcome as there are many important factors contributing to the final outcome. They depend on the more detailed dynamics and time scales of temperature variations of the atmosphere at various altitudes. My statements were based on the hypothesis that the upper atmosphere would become warm enough to stop the rising air flow at a significantly lower altitude than the present troposphere. This would be a result of the fact that the vertical flow of the upper atmosphere in the tropics would be strong during the day time, but weaker at night. This is not a solid argument, but it’s a guess that I believe to be plausible.
Building a GCM for that kind of atmosphere should be much easier than for the real one. Calculations with such a model should be able to resolve the issue to my satisfaction. Do you know about such model calculations?
Pekka – I welcome your new stance on the topic.
The main reason that I wrote the original comment to your message has not become any less strong: You cannot study the troposphere at all without correct handling of the radiative processes. On this point I see no disagreement with Tomas Milanovic.
The only controversy appears to be whether assuming that radiative heat transfer does not occur in the atmosphere would lead to a persistent state of strong circulation, as Tomas thinks, or to a nearly isothermal atmosphere that would strongly restrict the circulation, which is the alternative I think to be more likely. Both of these alternatives are very different from the present atmosphere.
Scientists in ‘great desert’; poison their own well. Unable to find new diggers for next well. WHO cares. You?
Building a GCM for that kind of atmosphere should be much easier than for the real one. Calculations with such a model should be able to resolve the issue to my satisfaction. Do you know about such model calculations?
.
Not by the usual GCM crews; they are too busy with AGW to look at some physics. I am not sure that anybody has really studied that.
But I have read a preprint some years ago and did some calculations myself (of course without super-computers).
Thinking that it is “simpler” only because there is no radiative transfer in the atmosphere is an illusion.
Actually the most complicated part of the atmosphere is not the radiation (pretty simple that one) but the convection and phase changes. On the Earth we are lucky – radiation dominates so it’s (relatively) simple.
So it may appear paradoxical, but a non GHE atmosphere is very difficult.
Sure it’s “only” Navier Stokes and convection.
But as Navier Stokes can’t be solved at this scale (planet) and for convection even a simple non isothermal plate is a mystery, there are no useful simplifications.
I am convinced that no computer can do anything (useful) with that.
Basically what I found (but I put in it only 2 months or so) is:
– the system is so violently out of equilibrium that one must solve time dependent equations. No steady states.
– the temperature gradients are huge so the velocity fields are correspondingly huge too
– the behaviour is chaotic with no obvious statistical interpretation
But this is not really a scoop. Jupiter’s atmosphere is also strongly convective and we still don’t understand how it works today.
Even if Jupiter is not strictly speaking GHG-less, you’ll have a better chance of finding papers about Jupiter’s atmosphere, which has been studied for a long time.
It gives an idea of how complex an atmosphere is when it is much farther from equilibrium than the one we have on Earth, which is already not in equilibrium itself.
Tomas,
I agree totally on the basics as you describe them. The alternative that I had in mind is that these processes would lead to an atmosphere that is for most part far enough from the stability limit of adiabatic lapse rate to dampen the air flows. There would be local daytime exceptions at low latitudes, but my idea is that these effects would remain restricted. The proposal that modeling might be easier was based on the expectation of such an outcome.
A real calculation based on parameters applicable to the Earth may show my expectation wrong, but what happens on Jupiter doesn’t tell much about that. The result would certainly also depend on what the temperature of the Earth would be. If it is a snowball Earth of very high albedo and little latent heat in the atmosphere, the situation is simpler than for some other hypothetical situation. As we are talking about a counter-factual situation, we may make quite different assumptions.
Pekka – You mention that a number of papers have been published related to atmospheric temperature profiles in the absence of greenhouse gases. Are there any references or links you can cite? I find the topic interesting simply because it asks us to review what we know and don’t know about atmospheric thermodynamics, and not because it applies to our current Earth and atmosphere.
I found your comments and those of Tomas interesting, but I’m still uncertain as to how much they affect expectations about an adiabatic lapse rate without GHGs. The latter refers to an atmosphere coupled to the surface in some way (even inefficiently via conduction), with a potential temperature equal at all altitudes in accordance with the hydrostatic equation and gas laws. “Adiabatic” signifies that no heat is being added or subtracted. I would infer that the Earth is not going to yield an exact adiabat if it is being warmed from below and cooled above – that would be a superadiabatic effect of radiative transfer, requiring a convective adjustment – and it would also exhibit something less than a dry adiabat from latent heat release at high altitudes. Diurnal, seasonal, and latitudinal differences would also affect temperature profiles. My question is not whether we have an exact adiabat anywhere on Earth, but whether a hypothetical planet without GHGs but with surface/atmosphere coupling would tend to form an adiabat over the course of millions of years. I say “tend to form” because I recognize that diurnal and other variations would make it impossible for a constant steady state to exist at any moment.
My guess is that something close to an adiabat might form depending on conditions that include surface temperature, length of day (possibly very long), and other destabilizing influences. If the atmosphere can’t cool from above, and if (neglecting diurnal and other variations), its surface temperature were assumed to remain constant and in radiative equilibrium, the surface and near surface atmosphere should reach a constant temperature. Under those admittedly unrealistic assumptions, wouldn’t an adiabatic lapse rate develop simply in conformity with the hydrostatic equation and gas laws? Again, any references would be of interest to me.
Fred,
As may be inferred from my later discussion with Tomas Milanovic, the papers are not on the full Earth atmosphere. What I had in mind are papers which have studied the one-dimensional case, or equivalently the case of an atmosphere without lateral differences. The main point of those papers that I once found is to prove that the maximum entropy solution, and therefore the equilibrium solution, is isothermal.
When I was looking at the issue I found out that this is an old problem and that it has taken a long time to reach the point where more or less all scientists who studied the problem agreed that the isothermal solution is really the correct one. Right now I do not remember references to these papers.
As Tomas emphasized (and I also mentioned, but with less emphasis) the situation is different in an atmosphere of an Earth-like planet without greenhouse effect, when diurnal variations and latitudes are taken into account. The basic factor is still valid that the atmosphere can exchange energy only with the Earth’s surface (and possibly absorb some of the solar SW radiation, if that is allowed). Here I take it as given that radiation from clouds is also assumed to be absent.
That all heat transfer is with the Earth’s surface, and that even there radiative heat transfer is excluded, is obviously very limiting and will strongly affect the atmosphere. Whether latent heat is a major component, or perhaps excluded or suppressed by low temperatures, will also influence the outcome. I have no idea whether any serious studies have been done on such hypothetical atmospheres. After all it’s all counter-factual, and not of obvious scientific interest.
Pekka – Thanks. Out of curiosity, I’ll keep my eyes open for papers on this topic. My intuition tells me that where a planet might fit in the range between an isothermal atmosphere and one with an adiabatic lapse rate is likely to depend on conditions. Simply as a matter of speculation, for example, what would the temperature profile look like on a planet with a relatively thick, non-absorbing atmosphere with a low but non-negligible thermal heat conductance, a cold surface (e.g., 180 K), and a very long day (equivalent to one Earth year)? I’ll assume the surface is in radiative equilibrium so that its temperature is unchanging – it absorbs and emits radiation in equal quantities. I know that this is unrealistic, because time of day affects radiative input, but I’m making this assumption as an approximation that might be reasonable for a very long day. Under these roughly stable conditions, the atmosphere would be neither gaining nor losing heat, and so its temperature profile would be a static one.
I presume that there is a component of the atmosphere sufficiently close to the surface to have a temperature close to 180 K – the alternative would be a huge discontinuity over a few millimeters despite some thermal conductivity and despite the fact that the surface temperature and atmospheric temperatures at all altitudes are unchanging, and no heat is moving within the atmosphere or escaping to space from the atmosphere. I also assume that at a high enough altitude and low enough pressure, the temperature is close to 0 K – e.g., in the range of about 2 K.
What would be the temperature at 10 meters? At 10 kilometers? What would prevent convection from maintaining a lapse rate that equalizes potential temperature at all altitudes? That is a question, not a statement in disguise, but I’ll be interested in further information to satisfy my curiosity. You may not have anything additional now beyond what you’ve already written, but if you come across something, let me know.
I add to the discussion one related practical example and some more comments on the physics, in the hope that they are helpful to some readers.
Here in Finland it’s quite common in winter to have a temperature inversion close to the surface, at an altitude of perhaps 200 m. Under such conditions we can see how the plume from a power plant stack rises very little after exiting the stack, in spite of its significantly higher temperature. Instead its top appears totally flat, as if it were blocked from rising by a transparent roof.
All situations with a temperature gradient that is less negative with altitude than the adiabatic lapse rate, or with a positive gradient, are stable with respect to vertical mixing. The further the situation is from the adiabatic lapse rate, the more stable it is and the stronger the disturbance needed to induce vertical mixing.
In the real Earth atmosphere the radiative heat transfer, and in particular the fact that the upper atmosphere continuously loses heat by radiating to space, makes the negative gradient stronger and would make it exceed the adiabatic lapse rate unless strong vertical mixing prevented it. The main significance of vertical mixing is to add heat to the troposphere to balance part of the loss of energy from the top of the atmosphere. Nothing like this can happen without IR radiation from the upper atmosphere.
In the case of the power plant plume, a rather modest inversion has a strong effect. Any larger-scale deviation from the adiabatic lapse rate in the atmosphere acts effectively against the buildup of vertical mixing.
All positive temperature gradients, and all negative gradients up to the adiabatic lapse rate, are stable. Strong vertical mixing is possible only in combination with the adiabatic lapse rate. If some forcing maintains strong vertical mixing, the lapse rate remains adiabatic. If such forcing is not strong enough, the lapse rate is likely to be reduced and the vertical mixing stops. Under such conditions conduction may be sufficient to bring the atmosphere closer to isothermal, making it even more stable against vertical mixing.
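The stability rule in the preceding paragraphs can be condensed into a comparison of the environmental lapse rate with g/c_p (equivalently, the sign of the potential-temperature gradient). A rough sketch with illustrative numbers of my own:

```python
# Sketch: classify static stability from the environmental lapse rate using the
# standard criterion. Numbers here are illustrative, not taken from the thread.
g, cp = 9.81, 1004.0
gamma_d = g / cp * 1000.0          # dry adiabatic lapse rate, ~9.8 K/km

def stability(env_lapse_K_per_km):
    # env_lapse > 0 means temperature falling with height
    if env_lapse_K_per_km > gamma_d:
        return "unstable (superadiabatic): convection sets in"
    if env_lapse_K_per_km == gamma_d:
        return "neutral: the adiabat itself"
    return "stable: vertical mixing suppressed, the more so the further from the adiabat"

for lapse in (12.0, gamma_d, 6.5, 0.0, -5.0):   # -5 K/km is an inversion
    print(f"{lapse:+6.2f} K/km -> {stability(lapse)}")
```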
The most useful detail in this argument is, in my opinion, what it tells us about the real Earth atmosphere: there is a strong relationship between the strength of radiation to space from the top layers of the troposphere, the adiabatic lapse rate, and the altitude of the tropopause, i.e., the altitude where vertical convective mixing stops because radiative heat transfer from below is enough to supply the energy emitted to space. At that point the adiabatic lapse rate is no longer exceeded in the absence of convective heat transfer.
Consider an experiment with two gas columns of different lapse rates in thermal equilibrium with a common substrate. If the adiabat represents thermogravimetric equilibrium, then the tops of these two columns will have differing temperatures and we can extract useful work from two systems in equilibrium with each other. A direct short-circuit creates a perpetual motion machine of the 2nd kind with a steady energy flux up one column and down the other with no external input.
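A rough illustration of the premise (my own numbers, taking the dry adiabat g/c_p for two ideal gases with different heat capacities): with a common 288 K base, the tops of 10 km columns of air and helium would differ by roughly 80 K, and that difference is what the short-circuit would exploit.

```python
# Sketch of the premise: two gas columns on a common substrate, each at its own dry
# adiabat g/cp. Values assumed here: dry air vs helium, 10 km tall, common 288 K base.
g, height = 9.81, 10_000.0               # m/s^2, m
T_base = 288.0                           # K
cp = {"air": 1004.0, "helium": 5193.0}   # J/(kg K)

for gas, c in cp.items():
    T_top = T_base - (g / c) * height
    print(f"{gas:6s}: lapse {g/c*1000:4.1f} K/km, top of column at {T_top:6.1f} K")
# If the adiabat were the true equilibrium, the ~80 K difference between the two tops
# could drive a heat engine with no external input -- the stated contradiction.
```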
I’m not sure what you are aiming at.
Perhaps to point out that the adiabat is a stationary state in a system with a constant heat flux flowing through the system from the base to the top.
With that experiment, Quondam seems to me to be explaining how CO2 works in the atmosphere.
Did you notice this comment that I wrote high up in this chain, where it may have gone unnoticed by most:
The atmospheric model is already in version 5. Thus the correct documentation is cam5_desc.pdf.
The version numbers of the full model and the atmospheric submodel are not the same.
Pekka,
Point is that, assuming one has an aversion to perpetual motion devices, at equilibrium all gases have either a common lapse rate, in contradiction to their definition, or a zero lapse rate (isothermal). One can reach the same conclusion mathematically by assuming an ideal gas of fixed volume, mass, and energy (internal plus gravitational) and, starting with an isothermal configuration, showing that all alternate profiles of density and temperature have lower configurational entropies. For 6.5 K/km, I came up with −58 kJ/K/m² for a 10 km column with nominal atmospheric densities and temperatures. These are student exercises, but having a number lets one say a bit about the time and flux densities associated with relaxation from an adiabatic to an isothermal profile when the cell is adiabatically isolated.
I might suggest that nature seems to favor the quickest route to equilibrium. Rather than relying on thermal conduction, which can take millennia, inhomogeneous moving pockets with large thermal gradients might be the more effective mechanism.
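One version of the "student exercise" can be sketched as follows (my own setup and normalization – a full column in pressure coordinates rather than Quondam's 10 km, 6.5 K/km cell, so the number does not match his −58 kJ/K/m²): at equal mass and equal total energy, the isothermal column has the higher entropy.

```python
import numpy as np

# Sketch under assumed conditions: compare the entropy of an isothermal and a
# constant-potential-temperature dry-air column having the same mass and the same
# total energy (internal + gravitational potential = cp*T integrated over mass).
cp, R, g = 1004.0, 287.0, 9.81          # J/(kg K), J/(kg K), m/s^2
p_s, theta0 = 101325.0, 288.0           # surface pressure (Pa), potential temperature (K)

p = np.linspace(p_s, 1.0, 200_000)      # pressure levels, surface to (almost) the top
dm = -np.gradient(p) / g                # mass per unit area in each layer, kg/m^2

T_ad = theta0 * (p / p_s) ** (R / cp)   # adiabatic column: constant theta
E = np.sum(cp * T_ad * dm)              # its total energy per unit area, J/m^2
T_iso = E / np.sum(cp * dm)             # isothermal column with the same energy

def column_entropy(T):
    # specific entropy up to a common constant: s = cp*ln(T) - R*ln(p)
    return np.sum((cp * np.log(T) - R * np.log(p)) * dm)

dS = column_entropy(np.full_like(p, T_iso)) - column_entropy(T_ad)
print(f"isothermal column at {T_iso:.0f} K has higher entropy by ~{dS/1e3:.0f} kJ/K/m^2")
```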
Quondam,
I don’t think that your argument is enough. All lapse rates less than the adiabatic one are stable with respect to convection. Therefore both thermally connected columns would soon reach the smaller of the two adiabatic lapse rates, but after that we would be left with molecular conduction alone.
Pekka,
Now I’m missing your point. IF the adiabat represents thermogravimetric equilibrium, whatever forces drive the system towards equilibrium will attempt to restore the different lapse rates and maintain a perpetual flux. IF isothermal represents equilibrium and we start with a nonequilibrium configuration with two different lapse rates, then relaxation to equilibrium would be limited to conduction if the thermal profile were described by a one-dimensional lapse rate less than the adiabat – a very slow process. I suggested that inhomogeneous 3D structures which cannot be represented by a 1-D lapse rate might circumvent this, but this issue is independent of whether the adiabatic or isothermal profile represents equilibrium, and is a point on which I believe we agree.
If we start with two gas columns of different lapse rates (both at the adiabatic stability limit for that gas) and both with the same temperature at the base and couple them thermally, the column with a higher lapse rate starts to cool and remains stable against convection, while the other starts to warm and convective mixing is initiated. This continues until both columns have reached the lower of the adiabatic lapse rates.
If there are no external heat flows, the bottom parts of both columns soon start to warm, as that is required by the limit on the lapse rate together with the conservation of energy. If we started with equal average temperatures (weighted by mass) in both columns, the outcome would be the same temperature distribution in both columns as originally in the column with the lesser lapse rate.
That describes what happens rapidly when the columns are coupled thermally. That they’ll finally become isothermal is another matter.
With some delay, I think, I understand what you mean.
If there were indeed a process that maintained the adiabatic lapse rate, the heat flow of that process would make it possible to build a perpetuum mobile.
That argument can be made more simply by choosing a metal bar for the return flow of the heat. That tells more directly that the equilibrium must be isothermal.
My sense is that the highest entropy state is an isentropic one. That is, it has constant potential temperature, which differs from isothermal in a gravitational field. If you somehow thoroughly mix an isothermal atmosphere, you would end up with an isentropic one.
I also don’t think conduction causes an isothermal state, but an isentropic one due to gravity (however, I think Spencer also said isothermal results from conduction). Radiation is the only process that favors an isothermal state.
No, the state of maximum entropy is not directly related to adiabatic motion of air parcels. The movement of macroscopic air parcels stops when the potential temperature is the same everywhere. That is also the point where the entropy has reached a value that no longer grows rapidly as the gradient is reduced, but it still grows, and conduction (as well as turbulent mixing, if it exists, though under these conditions it may be absent) still leads towards isothermal, which is the state of maximum entropy.
The question of which of the two alternatives is the real entropy maximum is precisely the issue on which it took a long time to reach wide agreement.
There are some interesting related physical phenomena. One of them concerns the case of noninteracting molecules leaving a surface with a Maxwell-Boltzmann velocity distribution in a gravitational field. It’s natural to think that the average velocity would be largest at the surface and get smaller with increasing altitude, but that is not true. The velocity distribution is actually identical at all altitudes, because those particles that had the smallest vertical velocity at the start never reach high altitudes, and the Maxwell-Boltzmann formula is such that the loss of energy in the gravitational field and the selective effect cancel exactly. This observation is obviously related to the fact that the isothermal temperature profile is the natural ultimate equilibrium, not the one resulting from constant potential temperature, which does not take into account microscopic processes, but only the motion of parcels of finite size.
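A quick Monte Carlo sketch of this selection effect (my own construction in the spirit of the argument, not a reproduction of any published calculation): of the molecules crossing a base level upward, only the faster ones ever reach a test altitude, and the survivors, though slowed by gravity, still show the same mean kinetic energy there.

```python
import numpy as np

# Monte Carlo sketch of the selection effect. Assumed values: an "air-like" gas at
# 250 K (kT/m ~ R_specific*T) and a 5 km test altitude; collisions are neglected.
rng = np.random.default_rng(0)
kT_over_m = 287.0 * 250.0        # m^2/s^2
g, z = 9.81, 5000.0
N = 1_000_000

# upward-moving molecules crossing the base level: flux-weighted (Rayleigh) speeds
u = rng.random(N)
w0 = np.sqrt(-2.0 * kT_over_m * np.log(1.0 - u))

# only those with w0^2 > 2 g z ever reach altitude z; there w^2 = w0^2 - 2 g z
up = w0**2 > 2.0 * g * z
wz = np.sqrt(w0[up]**2 - 2.0 * g * z)

print("fraction reaching z:", up.mean(), " vs Boltzmann factor:", np.exp(-g * z / kT_over_m))
print("mean w^2 at base   :", np.mean(w0**2))
print("mean w^2 at z      :", np.mean(wz**2), " (same value -> same temperature)")
```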
On second thought, I shouldn’t have mentioned turbulent mixing. It may still be too macroscopic. Only molecule-level conduction leads to the isothermal outcome. All macroscopic motion takes with it all the molecules inside a parcel. The whole parcel loses kinetic energy when moving up in the gravitational field, and the internal collisions in the parcel lead to thermal equilibrium at a lower temperature.
The individual molecules are always influenced both by the loss of energy when going up and by the selective process that compensates for that loss. The result is a lower density, but an equal temperature, at higher altitude. This applies equally to the example of noninteracting molecules and to real molecules. The difference is only in the length of the free path, and an equality remains an equality when built up from very small steps.
Pekka,
I am willing to be convinced about this. What you say implies that even though eddy viscosity mixes potential temperature, molecular viscosity mixes temperature itself. If so, then, yes, a stable final state dominated by molecular viscosity would be isothermal, so it hinges on what property is mixed by molecular viscosity, even in a gravitational field.
Yes, potential temperature is mixed when the mixing is due to motion of matter such that the new thermal equilibrium is formed within that matter. When the equilibrium is formed with matter that was already there, the temperature itself is mixed. Thus the share of all the matter in a moving parcel that crosses parcel boundaries as individual molecules determines the relative weight of the two mechanisms.
The isothermal solution is stable when no external disturbance is applied. The vertical movement of parcels of air stops as soon as the gradient is less than the adiabatic lapse rate. The adiabatic lapse rate is a borderline situation, where even small disturbances have a large influence, but without any disturbance the atmosphere changes continuously towards the isothermal state. Without external disturbances we have no turbulent mixing to prevent this development.
The molecular conduction is always present. Thus the question is whether the external disturbances are strong enough to overtake its influence. In the example of an atmosphere without radiative heat transfer, there is the possibility that convection leads to the formation of continuously growing stable volumes with a lapse rate less than adiabatic. That could be a process that would ultimately end up with a nearly isothermal atmosphere, and that is really the outcome that I consider quite possible or even likely, until proven wrong by realistic enough calculations.
Interesting!
Simply as a matter of speculation, for example, what would the temperature profile look like on a planet with a relatively thick, non-absorbing atmosphere with a low but non-negligible thermal heat conductance, a cold surface (e.g., 180 K), and a very long day (equivalent to one Earth year)? I’ll assume the surface is in radiative equilibrium so that its temperature is unchanging – it absorbs and emits radiation in equal quantities. I know that this is unrealistic, because time of day affects radiative input, but I’m making this assumption as an approximation that might be reasonable for a very long day. Under these roughly stable conditions, the atmosphere would be neither gaining nor losing heat, and so its temperature profile would be a static one.
You just described what I mentioned above – the Moon with an atmosphere.
There is one thing we know: the temperature differences are huge – 200 to 300 K between night and day.
That means that the convective heat transfer from the surface to the atmosphere and vice versa will be correspondingly huge – we are talking hundreds of W/m² – just consider the day/night boundary.
Basically the atmosphere must organise itself in a way that transports energy from the hot day half to the cold night half.
So the atmosphere would both strongly gain (day) and lose (night) heat.
I mentioned Jupiter because we see that its strongly convective atmosphere is organised in a complex system of latitudinal bands that interact chaotically with each other, creating anticyclonic and cyclonic storm structures.
On top of this structure there must also be strong Hadley-like cells, because the gradients are also large in the N/S direction.
I can’t speculate how many cells there would be – as in Rayleigh-Bénard flow, this kind of structure is chaotic and intrinsically unstable.
As the vertical profile depends on this, it is difficult to say what it would be for every latitude.
But again, basically the vertical movements would be only adiabatic compression/expansion, combined with strong horizontal movements in which heat is exchanged (mostly) at the surface – which gives a general idea without being able to establish this as a function of latitude.
Of course, if the day is no longer “long”, intuition suggests that the atmospheric flows would be more violent and even more longitudinally structured.
Tomas,
You are declaring the forces huge, but are they really huge in the way that would lead to the consequences you describe?
You say they are; I doubt it. You say that you have done some calculations; I cannot know whether they are valid calculations.
I haven’t calculated anything, but as long as I have no more concrete evidence, I keep to my prejudices as a very possible and even likely outcome. I’m not certain of that, and I admit that freely.
Perhaps I should explain a little more of what I consider plausible. Here is my narrative. It’s not knowledge, but a subjective narrative.
During daytime at low latitudes the surface temperature rises. That induces convection that transports heat up into the atmosphere. The air parcels cool following the adiabat, i.e., their potential temperature remains constant as long as the parcel remains large enough to keep the effect of mixing at the boundaries negligible. The rising motion stops when the temperature of the parcel is equal to the temperature of the surrounding air. This process heats the upper atmosphere to a potential temperature close to that obtained during the day in the layers close to the surface.
A lateral flow brings replacement air to the surface from cooler regions, and another lateral flow spreads warm air in the upper atmosphere to the cooler regions, where the air must sink. Part of that happens over the day/night boundary and part as a latitudinal cell. In the absence of radiative heat transfer the cooling effect of the Earth's surface is not effective, as a temperature inversion stops it. The surface winds are not likely to be strong, and the lower lateral winds are likely to be strongest at a significant height.
All of this adds heat to the atmosphere in such a way that the upper atmosphere warms enough to make the temperature gradient almost everywhere less than the adiabatic lapse rate. This restricts the vertical mixing to a very small part of the atmosphere, while the rest of the atmosphere gets gradually more and more isothermal. If we start far from this state, reaching it takes a long time, but having enough time is not an issue for a counterfactual theoretical exercise.
Whether atmospheres without radiative absorbers would form temperature profiles that are isothermal or characterized by an adiabatic lapse rate continues to intrigue me, despite its doubtful relevance to our own climate. On real world planets, surface temperature, length of day, and latitudinal variations will affect the stability of a temperature profile to a greater or lesser extent. I’m interested particularly in hypothetical climates where these variables play a lesser role – e.g., cold planets with very long days, such that an atmosphere can become relatively stable. In a hypothetical extreme case of a static atmosphere, would we find an isothermal atmosphere or an adiabatic lapse rate?
The following strikes me as an argument in favor of an adiabat, but since I may have neglected important considerations, I mention it as a subject for comment rather than as an assertion of fact.
Consider a gas column, and arbitrarily choose a “boundary” location separating the upper and lower part. Molecules, with velocity and kinetic energy, will be traveling downward across this boundary from above, and upward from below. Because of gravity, the mean velocity and kinetic energy of molecules traversing the boundary downward will exceed the overall mean velocity/kinetic energy of molecules in the upper layer. Conversely, the mean velocity/kinetic energy of upward molecules will be less than the overall mean for lower layer molecules. However, the number and energies of molecules crossing in each direction must be the same in a static atmosphere. Therefore, the overall means must be higher in the lower layer than in the upper – i.e., the temperature must be higher, with the differences determined by the strength of the gravitational force. If they were to start at equal temperatures (i.e., an isothermal atmosphere), the downward traveling molecules would heat the lower layer and the upward traveling molecules would cool (reduce the average kinetic energy of) the upper layer.
This would be the result of energy exchange at the molecular level and would not depend on bulk transport of air parcels.
The above implies that an isothermal atmosphere would tend to develop an adiabatic lapse rate. Is this consistent with climate observations that tell us that under conditions where a particular atmospheric column exhibits a subadiabatic lapse rate or even a temperature inversion, the column resists convection and maintains a gradient of increasing potential temperature with altitude – i.e., a form of temperature stratification? I suggest that it may be. The stratification, as far as I can tell, is the result of unequal heating or cooling of different atmospheric masses, or of the atmosphere and surface. I suggest that it would not occur without differential heating/cooling. If that is correct, a stratified atmosphere would change to an adiabatic profile once the differential heating or cooling was removed.
I’m aware there may be flaws in this argument, and I would welcome comments. In the meantime, although I’m still more or less agnostic about the issue, I tend to think, tentatively, that a static atmosphere will exhibit an adiabatic lapse rate unless perturbed to do otherwise. I’m also aware that this is a prevailing view in the geophysics literature and textbooks, but that doesn’t necessarily make it correct. I’ve tried to derive it from basic principles.
Because density and pressure are continuous variables, molecular concentrations immediately above and below the boundary (e.g. within one or two molecular distances from each other) will be almost identical. For the boundary to be static, the downward pressure from above must equal the upward pressure from below. Downward pressure will be a function of mean kinetic energy plus gravity of the upper layer molecules that happen to be at the top of the boundary at any particular instant, while upward pressure will be a function of mean kinetic energy minus gravity of the lower layer molecules on the underside of the boundary.
If the gas were isothermal, the mean kinetic energies would be the same, not only at the boundary but throughout both upper and lower layers. However, gravity would add pressure to the top and subtract it from the bottom, and so total pressure would be net downward. The result would be to compress the boundary downward, warming the lower layer and cooling the upper one, thereby establishing a lapse rate as a function of the gravitational force g.
NOTE – the entire last paragraph of my above comment was copied unintentionally, because I forgot to delete it before copying to the comment box. It was a start of an earlier version of the same argument, but does not present the argument in a manner that makes the best case, and can probably be ignored.
Fred,
In your discussion you missed the selection effect that I wrote about in my message above
http://judithcurry.com/2011/05/08/ncar-community-climate-system-model-version-4/#comment-68177
and in the message following that. There I point out that the slowing down of the molecules going up does not make the average velocity smaller at higher altitudes, because of the selection effect: the slowest molecules never get significantly high. The selection makes the likelihood of upward motion larger, the higher the original speed of the molecule. Therefore the velocity distribution doesn’t change with altitude.
I checked my archives and found these two papers related to the issue
http://www.knmi.nl/publications/fulltexts/verkley_gerkema.pdf
http://web.ist.utl.pt/berberan/data/43.pdf
The first of these papers shows how the maximum entropy state of an insulated column is isothermal, while constant potential temperature is obtained when heat flows continuously through the column at a level that maintains convective mixing.
The second paper discusses how the same velocity distribution is maintained at all altitudes when molecular conduction is the only mechanism of heat transfer.
Pekka – Thanks for the references. To me, neither they nor your original point about adding molecules to a system from a surface are inconsistent with my conclusion that a column of gas of constant mass that cannot exchange heat at any point along its height with its surroundings (not merely no net transfer) will establish an adiabatic lapse rate. The second article you mention does not appear to adequately address the point that gravity will distort the otherwise isotropic distribution of kinetic energy even if it doesn’t change its overall value. The first article leaves much room for differing conclusions.
The current text books and literature all appear to accept the adiabat as the established state of an atmosphere unrelated to radiative transfer (including the text I currently use as a main reference source – Raypierre’s 2011 book). I may be wrong in my conclusions, but if so, many others will be equally wrong. However, although you have cited references, you have not addressed the points I made in my comment. If my analysis is wrong, it should be possible to state why. In the example I gave of a column arbitrarily divided at any point into upper and lower layers, I don’t believe any selection of the type you mention is possible to compensate for the effect of gravity, because any unusually energetic molecule headed upward will be associated with another one headed downward, but the first will be decelerated by gravity and the second accelerated. You could then repeat the arbitrary division at any other point in the column to observe the same thing. The result, in my view, remains an adiabatic lapse rate, but again, I don’t feel confident enough to claim that this couldn’t change with new evidence.
Note also that this comment by Quondam
http://judithcurry.com/2011/05/08/ncar-community-climate-system-model-version-4/#comment-68106
and the subsequent discussion present a proof, by contradiction with the second law, that the equilibrium state of a thermally insulated column of gas cannot be anything but isothermal.
I am also still not convinced that isothermal is the highest entropy state. Surely isentropic, almost by definition, is the highest. That is uniform potential temperature. As I mentioned, if you take an isothermal state and thoroughly mix it, with gravity present, it will end up isentropic and theta will be the integral that is conserved in the absence of boundary fluxes. To me, entropy maximization is achieved by mixing.
Even though air molecules move fast and have a short mean free path (less than a micron), so that potential energy changes between collisions are about eleven orders of magnitude less than the kinetic energy, over time it should matter.
Pekka – The discussion you refer to makes assertions, but I don’t believe it proves anything, for a multitude of reasons that I won’t go into, but which include the fact that lapse rates are a function of the molar heat capacity of a gas. More important, however, your comments have not addressed the question I asked. What, if anything, is incorrect about the reasoning I described above indicating why a gas column will establish an adiabat? If there is a flaw, it should be specifically identifiable. If no flaw can be found, the conclusion is probably correct, but that conclusion must always remain tentative because new evidence might emerge that proves it wrong.
At this point, I must assume that the adiabatic lapse rate is what an insulated static atmosphere will establish – again, for the specific reasons I described. The assumption is tentative, but I’m reassured by the acceptance of that view by experts in geophysics who have evaluated the issue in detail, and the lack of evidence to date convincing me to change that opinion. If you see a specific flaw in my analysis, please let me know.
After thinking for a while on the argument presented by Quondam, I do indeed believe that it contains a valid proof. I give here a slightly modified version of the proof.
We have a thermally insulated column of gas. Let’s connect a thermocouple between the two ends of the column. If the equilibrium is not isothermal, the column will maintain a temperature difference between the two ends and we can extract electrical power from the column, i.e., we transform heat energy into electrical power. That is in contradiction with the second law. The only resolution is that the equilibrium is isothermal.
Pekka
I’m a little confused.
Doesn’t the 2nd require only that inputs match outputs?
Surely, if you extract electric current through a thermocouple you no longer have the same column of gas; you have a column of gas plus a mechanism that alters the column’s equilibrium state until you are again below the temperature-differential threshold where the thermoelectric effect begins to produce current in the thermocouple.
No?
Otherwise, your proof would be equivalent to saying that an inflated balloon is equivalent to an inflated balloon plus a sharp pin, and therefore it is impossible to ever inflate a balloon.
I used the thermocouple as an example because it may be the technically simplest way of building a perpetuum mobile of the second kind related to this problem.
The basic issue is that the claim is that natural processes will create and maintain a temperature difference, i.e., that they will move heat from a lower temperature to a higher one. That is not possible for any closed system, and the theorem applies to the real temperature, not the potential temperature. The perpetuum mobile of the second kind is formed when we connect a system that does that to another system that is not affected by gravity, or is affected with a different coefficient, as in Quondam’s original formulation.
Pekka – Regarding the thermocouple experiment, I’ll have to think more about all the details, but one obvious point is that the experiment would change the situation from an equilibrium one into a steady state in which heat is continually supplied to the bottom of the column from the planetary surface (e.g., via solar radiation), and that energy is what is helping to maintain the gradient and allowing electricity to flow. In that case, the sun would be a heat source and the cooling via the thermocouple (or simply a metal rod connecting bottom to top) would provide a heat sink. I had mentioned much earlier that I believed the adiabatic lapse rate required coupling of atmosphere to surface. It may be that if they are decoupled, the column would eventually become isothermal (with or without a thermocouple), but I’ll have to consider it further. Notice though, that when surface and atmosphere (without radiative absorbers) are coupled and surface temperature is not changing, there is no heat flow through the column and so we have an equilibrium system in conjunction with the adiabat.
Fred,
Let’s assume that we start with an isothermal initial state. If the equilibrium has a temperature difference between the upper and lower ends, we may wait for a while for a differential to build up and then start to produce electricity. The column would cool and finally get too cold, but during all this time we would transform heat to electricity in a system that stores energy only in the gas column. This is most certainly against the second law.
Pekka – I’m not sure of the conditions for your last conjecture – coupled to the surface or decoupled? However, in any case, under conditions when isothermal is not an equilibrium condition, letting the system go to equilibrium could generate usable energy. In this case, work would be done by gravity, and the system would end up with lower potential energy. I have to go out, but will return later for more of this interesting discussion.
Pekka
Again, I’m left head-scratching.
“The column would cool and finally get too cold, but during all this time we would transform heat to electricity in a system that stores energy only in the gas column.”
If the column differential eventually gets too low to convert heat to electricity, then the perpetual motion isn’t perpetual, and 2nd is maintained.
If the column has energy coming into it from outside or flowing from it to outside then the system isn’t closed, and the 2nd can’t be applied.
If the column has no energy coming into it from or flowing out of it to outside, then what is it meant to model? Certainly not a column in our atmosphere either by day or by night.
If no energy from outside, also, the GHE is irrelevant, but I’m not sure that’s a factor we’re considering.
I’m perfectly happy to agree that a column in a closed system will not have a temperature differential capable of producing energy for very long.
I’m just not sure it’s still the same column you started talking about anymore.
In case someone might prefer math to rhetoric,
http://mysite.verizon.net/files.2011/isothermal.pdf
Fred,
You state that you do not believe that “any selection of the type you mention is possible to compensate for the effect of gravity”. I would say that this just tells me that you have not gone through the argument. It took time for me as well to believe that things can go like that, but a careful calculation shows that it’s indeed the case. This is the subject of the second paper, although its title mentions only the barometric formula.
Since in local thermal equilibrium the horizontal velocity components have the same distribution as the vertical component, and since gravity does not influence horizontal motion, it follows from the fact that the distribution of the vertical component doesn’t change that the isotropic distribution is maintained automatically.
If one considers molecules in isolation, how can the horizontal component have the same distribution as the vertical component when the latter is distorted by gravity? That doesn’t seem to make any sense. It’s true, as I mentioned earlier, that in an equilibrium situation, the downward distortion for any set of molecules is compensated by the upward motions of molecules from below, specifically because the latter have a higher temperature. That is only consistent with an adiabatic lapse rate. If the gas were isothermal, there would be no compensation, and the vertical energy distribution would be directed downward in a non-equilibrium fashion.
Here I was not referring to the example of noninteracting molecules, which is just an introduction and an interesting observation in its own right, but to the calculation related to infinitesimally differing altitudes. In that analysis local thermal equilibrium is maintained everywhere, and the calculation concerns the upward and downward movements of individual molecules and the relationships between the velocity distributions and densities at those close-by altitudes.
Pekka – Maybe we’re misunderstanding each other. At LTE, the upward and downward energy flows will of course be equal. My point had to do with why that is true. It is that the downward acceleration caused by gravity is balanced by equally energetic motion of molecules in an upward direction, but that this equality requires that the upward moving molecules come from a set with a higher temperature in order for their energies to compensate for gravitational pull downward.
Bart,
It’s not really perpetual, because we do not violate the first law and we do not have any external energy source, but the second law does not allow the conversion of any part of the thermal energy to electricity under the conditions of that example.
The process is that of a perpetuum mobile of the second kind. It can be made perpetual by connecting the bottom of the column to an inexhaustible source of heat at a constant temperature. That is a pure case of a perpetuum mobile of the second kind: only one source of heat, which is converted to electricity.
Pekka
So the temperature gradient can only exist until observed?
Is that a joke?
We are not discussing QM. I’ll take that some other time.
Pekka
To clarify, I’m attempting to describe the Carnot Engine, as opposed to Perpetual Motion of the Second Kind.
Take the temperature gradient Fred predicts as a reservoir of energy so large as to be considered infinite within the closed system.
Your thermocouple may then be only as efficient as Carnot’s Engine, but no more so.
So long as the efficiency of the Carnot Cycle is not exceeded by your thermocouple, the situation does not violate 2nd, no?
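For what it is worth, the Carnot bound invoked here is easy to put numbers on (illustrative values of my own, e.g. a 6.5 K/km gradient over 10 km):

```python
# Sketch: Carnot efficiency for the temperature difference a 6.5 K/km lapse rate would
# maintain over a 10 km column. Numbers are illustrative, not taken from the thread.
T_hot = 288.0                      # K, column base
T_cold = T_hot - 6.5 * 10.0        # K, column top
eta = 1.0 - T_cold / T_hot         # Carnot limit
print(f"T_top = {T_cold} K, Carnot efficiency limit = {eta:.1%}")
# ~22-23%; the point of contention is not the efficiency but whether any work at all
# may be extracted from a single equilibrium reservoir, as the second-law argument says.
```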
Bart,
It does violate the second law as soon as it produces any electricity starting from a situation with all heat at the same temperature. That is why I wrote that we start from an isothermal column and wait for the temperature differential to build up.
As I wrote, we have a pure example of the perpetuum mobile of the second kind when we connect the column to one heat source, which stays at constant temperature for an unlimited period.
To run a Carnot engine or any other heat engine, we need two heat pools at different temperatures. The internal process of the gas column doesn’t qualify for that, as it’s internal to our system, or part of a wider engine.
Pekka
You’ve been very patient.
All of my remaining questions collapse, given your initial conditions, which somehow I inverted.
Thank you.
And yes, thank you for recognizing my feeble QM reference in the spirit it was intended.
There is something enjoyable in these discussions. Even when my main views hold, I learn about their physical nature. That certainly applies to this thread.
Quondam brought up a fundamental argument whose power for this question I had overlooked, and all the other argumentation has clarified the details to a significant degree. (It has always been true that I learn most while trying to explain the issues to others, and that is true both when I’m basically right from the beginning and when I have erred and must find where the error is. In collaborative research that has been one of the most efficient methods of getting over obstacles.)
Pekka and Bart – On returning to this thread, I have little to add in general. There remains disagreement on the fundamental point of whether a gas column, left to its own devices, will become isothermal or establish an adiabatic lapse rate. I’ve cited my reasons for the latter, but I’m still interested in any flaws that can be found in the mechanism I described. I also tend to think now that the same principles apply regardless of whether or not the column is coupled to the surface, although the ultimate temperatures will probably be different.
My one new thought on the preceding involves the thermocouple. I conjecture that a thermocouple can’t completely short circuit the temperature differential between the bottom and top of a column, because upward transfer of energy will be opposed by gravity regardless of whether it takes place by molecular collisions in a gas or outside the column through the flow of electricity. The alternative (unimpeded upward flow of electricity) would imply the potential for a perpetual motion device that produces work. If we could move energy upward without interference from gravity, we could construct an apparatus in which energy is moved upward to an enormous height, converted to mass that is allowed to fall to power an attached device (i.e., to do work), reconverted to energy at the bottom, and the process repeated.
Fred
As with all things Thermodynamics, this seems a situation that is very dependent on details.
If the initial condition is isothermic and there are no external influences, the column must be restored to an isothermic state in the long run regardless of internal processes.
If any of those conditions is not true (initially isothermic, no external forces, long run), then something else may be observed.
In the real world, for real columns, I’d expect ‘something else’ to be observed quite regularly, and nothing in this discussion can predict without experiment and observation which would dominate.
Bart – As long as an isothermal gas column is not the equilibrium state, it would move toward equilibrium in the absence of external perturbations. The large majority of geophysicists and texts agree that the equilibrium state is an adiabatic lapse rate (see, for example, Pierrehumbert – Principles of Planetary Climate). Independent of their views, I’ve described the mechanism that in my view must lead to this conclusion, but others disagree and I’m interested in whether they can identify flaws in my description. I’m not sure I expect agreement at the end of the day, but it’s an interesting topic to explore.
Fred,
I’m disturbed by the fact that you don’t accept a valid proof based on the second law, but try to get around it without any real proposal as to where it is in error.
You refer to the fact that the isothermal equilibrium is not mentioned in the textbooks on atmospheric physics, but can you point out a place where these textbooks have not assumed, explicitly or implicitly, that there is a mechanism for heat loss from the top of the atmosphere to space? The textbooks have no real interest in this counterfactual case. They might include it anyway, but the fact that they don’t is not strong evidence of anything.
All normal textbook presentations discuss radiation and classical thermodynamics, neglecting conduction as too weak to be of concern. Without radiation, only convection is left, but that fails to describe situations which are statically stable. When nothing else is left, conduction dominates, and conduction is controlled not by potential temperature but by real temperature, even in a gas column in a gravitational field. This situation is simply overlooked by the textbooks as an unrealistic and therefore uninteresting case.
Concerning your more detailed arguments: everything changes very little when we go very little up or down. Therefore saying that one specific variable changes little doesn’t prove that it’s not as important as other variables. One has to calculate more carefully.
The noninteracting case of Berberan-Santos et al. is certainly contrary to the intuition of most, but it’s easy to verify that it’s still true. It’s exactly true, not approximately. Therefore it’s meaningful to take any horizontal level as the base level and choose the upper level to be less than a molecular mean free path above the base level. Now we are close to the case of noninteracting molecules and the result of Berberan-Santos can be applied. As far as we believe that neglecting the interactions doesn’t change the conclusions, we find that the distribution of molecular velocities remains Maxwell-Boltzmann with exactly the same temperature when we take this infinitesimal step upwards. This is the reasoning that I have in mind when I conclude that the result is unchanged by the interactions between molecules. You may wonder about neglecting collisions, but whatever you think of them, the basic effect remains. The collisions are not likely to make any difference when the velocity distributions do not differ. Thus the result appears fully self-consistent.
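The step sketched here can be written out in one line (my notation; upward-moving molecules only, collisions neglected between the two levels). If the upward flux through the base level has the flux-weighted Maxwell form, then with $w'^2 = w^2 - 2gz$,
$$\phi_z(w')\,dw' \;\propto\; w\,e^{-m w^2/2kT}\,dw \;=\; w'\,e^{-m(w'^2+2gz)/2kT}\,dw' \;=\; e^{-mgz/kT}\,w'\,e^{-m w'^2/2kT}\,dw',$$
so the distribution at altitude $z$ has exactly the same temperature $T$; only the prefactor is reduced, and by precisely the barometric (Boltzmann) density factor.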
Here we have a microscopic explanation for the result that we knew to be true from the simple second-law proof. At this point I’m fully satisfied with a good explanation, although its details have not been worked out fully mathematically.
Pekka – We are beginning to repeat ourselves, so it may be time to stop, and agree to disagree. I have seen no “valid proof” of second law violations for an adiabatic profile. More important, I suggested a mechanism that requires the adiabat for equilibrium. That mechanism entails a thermal gradient to ensure isotropic distribution of kinetic energies at all levels, and no contradictory evidence or flaw has been cited as far as I can tell. Any calculations that neglect the downward skewing of kinetic energies for all sets of molecules (compensated only by higher mean kinetic energies at lower altitudes) will inevitably lead to incorrect conclusions. I respect your right to disagree, so perhaps we can leave it there.
However, I don’t believe the literature I’ve seen on the adiabat requires anything about escape of energy to space. Numerous examples (including those in Raypierre’s text) assume no requirement for radiative absorbers or any other property of the involved gases except their heat capacity. In this regard, these authors appear to agree with me that the adiabat is the preferred state, although I depend more on the principles I cited than on the opinions of others.
Perhaps the missing factor in this discussion is the compression resulting from gravity. Air with molecules all having the same energy level will have greater energy per unit of volume (heat) lower in the atmosphere, which we see as the lapse rate. For an air column to have the same temperature throughout, the molecules lower in the column would need to be at a lower energy level than those higher in the column, which is contradicted by both convection and surface heating.
Fred,
I’m not ready to give up yet.
You say “I don’t believe the literature I’ve seen on the adiabat requires anything about escape of energy to space.”
Still all the models imply a continuous upward flux of energy. Where do you think that flux goes?
I wasn’t referring to models of Earth’s climate, Pekka, but to textbook and literature references to lapse rates as phenomena that will characterize planetary atmospheres independent of the presence or absence of radiative properties that permit energy loss to space. I’m somewhat confident the adiabat would be what becomes established in atmospheres coupled to the surface (so that the relevant entropy is that of the entire planet), but whether the coupling is necessary is less clear to me. I’ll keep thinking about this and let you know if I find additional material on it – maybe on your blog.
One more thing I did was to check whether anything relevant can be found in Pierrehumbert’s book. I couldn’t find anything.
Chapter 2 goes through the basic thermodynamics, where the adiabatic lapse rate has a significant role, for good reasons.
Everything else related to the temperature profile starts from checking the radiative energy balance at the top of the atmosphere, and that always means that the top of the troposphere is losing energy by radiation to space. I don’t think he says anything about an atmosphere that is not emitting from the top of the atmosphere, or about an atmosphere whose temperature profile does not depend on the existence of such energy loss through emission.
What one author says isn’t determinative, but the concept seems implicit in several places, and I think more explicit in the paragraph starting at the bottom of page 387.
But even that paragraph refers to an optically thin atmosphere, not one that is totally non-emitting. Therefore the earlier discussion of the optically thin atmosphere in Chapter 3.6 applies to this case to the extent that radiative heat transfer dominates over conduction. While the net heat transfer with the surface must vanish in comparison with its typical strength, there is still the very small energy loss from the top of the troposphere, which determines the temperature of the tropopause according to (3.27). Without any emission and absorption, there is no basis for this equation.
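Assuming – and this is only my guess at which relation is meant, not a quotation from the book – that (3.27) is the standard optically thin "skin temperature" balance, the magnitude involved is roughly:

```python
# Sketch only: an optically thin skin layer in balance with the outgoing flux,
# sigma*T_skin^4 = OLR/2, i.e. T_skin = T_eff / 2**0.25. T_eff ~ 255 K assumed for Earth.
T_eff = 255.0
T_skin = T_eff / 2 ** 0.25
print(f"T_skin ~ {T_skin:.0f} K")   # ~214 K, roughly tropopause-like
# With no emission or absorption at all, this balance has no meaning, which is the point.
```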
The situation changes when the emission from the high atmosphere is no stronger than the sum of the conductive and radiative heat fluxes from below, calculated on the basis of the adiabatic lapse rate. Then the lapse rate settles to the value that makes the nonconvective net heat flux zero at all levels. (If the adiabatic lapse rate were exceeded at some altitudes, convection would be initiated at those levels.)
Concerning the example of a planet with a very long day, my intuition tells me that the outcome is a warm, almost isothermal atmosphere with a strong temperature inversion close to the surface on the night hemisphere.
This is due to the fact that the heat transfer from the surface to the low atmosphere is efficient as soon as convection is initiated, while the heat loss from the atmosphere is limited to the slow convective heat transfer on the dark side of the planet. Lateral mixing would maintain nearly the same temperature at higher altitudes on both sides, and the net addition from the convective heat transfer must ultimately be reduced to the same low value as that of the conduction on the dark side. When that point is reached, the atmosphere is (almost) isothermally warm, according to my intuition.
” The rising motion stops when the temperature of the parcel is equal to the temperature of surrounding air. ”
You have completely forgotten “DENSITY” and “BUOYANCY”!
You still have something to learn in physics.
Yeah. Very true. Air density and buoyancy are fundamental physics that were completely absent in your hypothesis of rising air – unbelievable. LOL.
AGWers tend to believe something unphysical and miss the physical properties of gases. That’s why it is so sad with the climate community on radiation, heat and the magnitudes of CO2 effects.
Sam NC
The short of it is that in Thermodynamics, you can treat this as an Entropy maximization problem (assuming you treat everything as ideal). Entropy rules, and once you have maximum Entropy, anything you do will make something else shift until you return to that same maximum Entropy.
Density and buoyancy fall away in the long run, or are carved away before the example starts, by the initial condition that the whole column is the same temperature and the insulation of the column from outside influences.
(see Quondam’s http://judithcurry.com/2011/05/08/ncar-community-climate-system-model-version-4/#comment-68497)
You could have a hydrogen balloon tethered at the bottom of the column, and cut the anchor line, and though it rises buoyantly to the top, it won’t affect the temperature, as it and everything else in the column are at the same temperature.
Your aggressive writing makes reasonable discussion with you impossible and all discussion too unpleasant. Here again you are totally wrong, but I’m not motivated to explain.
OK. Learned old folks demand respect while disagreeing oddly with fundamental physics, and I am really itching with curiosity as to whether my high-school understanding of basic buoyancy and density is wrong.
My apology for my aggressive reply.
Now, can you prove/explain why my fundamental understanding of physical density and buoyancy is wrong?
I’m not the only one who has been irritated, and it’s not about having different views but about the way they are stated – like here: “You have totally forgotten …”.
Density and buoyancy are not forgotten, and they are important, but they are linked to the temperature by the most basic part of thermodynamics. When the air parcel rises, its pressure is always the same as that of the neighboring air at the same altitude, and its temperature and density fall as required by the adiabatic process and the equation of state of the gas. The density is the same as that of the surrounding air when the temperature is the same. That’s the point where buoyancy ceases to maintain the rising motion.
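A minimal parcel-theory sketch of that bookkeeping (illustrative numbers of my own): the parcel always takes the environmental pressure, cools at g/c_p, and stops rising where its temperature – and therefore, at equal pressure, its density – matches the surrounding air.

```python
# Parcel sketch: same pressure as the environment at every level, adiabatic cooling,
# buoyancy ~ (T_parcel - T_env). Illustrative numbers only, not from the thread.
g, cp = 9.81, 1004.0
gamma_d = g / cp                 # dry adiabatic lapse rate, K/m
gamma_env = 0.0065               # assumed environmental lapse rate, K/m
T0_parcel, T0_env = 290.0, 288.0 # parcel released 2 K warmer than the surface air

for z in range(0, 1501, 250):
    T_parcel = T0_parcel - gamma_d * z
    T_env = T0_env - gamma_env * z
    # at equal pressure, density ~ 1/T: the warmer parcel is the less dense one
    state = "rises (warmer, less dense)" if T_parcel > T_env else "stops (no longer buoyant)"
    print(f"z = {z:4d} m  parcel {T_parcel:6.2f} K  environment {T_env:6.2f} K  -> {state}")
# crossover near z = 2 K / (gamma_d - gamma_env), roughly 600 m with these numbers
```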
It’s just not possible to go through all the basics of thermodynamics in discussions on this site. The discussions are often a very good place for learning about some specific issues, but that works only when the basics are known, and the textbooks are there for that.
Many parts of classical thermodynamics are amazingly difficult to master, but I’m not talking about those; I’m talking about the basics, which include issues like what I have written above. I would expect regular commenters to know the basics before they start declaring how others have totally erred.
Still got your ears on?… This one is for you, Eli; sounds like a wrap, too. Who would have thought that a doc could dunk it?
http://www.theblaze.com/stories/climate-scientists-release-profane-video-bashing-deniers/
Right on.
Well, as I already wrote in the first post, there is no way to have an isothermal atmosphere over a spherical body where the temperature differences are as large as 300 K.
At least not in our Universe.
Perhaps the difference is in the fact that what I think about is not some fiction but a REALISTIC physical case – I think of the Moon with a (e.g., He) atmosphere.
I am not sure what you think about – “static atmospheres” or “isothermal spheres” are indeed objects that can’t be real planets, so it is of little interest to speculate about what would happen with them.
On the other hand however a Moon with an atmosphere is a well defined planet which could exist and the question about the atmospheric circulation makes physical sense.
So in this case what I did was the following :
Assume an initial temperature difference of 300 K at the equator and the atmosphere in thermal equilibrium with the surface.
Rotate the planet.
When a point crosses the boundary from the night half (where the temperature is minimal) to the day half, the surface begins to heat up very fast.
So there is a heat transfer from the (hot) surface to the (cold) air proportional to the temperature gradient.
This depends on the thermal inertia, speed of rotation, convection coefficient and conductivity, but it is quite large – of the order of magnitude of 100 W/m².
So you clearly see that in this case a rather trivial calculation (supposing that the atmosphere is static) shows that the air will show the same temperature variation as the surface – e.g., 300 K, with a smaller or larger phase delay.
It would be highly non-isothermal.
But an atmosphere displaying a temperature difference of 300 K would of course not stay static.
So in the second phase I did a simple 1-D model with a gas at 100 K on one side and 400 K on the other.
You obtain a flow and it is not surprising either.
Now what I did not do was go to 3-D, because the 3-D Navier-Stokes equations can’t be solved.
So while it is obvious that a planet with a non-GHG atmosphere would be neither isothermal nor static, I can’t say what the flows would be in 3-D (e.g., adding vertical and N/S gradients).
But intuitively, looking at Jupiter, whose atmosphere is more than 99% H2 and He, which are not GHGs, I suppose that there would be qualitatively the same features.
This is also supported by studies of Rayleigh-Bénard flow, which show that a fluid heated from below organizes into rotating toroidal structures.
From both of these (qualitative) elements I infer that the atmospheric flows would be structured in tori, as on Jupiter, where the fluid follows the surfaces of the latitudinal tori.
For some from W to E and for some from E to W.
Surely you can’t mean that if you put a He atmosphere on the Moon, the atmosphere would become isothermal while at the same time there would be a 300 K temperature difference between the day half and the night half.
Conversely if the surface became isothermal then there would have to be an energy transfer of some 600 W/m² from the day half to the night half and you’ll agree that this is huge.
As the only way to do this transfer is the atmosphere, the atmosphere would again not be isothermal.
There is simply no physical scenario leading to an isothermal atmosphere for a spherical body heated on 1 half and cooled on the other.
Whether there are or are not GHG is not important.
Tomas,
If you have read all my messages in this thread, you know that my answer is that the atmosphere warms up to the extent that convective mixing is suppressed – not to zero, but by a very large factor.
One of the main points is that the surface will not be able to cool the atmosphere effectively on the dark side, because the only heat transfer processes there are conduction and lateral convection with very little vertical (downwards) convection because of the temperature inversion.
One of the main points is that the surface will not be able to cool the atmosphere effectively on the dark side, because the only heat transfer processes there are conduction and lateral convection with very little vertical (downwards) convection because of the temperature inversion
You consider 10 W/m²K little?
The hot air coming to the night side (at 400 K) and seeing the surface temperature drop to 100 K will transfer up to 3,000 W/m² into the solid!
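That figure is just Newton's law of cooling with the assumed coefficient:

```python
# Sketch: the back-of-envelope behind the 3,000 W/m^2 figure -- Newton's law of cooling
# q = h * (T_air - T_surface), with the assumed coefficient of 10 W/(m^2 K).
h = 10.0          # W/(m^2 K), assumed convective heat transfer coefficient
T_air = 400.0     # K, warm air arriving on the night side
T_surf = 100.0    # K, cold night-side surface
q = h * (T_air - T_surf)
print(f"q = {q:.0f} W/m^2")   # 3000 W/m^2
```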
Just try to run hot air above a cold horizontal stone plate; you will see how fast the plate heats up.
Why do you mention “convective mixing” or “lateral convection”?
It is just a banal heat transfer between fluids and solids which is dealt with by engineers in every form of heat exchanger.
You can’t have a stable difference of 100 K or more between layers of gas unless they are completely static, which they are not in this case.
But for me it is still not clear – do you mean that the atmosphere would be approximately isothermal and/or static on a planet with 400 K on one half and 100 K on the other half?
Or do you mean something else that I have still not detected?
How many times do I have to repeat that what I see is an insulating temperature inversion building up near the cold surface, with most of the atmosphere staying warm? This is the picture that I have in mind at the most rudimentary level. Details must be added to that, but that’s the starting point.
I add one observation.
The energy received from the Sun through a clear sky at the equator during one day corresponds to a 2-3 °C increase in the temperature of the whole air column above that area. If the potential temperature rises by more than that in the column, the heating does not reach the whole column. The heat transfer from the air to the surface and from the surface to space is certainly much less. Thus the surface cools far below the temperature of the atmosphere, and much of the daytime heating goes into reheating the solid or liquid surface.
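A rough check of the 2-3 °C figure (my own assumed numbers: daily-mean equatorial insolation of about S0/π at equinox, an assumed clear-sky absorbed fraction, and a column heat capacity of c_p p_s/g):

```python
import math

# Sketch: does one day of equatorial sunshine amount to ~2-3 C for the whole column?
# Assumed values: S0 = 1361 W/m^2, 75% absorbed (clear sky), equinox geometry.
S0, absorbed_fraction = 1361.0, 0.75
daily_mean_insolation = S0 / math.pi                                # W/m^2 at the equator
energy_per_day = daily_mean_insolation * 86400 * absorbed_fraction  # J/m^2

cp, g, p_s = 1004.0, 9.81, 101325.0
column_heat_capacity = cp * p_s / g                                 # J/(m^2 K), whole column

dT = energy_per_day / column_heat_capacity
print(f"energy per day ~ {energy_per_day:.2e} J/m^2, column warming ~ {dT:.1f} K")
```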
The whole process is likely to become quite gentle reasonably rapidly. There seems to be nothing to prevent the radiatively non-interacting atmosphere from becoming nearly isothermal, and also rather warm compared to the average surface temperature.
Can I ask a naive question here? As I understand it from comments above, a model is started with some “earth-like” parameters (including temperature), then run for a long time until it stabilises (maybe at a different temperature). Presumably if it never stabilises then changes are made until it does. Then forcings like CO2 are added to match the last century or so. Now, one of the arguments that anthropogenic CO2 must be a main cause of the increase in temperature is that the models can’t reproduce said rise without the ACO2. However, I have just said that the models are run until they stabilise, i.e. show no increase in temperature. How can a lack of an increase in temperature in a model designed to have no increase in temperature be evidence for anything other than the modellers’ success in creating a model that is stable?
The simplest way to put it is that climate is determined by forcing only. The forcing changes come from variations in the sun, the earth’s atmospheric constituents, and albedo. So initially a climate model is run to equilibrium with a constant forcing, and then the forcing is changed, e.g. by adding CO2, solar variation, volcanic dust, aerosols, etc. The transient response is how the earth responds to gradual changes in forcing over time. The equilibrium response is the new equilibrium after the same new forcing has been applied for a long time (possibly centuries).
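As a rough illustration of that spin-up-then-force procedure, here is a zero-dimensional energy balance model standing in for a GCM; the heat capacity, feedback parameter, and forcing step are illustrative assumptions, not CCSM4 settings:

```python
import numpy as np

# Zero-dimensional energy balance model: C dT/dt = F - lam * T,
# where T is the temperature anomaly and F the forcing.
C   = 1.5e9   # effective heat capacity, J/(m^2*K) (~500 m of ocean over 70% of the globe; assumed)
lam = 1.2     # feedback parameter, W/(m^2*K) (assumed)
dt  = 3.15e7  # one-year time step, s

def run(T0, forcing):
    """Integrate the anomaly forward, one step per forcing value."""
    T, out = T0, []
    for F in forcing:
        T += dt * (F - lam * T) / C
        out.append(T)
    return np.array(out)

# 1) control run: spin up to equilibrium under constant (zero-anomaly) forcing
control = run(0.5, np.zeros(500))             # drifts back toward zero
# 2) forced run: apply a step of 3.7 W/m^2 (roughly a CO2 doubling)
forced = run(control[-1], np.full(500, 3.7))
print(f"transient response at year 70: {forced[69]:.2f} K")
print(f"equilibrium response:          {3.7 / lam:.2f} K")
```

The transient value lags the equilibrium one because of the ocean-like heat capacity, which is the distinction drawn above.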
As I see it, the situation you describe is one possible way of estimating climate sensitivity. Due to the potential for chaotic behavior it’s not automatically guaranteed that the models will reach some stable situation, in the sense that long-term averages of climate indicators asymptotically approach some fixed values, but let’s assume that they do so for both concentrations of CO2, keeping all other inputs identical. If that is indeed the case, we get two different values for the average temperature, and their difference tells us what the climate sensitivity is in that model.
The point is that it is guaranteed that the model will be stable, because if it isn’t it won’t be used. It is clear from the outputs that temperature at least is very stable, but that it can be stable at a wide range of temperatures for the same CO2 level (at least for the older models; Lucia’s plot here http://rankexploits.com/musings/2009/fact-6a-model-simulations-dont-match-average-surface-temperature-of-the-earth/ was done in 2009). There is no possibility of natural variation, because the model is forced to be “stable”. This might, I suppose, be allowed to include some small variations, but in practice doesn’t seem to. Then you add an increasing forcing, and the temperature goes up. So what? This doesn’t say anything about sensitivity, because since the model can be stable at different temperatures for the same CO2, the stable temperature must be to some extent arbitrary.
Models which obey physical conservation laws should be stable anyway, and the only question is the equilibrium temperature, which depends on the parameters used for albedos, vegetation effects, sea-ice prediction, etc.; these are not perfect and can explain the inter-model variation. These models have decadal variations due to their chaotic, but slowly evolving, ocean circulations, which therefore require ensembles to cancel them out and obtain meaningful results. Stable does not mean steady temperatures year after year. That, for sure, would be wrong and recognized as such.
“Models which obey physical conservation laws should be stable”. So the hypothesis is that the climate should be stable if forcings are constant. The temperature (in the real world) has risen. If the hypothesis is correct ACO2 is to blame. The hypothesis could be false. The point is the model itself doesn’t add anything, and doesn’t prove anything.
snowrunner
I believe Noether’s Theorem would predict models which obey physical conservation laws should be symmetrical, as opposed to stable.
Maybe this is a special case I’m too unfamiliar with to comment.
In a more general sense, models may form key components of elegant proofs.
I’m certain I’ve seen examples of this in queueing theory, for example, though none come to mind after so long away from the field.
Indeed, even if such models played no role in proofs, a number of gainful fields of pure and applied mathematics would be impossible without them, so I’d have to think models in general add something.
Also, it’s entirely possible for a model to be invalid at a boundary condition (such as ‘equilibrium’) but valid between boundary conditions. I don’t think a computer program to convert Celsius to Kelvin or Fahrenheit is ‘valid’ at -500C, but with bounds checking enforced it could still correctly convert 0C to 32F.
Of course, there ought to be some mechanism to validate where the model is and isn’t a reliable representation of what it claims to simulate over such ranges.
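For what it is worth, a toy version of that bounds-checking point; this is my own example and has nothing to do with any climate model:

```python
ABSOLUTE_ZERO_C = -273.15

def celsius_to_fahrenheit(t_c: float) -> float:
    """Convert Celsius to Fahrenheit, rejecting physically impossible inputs."""
    if t_c < ABSOLUTE_ZERO_C:
        # The linear formula would happily return a number for -500 C,
        # but the conversion is not meaningful below this boundary.
        raise ValueError(f"temperature {t_c} C is below absolute zero")
    return t_c * 9.0 / 5.0 + 32.0

print(celsius_to_fahrenheit(0.0))   # 32.0
# celsius_to_fahrenheit(-500.0)     # would raise ValueError
```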
The hypothesis is based on energy conservation, which is a fundamental principle. People have looked for other ways the earth could cancel the extra trapped energy (such as more clouds somehow), and those are the things to investigate, not energy conservation itself, which no scientist, even the skeptical ones, disputes.
Jim D,
There is another possibility, that of essential chaos. That may make it impossible to determine average temperatures over any length of time or even in the limit of infinite time. That may apply both to models and to the real Earth system. Conservation of energy does not exclude the possibility of persistent variations in albedo and arbitrarily long periods of decreasing or increasing temperatures related to the albedo.
I make no claims on the likelihood of such behavior, rather I say only that there are no fundamental reasons to exclude the possibility.
Pekka, yes, there is a possibility of more than one temperature for a given forcing, such as a hysteresis albedo effect. For example, if and when we get to 800 ppm CO2, the ice caps will still be present, but the last time we were there, 35 million years ago, they weren’t, so the state will be cooler than last time because of the extra albedo, but this is a temporary situation until Antarctica melts a few hundred years later, and it warms up more. Or Antarctica could enable its own survival by adding the required negative albedo forcing, but there would be a tipping point. In this situation there may be two stable states with different albedos.
“this is a temporary situation until Antarctica melts a few hundred years later”
The melt time for a block of ice this size is measured in thousands or tens of thousands of years. Paleo records show the earth has two stable states, one at 12C and the other at 22C. The one at 12C involves ice a mile thick over much of the northern hemisphere. We are currently at 14.5C and likely heading back to 12C unless CO2 saves us.
Oversimplified. The stable states depend on the layout of the continents, and which ones are frozen, as well as on CO2 in the ocean/atmosphere system. There is a graduated scale of states, and we are headed towards a state something like that of 65 million years ago, with 1000 ppm, 22C, and no ice caps. I don’t know how long it will take, but that is the direction.
“There is another possibility, that of essential chaos. That may make it impossible to determine average temperatures over any length of time or even in the limit of infinite time.”
Are the complex climate statistics transitive, intransitive, or almost intransitive, e.g. Lorenz (1968, 1970)?
@JimD: “Models which obey physical conservation laws should be stable anyway”
Perhaps there is a fundamental issue with complex model design, in terms of the basic rules which are modeled and the level of complexity. Every consideration of physics contains assumptions, which from a broader perspective may or may not be accurate, but in the larger scheme near enough is usually good enough for engineering (when a safety fudge factor is added).
Thinking about a simple example of boiling a pot of water…
How long will it take to boil 1 L of water in a pot using a 5 MJ/h gas burner?
At the simplest level, we can assume normal atmospheric pressure, guesstimate the starting temperature, guesstimate the process efficiency, and come up with a near-enough answer.
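A sketch of that simplest-level estimate, with the starting temperature and burner efficiency as guesses:

```python
# Simplest-level estimate of the time to boil 1 L of water on a 5 MJ/h burner.
# The starting temperature and transfer efficiency are guesstimates, as in the comment.
mass       = 1.0            # kg (1 L of water)
c_water    = 4186.0         # J/(kg*K)
T_start    = 20.0           # assumed starting temperature, C
T_boil     = 100.0          # boiling point at normal atmospheric pressure, C
burner     = 5e6 / 3600.0   # 5 MJ/h expressed in W (~1.4 kW)
efficiency = 0.4            # assumed fraction of burner heat reaching the water

energy_needed = mass * c_water * (T_boil - T_start)   # ~335 kJ
time_s = energy_needed / (burner * efficiency)
print(f"roughly {time_s / 60:.0f} minutes to reach boiling")  # ~10 minutes
```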
At a more complex level, and with more information, we could model a larger number of the physical processes that contribute to the answer:
* gas burner design, flame shape and heat transfer
* heat transfer modes between the burner and the pot
* heat diffusion through the pot structure and water
* heat transfer through the body of water
* heat losses through various modes
* effect of the system on surrounding air currents causing further losses
* effect of changing gaseous molecular environment (CO2, O2, H2O vapour levels) caused by combustion on thermal losses.
Then we could consider additional levels of complexity… atomic bond energies, energy transfer between adjacent atoms and molecules, etc.
In the end, we could come up with a quantum molecular model of heat transfer which would be nearly infeasible to run on the fastest supercomputer on earth. The model may become so complex that it becomes unstable and predicts boiling times significantly different from reality.
Where did we go wrong?
At each stage of making a model more complex you would check (a) that the results are closer to observations, and (b) that the added computing cost is worth the increase in accuracy. The models they have at any stage are the result of this compromise between accuracy and efficiency, which makes climate simulations of increasing complexity feasible over time. Computer power places limits on climate model development.
BLouis,
“… near enough is usually good enough for engineering (when a safety fudge factor is added).”
Your knowledge of engineering is poor. Engineering is precise, but it provides a safety factor for quality, changes in ambient conditions, etc., to cover the required performance under uncertainty.
My comments on the requirement that certain averages are obtained asymptotically imply that the value is unique for that particular model and fixed input. Under those conditions one gets two well defined numbers and the calculation is valid.
I believe there may be a couple of locally stable minima for a given top-of-atmosphere output, and the one reached depends on the direction of approach. The analogy is to multiple potential wells in quantum mechanics and other minimization problems. Even simple models can duplicate this type of behavior (see the sketch below), so I am sure climate models can.
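As a toy illustration of how a simple model can have two stable states for the same forcing, here is a zero-dimensional energy balance with an idealized ice-albedo ramp; every number is an assumption chosen only to produce bistability, not a value from any GCM:

```python
import numpy as np

# Toy 0-D energy balance with a temperature-dependent albedo:
#   absorbed = S/4 * (1 - albedo(T)),  emitted = eps * sigma * T^4
sigma = 5.67e-8   # Stefan-Boltzmann constant
S     = 1360.0    # solar constant, W/m^2
eps   = 0.62      # effective emissivity standing in for the greenhouse effect (assumed)

def albedo(T):
    # cold and icy (0.6) below 250 K, mostly ice-free (0.3) above 290 K, linear ramp between
    return np.clip(0.6 - 0.3 * (T - 250.0) / 40.0, 0.3, 0.6)

def imbalance(T):
    return S / 4.0 * (1.0 - albedo(T)) - eps * sigma * T**4

# Scan temperatures and report where the energy imbalance changes sign:
Ts = np.arange(200.0, 330.0, 0.1)
f  = imbalance(Ts)
roots = Ts[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print("equilibria near:", np.round(roots, 1), "K")
# Three roots: the outer two are stable states, the middle one is unstable,
# so which stable state is reached depends on the direction of approach.
```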
There are two related possibilities: two or more stable minima, and quasi-stable attractors with fat-tailed PDFs for their lifetimes.
The first alternative leads to the conclusion that we will end up in one stable state out of a set of several, but we may not know their relative likelihoods. The second says that we may have the impression of the first case for periods of unpredictable length, but then transition to another state, and so on. In this second case the expectation values of climate variables are not necessarily well defined on any time scale, and the long-term averages could fluctuate forever without any changes in external forcings.
Pekka, I agree to some extent that natural variability can cause a jump from one stable state to another, making it a non-steady equilibrium. The Ice Ages had very small changes in the forcing distribution leading to widely different albedo states. However, I think ice albedo is the main factor that distinguishes these states. Perhaps we can speculate that there is a cloudy state with a similar albedo role, but there is no evidence of such a state in models or in reality.
Jim,
I’m not ready to speculate on the likelihood of the various alternatives for the real Earth, but concerning the models, I consider it important that their stability properties are analyzed carefully enough that they are known for each model, to exclude the risk of spurious results.
When the models are known to be stable enough to give non-spurious, meaningful results, the next question is whether this has been achieved by forcing them to be excessively dissipative, which would mean that they are likely to produce wrong results.
The situation is good only if the models are stable enough without too much dissipation, whether from purposeful stabilizing choices or from reasons inherent in their basic structure. If that’s not achievable, then we cannot use the models for all purposes of interest.
By stable I mean that multidecadal climate averages converge rather rapidly to their final values when forcings are kept constant and when the initial state does not introduce significant imbalances of a kind that takes long to disappear. For production runs such initial imbalances must be removed unless they are the subject of study, because otherwise they distort the results.
Pekka, I expect models have reasonable circulations and energy transports compared to reality. Too much dissipation or damping would manifest as an energy sink and a resulting deficiency in atmospheric and ocean circulations, I guess, but these budgets are checked as part of the validation. However, the area of sea-ice prediction is a less mature part of these models, and will have been tuned to some extent to maintain a realistic sea-ice cycle, or in some models sea ice may be specified simply.
Jim,
The scale where the problems I have in mind with too much dissipation might occur is on the order of the cell size and above. On those scales the average dissipative power is very small compared to the whole energy flux. The question is related to the existence and properties of large-scale oscillations and instabilities. The dissipation that leads to significant heat generation is another matter.
To give an example of the problem: it may be very difficult to build a model that is stable enough to give meaningful results and that also describes multidecadal oscillations or climate shifts. This may be a situation where very small changes in a model parameter change the results completely, and a slightly larger change leads to the onset of chaos at a level that makes the results useless.
Pekka, I don’t see a reason why the model would have either more or less chaos than the real earth system. The energy and circulation constraints are similar. The main uncertainties might be sea-ice changes or large vegetation changes that can cause local feedbacks and that might affect regional climate and global albedo. I am not sure if that would be part of the chaos you define.
Multi-decadal ocean oscillations should be simulated (but not in phase with reality) unless they depend on some unresolved aspect of the ocean circulation. I don’t know if the coupled-ocean models have them, but we see realistic variations among ensemble members that indicate something of this kind.
Jim,
If the chaos is related to slow processes, such as temperature and salinity differences in various parts of the oceans and in the ocean currents, it’s probably critical to have the driving forces and the dissipative attenuation pretty accurately right. Having even a little too much dissipation may kill the whole oscillation, while too little may lead to chaos. Getting it right may also require that issues like the cloud forcing caused by the temperature anomalies have the correct strength.
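A toy illustration of the first half of that point (dissipation killing a slow oscillation), using a noise-driven damped oscillator in place of an ocean mode; the equation and every number in it are mine, purely for illustration:

```python
import numpy as np

# Stochastic oscillator x'' + 2*gamma*x' + omega0^2 * x = noise, standing in
# for a slow ocean mode driven by weather noise. Increasing the damping gamma
# by a modest amount sharply suppresses the mode's variability.
rng = np.random.default_rng(0)
dt, years = 0.1, 2000
omega0 = 2 * np.pi / 60.0                    # an assumed "60-year" mode
noise = rng.normal(0.0, 0.2, int(years / dt))

def run(gamma):
    x, v, out = 0.0, 0.0, []
    for f in noise:                          # same noise realization for both runs
        v += (f - 2 * gamma * v - omega0**2 * x) * dt
        x += v * dt
        out.append(x)
    return np.array(out)

for gamma in (0.01, 0.1):                    # lightly vs heavily damped
    print(f"gamma = {gamma}: std of the slow mode = {run(gamma).std():.2f}")
```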
OK, yes, that is related to Hansen’s recent article about ocean time scales and deep mixing rates. This affects climate sensitivity and transient response rates. It may also affect aspects of multi-decadal oscillations, and maybe the equilibrium response, but how important that is remains less clear to me.
My comments are not based on Hansen’s article, but on generic issues related to getting dynamics right in complex models. They may of course agree with related comments in that article.
As my basis is generic, I have no other evidence on the quantitative importance of these considerations than what has been observed about slow oscillations and possible state shifts in real climate indicators. The problem is that models may be of little value in understanding these phenomena, even if and when we know much more about them from observational data, because they may occur under conditions where models fail, or at least where models that try to get the whole dynamics right fail. Models built to describe shorter-term dynamics may still provide better results, but they cannot be used to model arbitrarily long time spans. Such models will then also depend on correct initialization.
Pekka, I don’t think internal variations can amount to more than 0.2 C oscillations in global decadal averages, as anything more than that would require some large reservoir. Given that doubling CO2 has a magnitude ten times as large, I view these as distractions from the main argument anyway.
Willis Eschenbach has posted a black-box analysis of CCSM3 showing that the model can be reduced to:
T(n+1) = T(n) + λ F(n+1) / τ + ΔT(n) exp( −1 / τ )
with a 0.995 correlation. The formula was derived from a black-box analysis of the GISS-E model, which suggests that the models are similar and that both can be replaced by much less computationally expensive methods.
http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/#more-39907
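For reference, the quoted recursion takes only a few lines to run; the λ and τ below are placeholders rather than the values fitted in the linked post (which may also use the change in forcing ΔF rather than F itself):

```python
import numpy as np

# One-line emulator of the form quoted above:
#   T(n+1) = T(n) + lam * F(n+1) / tau + dT(n) * exp(-1/tau),  dT(n) = T(n) - T(n-1)
# lam and tau are placeholder values, not the fitted parameters.
def emulate(forcing, lam=0.4, tau=3.0, T0=0.0):
    T = [T0, T0]                      # duplicate start so the first dT is zero
    for n in range(1, len(forcing)):
        dT = T[-1] - T[-2]
        T.append(T[-1] + lam * forcing[n] / tau + dT * np.exp(-1.0 / tau))
    return np.array(T[1:])

forcing = np.linspace(0.0, 3.7, 150)  # an illustrative 150-year forcing ramp
print(f"final emulated anomaly: {emulate(forcing)[-1]:.2f} K")
```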
Fred
I’d say Willis’ black-box analysis should be a “must read” for several of the posters here.
It points out that the models are essentially worthless for making future projections and why this is so.
Max
For projections to 2050, std error = 0.1 C;
modelling the present, error = 0.4 C.
>> How does the error decrease the further out we project?
I wish the modelers would be more forthcoming about the input values that result in stability and the values that cause the model to go crazy and run out of bounds, because this is to me one of the most important results of the enterprise. There is a lot of work in the models that is certainly correct and will be of great value when some bugs are worked out.
The clearest indication that CO2 is given too much “force” in one way or another is the result of removing all CO2 from the GISS model (Hansen 2010): GMAT dropped 6 degrees, to LGM (Wisconsin) level, in ONE YEAR!
That’s a lot of heavy lifting for those 390 molecules in a million.
We now have the HadCRUT data for April: 0.405 C. If my arithmetic is correct, the remaining 8 months of the year will have to average over 0.66 C per month if 2011 is to exceed the 0.54 C anomaly of 1998.
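The arithmetic behind a statement like this is just a weighted average of the monthly anomalies. In the sketch below only the April value of 0.405 C is taken from the comment; the January–March values are placeholders, not actual HadCRUT numbers:

```python
# What must the remaining months average for the annual mean to beat a target?
year_to_date  = [0.20, 0.30, 0.35, 0.405]  # Jan-Mar are placeholders; Apr as quoted
target_annual = 0.54                       # the 1998 annual anomaly quoted above

months_left = 12 - len(year_to_date)
required = (12 * target_annual - sum(year_to_date)) / months_left
print(f"the remaining {months_left} months must average {required:.2f} C")
```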
Jim Cripwell
Your arithmetic is correct, using the 1998 annual HadCRUT anomaly as averaged from the latest published monthly values for 1998.
But the HadCRUT record is a moving target, as we know: adjusted, manipulated, variance corrected, etc. ex post facto ad nauseam.
And the published monthly/annual figures are not even internally consistent.
The latest published HadCRUT annual anomaly figure for 1998 is 0.517C (even though the averaged monthly figures would show 0.546C). The annual figure reported four years ago was 0.54C, so it has been “adjusted downward” since then.
So, on the latest basis we only need to exceed 0.624C every month through this year to exceed the 1998 record.
Still looks a bit unlikely to me, though.
But who knows what adjusting, manipulating and correcting of the past record may still occur before year-end?
Max