Raymond Pierrehumbert has written an excellent overview on infrared radiation and planetary temperature. The article was published in Physics Today and is unfortunately behind a paywall. Fortunately, Climate Clash has posted the article in full. I suspect that this article is a digest of the corresponding chapter in his new book, Principles of Planetary Climate, which is hot off the press (published December 2010). On a previous thread, Chris Colose highly recommended Pierrehumbert’s treatment of infrared radiation and planetary temperature.
I think Pierrehumbert’s article is very good, and summarizes many of the topics that we have discussed at Climate Etc. on previous greenhouse threads:
- Physics of the Greenhouse (?) Effect
- Best of the Greenhouse
- Confidence in Radiative Transfer Models
- Radiative transfer discussion thread
So, if you have followed the Climate Etc. threads, the numerous threads on this topic at Scienceofdoom, and read Pierrehumbert’s article, is anyone still unconvinced about the Tyndall gas effect and its role in maintaining planetary temperatures? I’ve read Slaying the Sky Dragon and originally intended a rebuttal, but it would be too overwhelming to attempt this and probably pointless. Has anyone else read this?
I’m asking these questions because I am wondering whether any progress in understanding of an issue like this can be made on the blogosphere. Yes, I realize there are a whole host of issues about feedbacks, sensitivity, etc. But the basics of the greenhouse effect don’t seem to me to provide much intellectual fodder for a serious debate. I’ve lost track of the previous threads (with over 1000 comments on one of them); can anyone provide a summary of where we are at on the various unresolved discussions?
I’m really wondering if we can get past exchanging hot air with each other on blogs, learn something, and collectively accomplish something? If you have learned something or changed your mind as a result of the discussion here, pls let us know.
Moderation note: this is a technical thread and comments will be moderated for relevance.
Well, if you are wondering whether any progress in understanding of an issue like this can be made on the blogosphere you can’t leave this thread as purely technical can you?
raypierre is the perfect example of what has gone wrong between mainstream climate scientists and the rest of the world, starting from the “blogosphere”. One can be the smartest kid this side of the Virgo Supercluster of galaxies, but if in one’s mindset questions are considered as instances of lèse majesté and examples for the general public are routinely simplified in the extreme, so as to make them pointless, well, one will only be able to contribute to the hot air: because the natural reaction of most listeners will be to consider whatever one says (even the good stuff) as just a lot of vaporous grandstanding.
From this point of view, SoD’s work should be more than highly commended.
The first response doesn’t bode well for Judith’s question.
Judith, the sad reality of ‘discussion’ forums on the internet is that there are many who participate simply because they enjoy arguing. The topic is of secondary importance.
Michael – Are you in denial over getting past exchanging hot air with each other on blogs, learning something, and collectively accomplishing something?
I am not sure I understand what this is all about. My understanding of what transpired on the threads of Radiative Forcing, No-Feedback Sensitivity and Climate Feedbacks is that no one understands the detailed physics of HOW CO2 causes global temperatures to rise. The physics presented by the IPCC is wrong. Until we know what is right, there is little to discuss. I don’t see that Raymond Pierrehumbert has added anything to our knowledge.
Jim, I don’t think the physics of the IPCC report are wrong, just the math. The 1.5 to 4.5 sensitivity range is wrong because you cannot just average guesses and get a correct answer. That is scientific, good ol’ boy, cypherin’
Dallas writes “Jim, I don’t think the physics of the IPCC report are wrong,”
The object of my remark is that this is precisely what I meant. The physics presented by the IPCC of HOW CO2 causes global temperatures to rise is just plain wrong.
We have
1. Gerlich & Tscheuschner
2. Tomas Milancovic
3. A complete lack of the scientific method. There is no observed, measured data. Radiative forcing has not, and probably cannot be measured. Feedbacks have not, and probably cannot, be measured. No feedback sensitivity has not been and almost certainly can never be measured.
So, IMHO, the IPCC has got the physics wrong, and any numbers quoted about how much global temperatures might rise as a result of doubling CO2 are purely hypothetical and completely meaningless.
I realise that my first sentence does not make sense. Instead of “The object of my remark is that this is precisely what I meant.”, I ought to have said “What I wrote originally is precisely what I meant”. Sorry about that.
I have come to understand that the simple model of a doubling of CO2, radiative equilibrium at TOA, and lapse rate is pretty much meaningless as a gauge of climate sensitivity and the effect a doubling would have on surface temperature.
Dr. Curry,
I’ve read his article (he referred me to it over at Dot Earth) but was not impressed. First, the paper is mistitled. It should be called “Infrared radiation and inferences about planetary temperature” because it does not compute based on any dataset of planetary temperature and so there is no way to check computations against observation.
Ray referred me to the article because I mentioned Richard Feynman and a video available at http://www.youtube.com/watch?v=b240PGCMwV0
I love Feynman because he is such a clear communicator. He uses “guess” where many scientists would say “hypothesis” – it’s the same thing. Ray’s paper demonstrates that AGW is a reasonable hypothesis, no more. In fact, his paper is very similar to the work of many of the scientists mentioned in Spencer Weart’s “The Discovery of Global Warming,” a history of scientists making estimates, mistakes, corrections and more mistakes.
In order to establish CAGW as a viable theory, one would have to use temperature measurements – preferably satellite temps or ocean heat content (as those two are the least subject to mischief) – not spectral inferences. Spectral inferences (which is what Ray uses) lead us to conclude that CO2 should likely lead to warming, but the climate system is very complex and there is a great distance between what might be and what is.
Watch the Feynman video again. If the hypothesis is that CO2 causes a change to surface temperature, tropospheric temperature or ocean heat content, the next step is to compute the consequences of the hypothesis using one of these datasets; then you compare the computation to nature. Ray’s paper did not even attempt to take these steps.
The key statement of the Feynman video is: “If it (the guess) disagrees with experiment (or observation), it is wrong. In that simple statement is the key to science. It doesn’t make a difference how beautiful your guess is. It doesn’t matter how smart you are or what his name is who made the guess – if it disagrees with experiment, it’s wrong. That’s all there is to it.”
According to Accuweather.com, currently global temps in January are below the running mean. Contemplate that for a minute. After decades of rising atmospheric CO2 and decades of rising global temperatures, we are now below the running mean. If CO2 is the primary driver of dangerous global warming, why is it that after all these decades we don’t have any global warming?
” the paper is mistitled. It should be called “Infrared radiation and inferences about planetary temperature” because it does not compute based on any dataset of planetary temperature and so there is no way to check computations against observation.”
Ron – the article itself does not cite temperature data, but temperatures (from satellite, radiosonde, and ground-based measurements) are an important input to the transfer equations, as given by the statement in the article – “the change in the flux distribution across a slab is ΔIν = eν[−Iν + B(ν,T)]”. One can’t use these equations in the absence of data for T.
The correspondence between computed and observed radiances and also temperatures is an important element in the confirmation of the basic theories. Indeed, this aspect is not truly controversial in a theoretical sense. Rather, some controversy remains as to the parametrizations utilized to avoid the computationally impractical use of line-by-line calculations in global models, and their substitution by band-averaging procedures instead. The band-averaging appears to yield results of high accuracy, but less than that achievable by LBL methods. Despite this compromise, however, the overall validity of the approach is now well confirmed.
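As an aside for readers who want to see how temperature enters these calculations, here is a minimal sketch of stepping the quoted slab relation, ΔIν = eν[−Iν + B(ν,T)], through a stack of layers. The frequency, layer temperatures and emissivities below are made-up illustrative values, not numbers from the article:

import numpy as np

# Physical constants (SI)
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def planck(nu, T):
    # Planck spectral radiance B(nu, T), W m^-2 sr^-1 Hz^-1
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def upwelling_radiance(nu, T_surface, layer_T, layer_e):
    # March surface-emitted radiance upward, applying
    # Delta I_nu = e_nu * (-I_nu + B(nu, T)) slab by slab.
    I = planck(nu, T_surface)
    for T, e in zip(layer_T, layer_e):
        I = I + e * (-I + planck(nu, T))
    return I

# Made-up example: a frequency near the 15-micron CO2 band, a 288 K surface,
# and five cooler layers, each with modest emissivity in this band.
nu = c / 15e-6
layer_T = [280.0, 270.0, 255.0, 240.0, 225.0]
layer_e = [0.3] * 5
print(upwelling_radiance(nu, 288.0, layer_T, layer_e))

The toy example makes the point concrete: the radiance emerging at the top depends on the layer temperatures through B(ν,T), so observed temperature profiles are inputs to the radiative calculation, not merely outputs of it.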
Fred,
It is entirely possible I am missing some real scientific advance here, but your comment has not yet convinced me. The fact Ray’s conclusions are considered good theory is clear. But that is exactly my point. Ray has not provided, so far as I can see, any way to confirm or falsify the hypothesis through observation of actual temperatures. If you think he has, please provide me with computations of the consequences of the theory (projections) measured in either satellite data or ocean heat content.
As a skeptic, it is my position that no one knows how much of 20th century warming was natural and how much was anthropogenic. If Ray has determined a way to make that call, it would be a real advance. I don’t see it.
Ron – I probably wasn’t completely clear in my comment. What has been confirmed by observational data are the amounts of IR heat energy descending from the atmosphere to the ground, or ascending into space from the top of the atmosphere – in each case in the wavelengths relevant to CO2 and water vapor absorption and emission. This validates the radiative transfer equations, their incorporation into computationally practical modalities, and the quantities derived from them. It does not, of course, answer questions as to how other mechanisms affect heat flux (solar changes, volcanic eruptions, aerosols, etc.), nor does it address the quantitation of feedbacks. My point is that atmospheric CO2 and water behave observationally as predicted, and so it would be incorrect to assert that this particular element of climatology lacks confirmation. The other factors have been, and will be, topics of other threads here and elsewhere. In echoing Judy’s point, I believe it is these other elements of climate where the uncertainties are more in need of resolution.
Fred,
Thank you. Then you are confirming the situation is as I thought. The paper is mistitled. It should be “Infrared radiation and inferences about (or forcing on) planetary temperature.” While it is nice to know certain components of climate physics have been confirmed by observation, it is completely wrong to say “We know the basic physics.” There are far too many complicating factors to make such a bold and unwarranted claim, because then we end up discussing Trenberth’s travesty of missing heat.
The thing about a running mean of a noisy quantity is that the quantity stays on each side of it about half the time.
Does tend to do that. If the data set is long enough, you can be confident about what the mean is, doncha know.
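A quick numerical illustration of that point, using plain white noise rather than any temperature record:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                       # noisy series with no trend
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
print(np.mean(x < running_mean))                  # comes out near 0.5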
Let me state this another way. If your hypothesis (guess) does not allow you to compute the consequences of the guess in a manner that can be compared to observation, it is not science. At least it is not a developed science. Ray’s paper does more to point out the shortcomings of climate science than to provide any new insight.
Judy – I believe this summary by Pierrehumbert only partially overlaps his coverage of radiative transfer in his book (due out this week), and radiative transfer is itself only a fractional component of the entire book. I base this on a near-final draft of the book that I already have. The summary captures the essence of radiative transfer in abbreviated form, but also adds empirical data (e.g., from the AIRS and ground-based spectroscopy measurements), and addresses common fallacies surrounding radiative transfer, such as the saturation argument. These last items are not in the draft, and may not be in the book unless recently added. The book itself addresses the quantitative aspects of radiative transfer, the Schwarzschild equations, and their adaptation to a computationally practical methodology in some detail.
The question still is how much, not how. Referencing Arrhenius (1.6) and Manabe (2.0), sensitivity should be in a range of roughly 1.2 to 2.6.
The article is a good framework in which to frame the AGW question.
IMHO ICBST AGW is true without the need for another theory or hypothesis, just continue to collect data.
I’ve read Slaying the Sky Dragon and originally intended a rebuttal, but it would be too overwhelming to attempt this and probably pointless.
Just to be certain, can you give a bigger hint as to which way this would have gone?
Between SoD and here, I think I understand the basic physics much better than before. If what you want to derive as a basic agreement is this: “More CO2 causes long wave radiation to be deflected as it leaves the earth’s surface and thereby causes the air and underlying surfaces to be warmer than would be the case with less CO2”, then you have accomplished something with me.
If you want to go on and say that increasing CO2 this year and next means that planetary temperatures are warmed by some calculated number of degrees, then I haven’t seen that evidence. Speculation based on computer models and speculation about correlation between recently increasing temperatures and increasing CO2 do not constitute evidence.
As a software developer with 40+ years of experience, some of that with complex computer modeling programs, and a lot of experience debugging and validating complex logic structures, the idea that you can reliably trust much of anything output from programs that are constantly being changed, have no apparent validation or configuration control and nothing resembling a quality assurance plan, is just daffy. I personally was involved in ‘tuning’ complex models and know very well that even the developer (maybe especially the developer) cannot be trusted to know which changes to the logic make what difference in the validity of the result. The statement by a PhD scientist that the program does A and B, plus a few dollars, will buy you a cup of coffee at Starbucks. Absent a formal methodology for testing an unchanging set of logic against static input and validating the predictions against measurements over substantial years, these models will never move out of the class of research tools and competing forecast techniques. By the time any one of them can be validated, none of the people involved will be young enough to care.
Not to say that the experiment isn’t worth continuing with the expectation that models which come somewhere close to predicting actual measured climate behavior across complete cycles of the various known naturally varying cyclic influences (Ocean currents, changes in the sun, changes in the orbit and orientation, etc.) may someday produce valuable results.
The constant drumbeat from CAGW true believers that we must act immediately to end CO2 emissions should be regarded as just so much urban noise, but since the perpetrators have high positions in Government and Educational Institutions, presumably the noise will continue until it dies out naturally or a replacement Government shows them the door.
For what it is worth, the concept that “Planetary Temperatures” can be predicted decades in advance by models and theories that cannot predict them month to month and year to year is also daffy. The complete lack of ability to forecast next summer and next winter mean to me that the variables are simply not known or understood well enough to trust predictions for 2020 or beyond. The unknown unknowns clearly rule the theoretical environment of today, and it doesn’t appear that significant resources are being directed at discovering them. Most of the money goes into stronger and stronger drum beats for the choir that sings at CO2 true believer’s religious events.
“For what it is worth, the concept that “Planetary Temperatures” can be predicted decades in advance by models and theories that cannot predict them month to month and year to year is also daffy.”
That may be an exaggeration, but you’re correct that climate models have been shown empirically to perform better over multiple decades than over short intervals. It’s therefore not “daffy”, but an inherent property of the observation that chaotic elements in climate tend to operate on shorter timescales than the more stable forces underlying long term trends, and appear to even out over the timescales of particular interest (e.g., many decades to a few centuries), although how complete this averaging is remains controversial. This has been discussed in detail in several previous threads on models, but if you believe important facets were previously overlooked, this thread would be a good venue to discuss them, and I would be interested to read what you have to say.
Fred,
I think the daffy part is assuming that averaging a chaotic system over a century is sufficient to claim you understand any underlying trend. As those of us who have developed modeling software for much simpler systems know, the period you are modeling must be many times longer than the period you are projecting, and then you call the results a guess, and make plans in case it is wrong. I’m not seeing that same respect for the difficulty in modeling real time systems from climate modelers. Dr. Curry’s statements about uncertainty highlight that difficulty.
The uncertainties lie in the quality of the data used for input for calculations, the magnitude of perturbing factors, the correct application of those perturbations, and interpretation of the model run outputs. Everything else looks OK. I think.
So except for all the important input factors, they have done input factors well.
My guess is that another 20-40 years of experimental modeling might allow the modelers to predict out ten years or so with better than a 50/50 chance of making useful predictions.
As those of us who have developed modeling software for much simpler systems know, the period you are modeling must be many times longer than the period you are projecting, and then you call the results a guess, and make plans in case it is wrong.
Wow, someone who thinks like me on that point. This is disappointingly rare on Judy’s blog.
Fred Moolten “climate models have been shown empirically to perform better over multiple decades than over short intervals”.
Clearly I didn’t speak clearly. I said exactly the opposite. Where is the evidence that climate models have been shown empirically to perform well at all over any interval of any length? What is your definition of “perform well”? Hansen’s model predicted the streets of New York awash in salt water. No model predicted the flattening of temperatures since 1998. No models predict much of anything of substance in any publication I have seen; except that maybe some of them predict the past with only minor errors.
“Hansen’s model predicted the streets of New York awash in salt water. …”
Link?
According to the link below, the comment was made to Bob Reiss when he was doing research on a book called “The Coming Storm.” It was the West Side Highway (that runs along the Hudson River) that would be under water.
The link is : http://dir.salon.com/books/int/2001/10/23/weather/index.html
More discussion about it also can be found at other sites (climateobserver, WUWT) by doing a search for key word (Hansen interview west highway under water).
How is that the result of a climate model?
When did he say this will happen?
Within 20 or 30 years. And remember we had this conversation in 1988 or 1989.
Link to Hansen’s own writing where he states he thinks parts of Manhattan would be underwater by 2011.
Two years ago, Hansen told Barack Obama he had four years to save the world. See http://www.guardian.co.uk/environment/2009/jan/18/jim-hansen-obama
It seems Hansen has given up on Obama. He recently said China was the best hope to save the world. See http://www.examiner.com/climate-change-in-national/top-nasa-scientist-says-china-is-best-hope-advocates-trade-war-with-u-s
Hansen’s book “Storms of My Grandchildren: The Truth About the Coming Climate Catastrophe and Our Last Chance to Save Humanity” makes it clear he is talking about human extinction.
Do you really doubt the comments made by Bob Reiss?
Do you really doubt the comments made by Bob Reiss?
I sincerely doubt that Hansen would say something to Reiss, who is not a scientist and who could easily have misconstrued the conversation, that totally contradicts Hansen’s scientific writings. In them he has said in history the large ice sheets typically take thousands of years to melt, but because of anthropogenic global warming, they could melt in timescales measured by centuries, and that in history there have been episodes of rapid ice melt. There is simply nothing in the history of James Hansen’s scientific career to support the silly notion that he could be so stupid as to believe melting could be so aggressive it would inundate parts of Manhattan in 20-to-30 years.
I think what is most likely is he was trying to tell Reiss that if mitigation efforts failed to be implemented in 20 to 30 years, Manhattan would eventually flood as that is perfectly consistent with what he always says, and I think this Reiss just blew it.
Do I think James Hansen believes ice sheet disintegration will go nonlinear? Yes, he clearly believes that. There is, that I know of, no computer modeling of nonlinear melting in the literature. What Hansen has speculated is that nonlinear melting could lead to SLR of 5 meters by the end of this century.
That is obtuse, JCH.
Perhaps, but more likely it’s cognitive dissonance –
http://web.mac.com/sinfonia1/Global_Warming_Politics/A_Hot_Topic_Blog/Entries/2008/8/19_Cognitive_Dissonance.html
http://web.mac.com/sinfonia1/Global_Warming_Politics/A_Hot_Topic_Blog/Entries/2008/8/20_More_On_Cognitive_Dissonance.html
We’ve seen a lot of it here lately.
No it’s not. A detective novelist interviewed Hansen. He thinks he heard Hansen say something that appears totally absent in Hansen’s vast peer-reviewed and non-peer-reviewed writings. What he thinks he heard sounds oddly similar to something Hansen has said many many many times.
What most likely happened? Answer: the detective novelist misunderstood the conversation.
His theoretical call to confirm he heard Hansen correctly:
“Jim, do you still think if we don’t do something in the next 20 to 30 years, parts of Manhattan will go under saltwater?”
Unaware of the man’s confusion, Hansen could easily confirm that question.
Hansen’s writings consistently say that we have a period of time to act to avoid future negative consequences. The period of time is a range that depends upon what humans choose to do now. The negative consequences are mostly in the future. He has written some very aggressive things on SLR. He clearly believes melting will go nonlinear in this century, and his estimate is 5 meters by 2100, a significant majority of which will not be seen until after 2050.
This reinforces my belief that he could not possibly have said Manhattan would be under water by 2011.
Yep -cognitive dissonance, JCH. Rewriting history is one of the symptoms.
JCH,
you are attempting to integrate the science James Hansen has done with his activist pronouncements. It won’t work. Here is the quote in question:
While doing research 12 or 13 years ago, I met Jim Hansen, the scientist who in 1988 predicted the greenhouse effect before Congress. I went over to the window with him and looked out on Broadway in New York City and said, “If what you’re saying about the greenhouse effect is true, is anything going to look different down there in 20 years?” He looked for a while and was quiet and didn’t say anything for a couple seconds. Then he said, “Well, there will be more traffic.” I, of course, didn’t think he heard the question right. Then he explained,
“The West Side Highway [which runs along the Hudson River] will be under water. And there will be tape across the windows across the street because of high winds. And the same birds won’t be there. The trees in the median strip will change.” Then he said, “There will be more police cars.” Why? “Well, you know what happens to crime when the heat goes up.”
He obviously was scaremongering, as is his habit with the media, a la “Coal Trains of Death”…
(someone claiming to be quoting Hansen) “The West Side Highway [which runs along the Hudson River] will be under water. And there will be tape across the windows across the street because of high winds. And the same birds won’t be there. The trees in the median strip will change.” Then he said, “There will be more police cars.” Why? “Well, you know what happens to crime when the heat goes up.”
Nicely illustrating the point that people can put any words they like in other people’s mouths when there’s no evidence either way to support or refute them. Amazingly enough there are people who take every such alleged quote as gospel truth.
When a tree falls and no one hears it, does it make a sound? And when you quote people and they’re not there to refute you, have you added anything to what we know?
Garbage.
Here is the abstract of a Hansen paper from 1981…
Hansen et al. 1981
Hansen, J., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, 1981: Climate impact of increasing atmospheric carbon dioxide. Science, 213, 957-966, doi:10.1126/science.213.4511.957.
The global temperature rose 0.2°C between the middle 1960s and 1980, yielding a warming of 0.4°C in the past century. This temperature increase is consistent with the calculated effect due to measured increases of atmospheric carbon dioxide. Variations of volcanic aerosols and possibly solar luminosity appear to be primary causes of observed fluctuations about the mean trend of increasing temperature. It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980s. Potential effects on climate in the 21st century include the creation of drought-prone regions in North America and central Asia as part of a shifting of climatic zones, erosion of the West Antarctic ice sheet with a consequent worldwide rise in sea level, and opening of the fabled Northwest Passage.
I would say some of that has come true.
This exchange is recapitulating earlier exchanges regarding climate models, and so I recommend that readers review those several relevant threads to avoid repetition here. Many models have performed “hindcasts” showing that only GHG forcing added to other variables could reproduce observed trends. In addition, however, Hansen’s published 1980s models were predictive in nature, and the predictions for CO2 emissions scenarios that actually transpired were reasonably good, but too high. The main reason was that his climate sensitivity estimates were higher than those now known to be more probable, and if the current inputs responsible for the lower estimates had been used, his results would have been quite accurate. (Note that since models aren’t “retuned” to make them reproduce observed trends – that’s not legitimate – this is a hypothetical case, but still one that is informative about the potential predictive value of models for long term trends).
The limitations as well as the utility of GCMs for long term prediction have been discussed extensively in the previous threads, and if the topic of this thread is radiative transfer specifically and/or its relationship to the value of this blog in gaining new understanding, I wonder whether it wouldn’t be more useful to discuss model performance in the earlier threads devoted to that topic unless important new information about models can be offered here.
Fred, I’m not sure I agree. I was of the distinct impression that his models’ predictions were off by some degree.
Also, if he modifies the climate sensitivity to be more accurate in the predictive areas, surely that will result in him being ‘off’ in the hindcasting.
The hindcasting models were not the ones Hansen used, but rather improved versions with more accurate input values. His trend estimates were, as you say, too high, but not extraordinarily off the mark. In fact, his predictions for the case of a CO2 emission rate scenario that was lower than the actual rates yielded a trend that closely matched the observed temperature trend.
For clarity, GCMs don’t use climate sensitivity as an input. Rather, in the case of Hansen’s earlier models, the input data he used yielded a climate sensitivity to CO2 as a model output that current data inputs show to be too high. If more current inputs had been used, the early Hansen model trend estimates would have been fairly accurate.
Can you define ‘not extraordinarily off the mark’?
“GCMs don’t use climate sensitivity as an input”
Understood - I was unclear - I meant that if he input modified data/criteria to ‘tighten’ the predictions, it would ‘loosen’ the hindcasting.
“Rather, in the case of Hansen’s earlier models, the input data he used yielded a climate sensitivity to CO2 as a model output that current data inputs show to be too high. If more current inputs had been used, the early Hansen model trend estimates would have been fairly accurate.”
This is quite confusing. Are you trying to say that the data he used to program the models was ‘off’, so that if he had used ‘better’ data then he would have been right? Or are you saying that the parameters he used were wrong/inaccurate, in which case you’ve argued my own point for me.
Can you be clearer?
I’m still doing a poor job explaining. Hansen’s early model utilized model-based estimates of CO2 forcing and feedback that are higher than inputs used today. The earlier inputs lead to climate sensitivity values for doubled CO2 of about 4.2 C, whereas mid-range estimates today are about 3 C. With current inputs, his model would have very closely matched observed trends. With the higher inputs, his trend estimate was too high, but not by an exceedingly large amount. Indeed, his scenario B based on CO2 emissions that are not very different from those that occurred yields a reasonably good match. Only CO2 emissions much higher than those observed create a large disparity – Hansen Models
Current models are still far short of optimal, but perform better than is sometimes claimed.
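For what it’s worth, the rescaling argument can be sketched to first order by scaling a projected trend by the ratio of the two sensitivities. The 0.30 C/decade figure below is a placeholder, not a value taken from Hansen’s published scenarios, and transient warming does not scale exactly linearly with equilibrium sensitivity:

old_sensitivity = 4.2    # C per CO2 doubling, 1988-era model output
new_sensitivity = 3.0    # C per CO2 doubling, current mid-range estimate
modeled_trend = 0.30     # C per decade, placeholder value for illustration

rescaled_trend = modeled_trend * (new_sensitivity / old_sensitivity)
print(round(rescaled_trend, 2))   # ~0.21 C per decade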
Monkey,
Let me see if I can help. In the version of the model Hansen used there were at least two key parameters/processes that have been improved. The ocean and the sulfur cycle.
There really is no point whatsoever in pointing back to Hansen’s old projections, except to point out that GIVEN the data at the time and the MODEL at the time, it did a fair job of predicting the future. As any engineer who builds physics models will tell you, it’s a never-ending process. Perfection ain’t in the cards.
Plus Hansen’s model is so OT, I don’t know why you even bring it up
Fred –
Many models have peformed “hindcasts” showing that only GHG forcing added to other variables could reproduce observed trends.
If you remove the word “only” from that sentence then it would make sense to me. But a model that will reproduce observed trends “only” due to GHG forcing added to other variables is a model that was designed to produce those specific results.
That’s not correct, Jim. The models were “tuned” to the existing climate, but what they projected when the modelers then added increasing CO2 to the input was outside the control of the modelers. It came out the way it came out, and if it had come out differently, there would be nothing the modelers could have done about it. The only recourse would be to develop completely new models – this process is an ongoing one, but it takes much time and money, and is not something that can be done simply to make a result come out better.
(I would add that climate models do a much poorer job with certain other climate variables, such as ENSO. If that could be fixed simply by retuning, it would have happened long ago.)
Fred –
Can’t agree – As you say, they were tuned to the existing climate – with specific assumptions built into the “tuning”. How many other possible factors were not built into the models because they were “assumed” to be non-contributing (cosmic rays) or insignificant (solar variation) – or non-existent? Or simply because the mechanism was not sufficiently defined (clouds perhaps)? ONE way to tune the models is to assume that GHGs are THE main factor (not that I’m contesting that assumption at this time). There are others that I believe have not sufficiently been tested.
I was a spacecraft science instrument engineer in the mid-60’s. The Asian Brown-cloud was barely known back then, but it showed clearly in satellite photos. And clouds were a known unknown. They still are. Do you believe that either of those are accurately represented by today’s models? Or that the effects are understood?
Jim – model parametrization is a challenge, but the tuning of the models to the existing climate did not dictate how they would respond to CO2. I don’t see any fair way to negate the conclusion that the models performed well in demonstrating a critical role for CO2 in the rising temperature trend.
Regarding brown clouds, the combined warming effects (from the black carbon components) and cooling effects (from organic carbon and other aerosols) are reasonably well understood and represented in current models, thanks to the pioneering studies of Ramanathan et al involving atmospheric measurements at a multiplicity of different altitudes. You are right that they were poorly represented in earlier models.
Fred,
Statements about what is predicted by the models and what is a result of conscious or unconscious tuning are notoriously difficult.
I can easily believe that it would have been much more difficult for the modelers to explain the recent warming without CO2 and I have no difficulty in believing that CO2 has contributed significantly. Still I do not give very much weight to the claims of impossibility. It is quite typical for a modeling process to add new features until an agreement is reached with empirical observations. At that point the model cannot be brought into agreement without the last addition – or without some other addition or modification that the modeler would have been ready to do, if needed for reaching an acceptable agreement.
The more extensively the modelers try all possible alternatives that they can invent, the more weight one can give to statements about the necessity of a particular factor, but complicated models can never provide straightforward proofs of necessity. In the case of climate models I have little trust that everything possible has been tried. Furthermore, the models are badly constrained by the requirement that they must be stable enough to give any results at all. This limits quite severely the choices available to the modelers. In particular, it may make it presently impossible to describe in the model some dynamics that are present in the real earth system.
Models are very important tools, but complicated models of a dynamic system may fail badly in describing some features of the real world and the same limitation may apply to all models making their agreement with each other a less useful proof of their validity.
In the case of the earth system (atmosphere + oceans + others) there is considerable evidence of oscillations or mode switching which cannot be modeled successfully. This is evidence that the concerns I presented above are not empty theorizing but probably a real and serious limitation of the modeling approach.
FM, see what happens when an actual modeler gets his hands on the DIY kluges favored by The Team?
Fred –
I don’t see any fair way to negate the conclusion that the models performed well in demonstrating a critical role for CO2 in the rising temperature trend.
Actually, I haven’t seen much evidence that the models have performed all that well. For example –
http://rankexploits.com/musings/2010/multi-model-mean-projection-rejects-gisstemp-start-dates-50–60-70–80/
As for the model parameterizaton, Pekka Pirilä’s post covers most of that well so I won’t pursue it right now. But things like this keep on showing up –
http://www.thehindu.com/todays-paper/tp-national/article1107174.ece
If true, it could be a game-changer. Is it true? Who knows. But if it is and it’s ignored, we could have another 20 years of wandering in the wilderness. :-)
The performance of the models has been discussed previously, including Hansen’s early model that performed well in predicting temperature trends (see links elsewhere in thread) and would have performed perfectly if the input data in use today (which yield a climate sensitivity estimate of about 3 C) had been used instead of earlier data that lead to a climate sensitivity estimate of 4.2 C.
As to a news report of a paper in an Indian journal not devoted to climate science (the paper itself was not presented), I have little to say except that the news item claimed that the conclusions about cosmic rays were based on “calculations”. If that is true, then the paper has little to offer. If the authors, however, went out and acquired significant new data, the results might be worth examining, although I suspect they would appear in something other than the journal they are in.
Actually, just subtract out the warming trends since the Ice Age and Little Ice Age, and the anomalous La Nina surge, and Hansen is left with bupkis to support his prediction.
And the reason people continue, Fred, to bring up this outdated “old stuff” is called “validation”. The only models eligible to be tested on the last 20 yrs.’ data are the ones that were extant at the time. The hindcasting performance of subsequently tuned efforts is irrelevant.
Excerpt:
Oops! Of course, as Fred says below, it’s merely India(n) research, so can be safely disregarded until The Team approves it.
Perhaps I should add that I do not mean that modeling is not useful; it is an important tool in learning about the atmosphere and wider earth systems. I do believe that a lot has been learned from the models.
The problem is that it is extremely difficult to estimate how reliable the models are. There may be reasons to think that certain model results are likely to be true, but there is no proof of that without empirical confirmation. When using models, we may know how to estimate the uncertainties from certain known sources, but we do not know how much other uncertainty remains. When comparisons between models are used to estimate the accuracy, we get a feeling for errors from features where the models differ, but no knowledge of errors that are common to all models.
If the properties of the real earth system are stable enough, in a sense important for climate issues, then we are likely to be able to model it well; but if its dynamical properties go outside certain limits, then modeling becomes much more difficult, and it may be that all models will fail in this important respect. My intuitive feeling is that there are important open issues of this type in judging the capabilities of climate models.
The climate models have shown great skill in describing a large number of details of the climate, but they are at the same time incapable of explaining some other features. What their skill is when used outside the domain of empirically confirmed validity remains largely unknown.
I should add that I have not used climate models myself, and have read only one introductory textbook (Washington & Parkinson) and several review articles about them. Thus my claims are based on knowledge of other, somewhat comparable modeling problems. I have noticed that many active climate modelers have expressed somewhat similar views, but I cannot speak to the mainstream views as expressed in the internal scientific communications of the climate modeling community.
“(I would add that climate models do a much poorer job with certain other climate variables, such as ENSO. If that could be fixed simply by retuning, it would have happened long ago.)”
Now what are you exactly saying in your last sentence :).
Seriously though its a lot more than just ENSO. Even bias in temperatures is a problem – but that’s not for this thread.
What is on topic is the extent to which the understanding of infrared radiation helps materially in modeling temperatures at the surface, at the TOA and in between. And perhaps more important: how significant is the variation arising from infrared radiation in comparison with other sources when undertaking these tasks?
At the TOA it is very significant – in fact this makes this boundary attractive to model for just this reason.
Modeling at the surface is much more complex, so modeling at the top of the ocean might be a better bet even if it relies much less on this body of knowledge.
But trying to model the whole atmosphere (apart from trying to predict weather) – would you really want to start there?
A paper on WUWT posits that a flat line is within statistical error in the instrumental temperature record. What does this do to ‘hindcasting’ and model tuning? What does it imply for the remains of the hockey stick chart? Really, it appears that with just a little more time, the global warming threat will melt like ice in Alabama in the Spring.
http://wattsupwiththat.com/2011/01/20/surface-temperature-uncertainty-quantified/
There is a site that sums up aspects of bad science:
http://www.catchpenny.org/patho.html
It strikes me that much of what the climate consensus supports is well described at this site.
“shown to perform better over multiple decades than over shorter timescales” — by what? Hindcasting? Curve-fitting?
Gimme a break.
Fred,
As long as climate scientists are still reduced to only stating that something was not unexpected after it has happened, your assertion that AGW is better at predicting decades than the short term is a little bit embarrassing.
Judy – You ask whether we who participate here have learned from the experience and whether we have changed our views. The two are not synonymous, of course, and it’s doubtful that many active participants have radically switched positions.
As to learning, however, I can only speak for myself. I have learned much – most of it from linked scientific sources, including not only the current article by Raypierre, but even more from sources on radiative transfer you have cited and those cited by others on this topic as well as sources with data on water vapor and other feedbacks, and sources on climate modeling.
I have learned less from individual comments, although it would be wrong to say I’ve learned nothing. Certainly, I’ve learned from errors I’ve made that were corrected by others.
I’ve also learned a great deal from trying to explain climate science principles to other participants who expressed a different point of view. There is nothing like trying to teach someone something as a way of discovering that you don’t understand it completely yourself. In the process of doing that here, I’ve been able to refine and reinforce my understanding of important points, often by having to look up data as a means of doing that. Fortunately, most of the “refinement” has occurred before I committed myself to a posted comment, but there have been a few examples where the process was reversed. Still, it’s a learning experience that I have found valuable.
Fred,
You have only strengthened your own views towards temperatures and modelling.
Is all other data irrelevant?
Climate science has ignored a great many physical changes in favor of a mathematical formula or model.
In doing so, they have missed the movement of ocean currents, salt changes and many atmospheric events that are not recorded by temperatures.
“…is anyone still unconvinced about the Tyndall gas effect and its role in maintaining planetary temperatures?”
Radiative transfer only accounts for an aspect of the climate system and not the dynamics. Water vapor as a transport mechanism overwhelms other atmospheric gases and also alters transfer in a variety of ways.
If the topic is climate change with the idea of forecasting, accurately accounting for atmospheric circulation is a must. Has anyone run across a refined circulation model that accurately accounts for the system?
Water Vapor and the Dynamics of Climate Changes
http://arxiv.org/pdf/0908.4410
What controls the static stability of the subtropical and extratropical atmosphere?
The lack of a theory for the subtropical and extratropical static stability runs through several of the open questions. Devising a theory that is general enough to be applicable to relatively dry and moist atmospheres remains as one of the central challenges in understanding the global circulation of the atmosphere and climate changes.
The global mean temperature data for 2010 is out.
It is 0.475 deg C.
The previous maximum of 0.548 deg C for 1998, 13 years ago, has not been exceeded.
http://bit.ly/f2Ujfn
Global warming has stopped for 13 years, and we continue to count the number of years that the previous maximum has not been exceeded.
The number now is 13!
How many more years are required to declare global warming has stopped?
2? 5? 10?
Monthly global mean temperature for December of last year has dropped by about 0.281 deg C from its maximum of 0.533 deg C for 1997. That is nearly half of 20th century warming.
http://bit.ly/f2Ujfn
There is no global warming.
Is global cooling on its way?
Judith,
Some EXTRAORDINARY events are occurring as we speak.
If you put the satellite map of cloud cover over the map of the ocean surface temperatures, you get areas of vast evaporation in the Arctic regions where there is warm water in the oceans.
Dense cold air going over warm water is generating an extraordinary amount of precipitation.
This event has lowered the ocean levels that were rising previously due to the water transfer(WUWT ocean levels falling).
I was on board with the Tyndall gas effect for quite some time – since I read about it in my elementary school library’s science section.
It was called a greenhouse effect then, but Tyndall gas seems more relevant today.
What none of the excellent writers, from SoD to Raymond Pierrehumbert, connect is
1 - the workings described and the predictions of doom by so many climate scientists, and
2 - how this works in large places like an atmosphere.
Hunter,
Climate Science has no real clue as to how a round planet with rotation operates and all the interactions in an enclosed biosphere.
General physics does not apply straightforwardly to a system that is so vastly complex.
Theories are easier to manipulate into order than following the physical evidence. So, physical evidence is ignored.
Hence the problem of not knowing how to interpret physical evidence. Physical evidence is not a temperature number nor an equation.
Judith, no scientist that I know disputes the Tyndall gas effect. I guess I am somewhat surprised by your question. Is that what we have been disputing all of this time? I don’t think so. What we are really trying to ascertain is whether anything that humankind is belching into the atmosphere is responsible for the hysterical claims by some that it is Man, and Man alone, that is directly responsible for climate Armageddon via man’s unconscionable oxidation of carbon.
We all know that climate was different in the past than it is now, and it will be different in the future. It is only a question of attribution. We’ve had ice ages and warmer periods in our historical record, and does anyone doubt we will have them in the future? Is it true that ice ages occur in 100,000-year cycles, and is it also true that there have been previous periods that were warmer than the present despite CO2 levels being lower than they are presently? Despite what Mann et al. say, we’ve had the medieval warm period as well as the little ice age. We are now warming from the LIA, but very slowly and in small increments. Other questions:
• Has there been any significant warming during the past 14 years despite CO2 rising?
• Is CO2 the only polyatom responsible? We know other polyatomic molecules such as CH4 are ten times more potent than CO2. What is the potency of N2O, NO2, O3, Freon, and H2O? Is there synergy amongst these polyatomic molecules?
• What is the role of clouds?
• If positive feedback exists, would it not be evident by now? Is there a conflict between data and models?
• Does anyone deny that the level of heat in the debate would be much lower if the cAGW group did not adopt the Hockey Stick with such vigor (a stick that has been completely discredited by many others, especially McIntyre)?
• If it is true that we might warm 1 degree in the next century, because of natural cycles or even AGW, has any reputable NGO, scientific team or Gov. agency discussed the potential societal benefits of slight warming? Has any money been directed towards these studies?
• A great many people feel that the quality of the land temps, despite some agreement with other datasets, is less than reliable. The models depend on scrupulous data quality. Are we comfortable with Hansen’s control of at least one major dataset? What are we to make of the Jones/Wang UHI debacle? Quality data – I don’t think so.
• Statistical quality: I am still amazed that climate scientists don’t make adequate use of professional statisticians. I am not sure most people understand the methods employed in the development of a new drug. Every pivotal Phase 3 trial conducted requires the establishment of an independent Statistical Advisory Board (SAB) when a trial is first contemplated. Before patient one is enrolled, the protocol has to be written and approved by the sponsors in concert with the FDA and the SAB. Obviously the trials are double blind and the only group that can break the code is the SAB. It is amazing that even with the rigor that goes into these trials, 50% of them cannot be repeated. It is shocking that we are contemplating reordering the world’s economy based on academic publications whose statistical rigor is so low that if they were a drug development project, the FDA would not allow them to begin even pre-clinical studies.
The Tyndall effect is agreed. What’s next?
The evaporation doors are wide open yet no-one whispers the name Ice Age for fear of being classed a nutbar.
The sun heats the planet and it is the atmosphere that figures out what to do with this heat with the help of changing oceans.
I would like to take this quote from Raymond Pierrehumbert article as a point of departure. He says:
“The atmosphere, if CO2 were removed from it, would cool enough that much of the water vapor would rain out. That precipitation, in turn, would cause further cooling and ultimately spiral Earth into a globally glaciated snowball state.10 It is only the presence of CO2 that keeps Earth’s atmosphere warm enough to contain much water vapor. Conversely, increasing CO2 would warm the atmosphere and ultimately result in greater water-vapor content—a now well-understood situation known as water-vapor feedback.”
___
Now personally, I think this is an excellent summary of the technical details of the role of CO2 as a GH gas in a very non-technical way. And so, as you can guess, I’m a “warmist”, and happen to think that AGW is real. To what degree it is happening is another issue entirely, but it is happening. Now then, so very many of the so-called AGW skeptics immediately launch into very long diatribes about how CO2 is only a minor GH “trace gas” that is completely logarithmic in its GH behavior and so could be at 1800 ppm and not be much of a problem, etc. They essentially gut the central role of CO2 in climate regulation. Even doubting the entire rock-weathering carbon cycle and its role in moderating CO2 levels through the negative feedbacks. How could there be a chance for an honest discussion if one side simply ignores or refuses to accept what the other side sees as one of the greatest physics accomplishments of the 20th century (i.e. the integral role of CO2 in the regulation of the climate)? Where is there a chance for common ground when the two sides see the science so differently?
R. Gates: “Now then, so very many of the so-called AGW skeptics immediately launch into very long diatribes about how CO2 is only a minor GH “trace gas” that is completely logarithmic in its GH behavior and so could be at 1800 ppm and not be much of a problem, etc”
I personally have not seen serious skeptics make this claim, but it is a nice straw man for you to destroy. You could have stopped after “To what degree it is happening” and included most of the skeptics I know in your position. It would then be possible to deal with common ground, if that is what you want to do.
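One way to put rough numbers on the “logarithmic, so 1800 ppm is no problem” claim is the widely used simplified expression for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m2 (Myhre et al. 1998), combined with a no-feedback response of roughly 0.3 C per W/m2. This is a sketch of the arithmetic only; feedbacks, the genuinely contested part, are deliberately left out:

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Simplified CO2 radiative forcing (Myhre et al. 1998), W/m^2
    return 5.35 * math.log(c_ppm / c0_ppm)

no_feedback_response = 0.3   # C per (W/m^2), approximate Planck response

for c in (280, 560, 1800):
    dF = co2_forcing(c)
    print(c, "ppm:", round(dF, 2), "W/m^2,",
          round(dF * no_feedback_response, 1), "C no-feedback warming")

Logarithmic does not mean negligible: the forcing at 1800 ppm comes out near 10 W/m2, well over twice that of a single doubling.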
Well, I saw a congressman ask this stupid question of Lindzen.
The trace gas argument is usually made by commenters who heard it somewhere. It’s annoying to see it used. really annoying.
Steven,
You would think physical evidence would trump a theory.
But it doesn’t especially when that evidence has been ignored for years.
The movement of the ocean currents have now opened the evaporation doors to full tilt in the Arctic. Dense cold air over top of nice warm currents.
Joe.
Mosher:
I saw that too Steve and I’m in another jurisdiction. It is annoying and disappointing. That’s why places like Climate Etc. matter. Not just for the basics of Tyndall gases but the (attempt at) integration with rational policymaking.
The fact that some only want to argue doesn’t negate the help provided for those that are more open-minded.
One approach might be to have an extended discussion solely on the Tyndall gases, where OT comments are snipped away and those who think there is an issue there can get the information they need.
One issue is you start to make progress explaining something and a side fight springs up about Hansen’s models or the iron sun or G&T.
The other issue is that people are not committed to a process where saying “I’m wrong” is allowed. I see this on both sides. Nobody will allow anyone on either side to make a mistake.
The Hansen’s model deviation was caused by me, and I apologize. I saw this as false, and should have just let it go:
Hansen’s model predicted the streets of New York awash in salt water.
Please don’t u-tube me.
Is what he predicted false (obviously), or is it false to claim that he said it (he did), or is it false to point out that he in fact said it?
Just wondering.
“…—a now well-understood situation known as water-vapor feedback.”
Yes, well understood.
By whom again?
R. Gates,
The only problem with your claims about the idea of CO2 being the main regulator of the climate is the lack of evidence to support it.
The pesky lag of CO2 as a response to increases in temperatures does not support your case at all. That we still do not know what causes glaciation does not lend itself to your underlying assumption that climate science fully understands climate.
Calling the theory of how CO2 works in the atmosphere a discovery of physics is a claim that does not hold up at all.
It is a claim about how the physics of CO2 applies itself – an engineering claim. It would be as if claiming that the invention of oil cracking in refineries were a discovery of basic physics. It is applied physics, like the theory of AGW.
Evolution suffered through this in the first decades of the last century, when eugenicists claimed that their plans were a direct result of evolutionary science. Eugenics was actually a mix of science with a great deal of Malthusian thinking, then-current prejudice and elitist group think.
That combination will sound familiar to an observer of today’s great social/science mix.
He is mostly right in a world without clouds and large convective storms, which could act as positive or negative feedbacks. No paleodata exist to tell which, to my knowledge. This is what annoys me–taking the simple case of a moist atmosphere without clouds or storms and saying “see? just physics”.
It’s stretching credulity to imagine that it’s any sort of accomplishment whatsoever – but one has to be seriously deluded to believe that it’s one of the greatest of the 20th century.
I think it is important to at least understand the stepping stones, if not how to get from one to the other in the kind of detail that Pierrehumbert does. I think of the path as something like this.
(a) Greenhouse gases alone can absorb and emit IR due to their molecular properties. Other gases are transparent to IR.
(b) This property allows the surface to be 33 K warmer than it would be without such gases, or equivalently without an atmosphere, which is easy to calculate from radiative balance (a quick sketch of the calculation appears just after this list).
(c) Since science can quantify the effects of greenhouse gases such as this, it can also predict the effect of changes of composition on temperature. Measurements such as spectra confirm that science quantifies greenhouse gas interactions with IR very well.
(d) AGW is built on such predictions that can quantify how much warming is obtained by, for example, doubling CO2.
(e) There are feedbacks, and this is where the research is. Water vapor and surface ice albedo certainly lead to positive feedbacks, while the cloud feedback is still uncertain in sign. This is not because clouds are radiatively more complicated than gases (in some ways they are simpler), but because cloud distributions may change with climate. So far nothing indicates a particularly strong cloud feedback in either direction.
There is a lot of science between each step, but it helps to know where you are going before understanding the details. I don’t know if my steps are the best route from physics to AGW theory, but maybe it helps to have some route like this.
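For anyone who wants to check step (b) above, here is a minimal sketch of the radiative-balance arithmetic behind the 255 K, 278 K and 33 K figures that recur in this thread (Python, assuming a solar constant of roughly 1366 W/m2 and a uniform, rapidly rotating sphere; an illustration of the standard textbook formula, not a climate model):

# Radiative equilibrium: absorbed solar power equals emitted thermal power,
# S*(1-albedo)*pi*R^2 = sigma*T^4 * 4*pi*R^2, so T = (S*(1-albedo)/(4*sigma))**(1/4).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1366.0        # solar constant at Earth's orbit, W m^-2 (approximate)

def equilibrium_temperature(albedo):
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_black = equilibrium_temperature(0.0)   # ~279 K, the zero-albedo "black sphere" case
t_earth = equilibrium_temperature(0.3)   # ~255 K, with Earth's albedo of about 0.3
print(round(t_black), round(t_earth), round(288 - t_earth))  # ~279, ~255, ~33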
albedo is now a positive feedback?
Without looking it up, how many forms does water in the atmosphere exist in?
“except for clouds”?
For starters.
Whenever I read some earnest believer making these deterministic outlines of how simple the climate system really is, I am amazed.
The answer is 2.
H2^18O and H2^16O.
:-)
Actually, if you count isotopes, I’d say 54.
H2O, DHO, D2O, THO, T2O and TDO
times O16, O17, and O18
times ice, water and vapor
That's assuming the Grateful Dead's publishing company's version doesn't exist in the atmosphere.
If you don’t think ice albedo has a positive feedback, you need to look at the Ice Ages again. (Note positive feedback amplifies warming and cooling effects).
A spherical black body without an atmosphere suspended in space at the same distance from the Sun as the Earth, with unity surface emissivity has a radiative-equilibrium temperature of about 278 K. Not 255 K.
Dan – I'm not sure what point you are making, but in case it's relevant, the Earth has close to unit emissivity in the infrared, but it is far from a black body in relation to solar radiation, about 30 percent of which is scattered or reflected by clouds, the surface, and Rayleigh scattering from atmospheric gases.
Thanks – You're right that without significant surface albedo, the temperature would be about 278 K. For Earth, surface albedo is determined particularly by snow and ice, but also by sand and other light colored, non-volatile materials that would exist without an atmosphere, and so the temperature would be higher than 255 K but lower than 278 K.
And, if you allow the atmosphere to contain water vapor, aka clouds, you must account also for that contribution to the radiative-equilibrium-temperature energy budget. I’m referring to that other effect of water vapor beyond cloud albedo, if you get my drift. And, imo, a more nearly complete accounting would assign an emissivity less than unity to the Earth’s surface. The calculated temperature is then greater than 278 K but maybe lower than 288 K.
Dan – You've lost me completely. If, as Jim D was stating, the atmosphere had no water vapor or CO2, but all else, including albedo, was held constant (although he didn't state that), the temperature would drop to about 255 K.
Realistically, if there were no water, there would be no cloud or snow-ice albedo, but atmospheric and some surface albedo would remain, and so taking that into account would raise the temperature back up, although I don’t know to what level. I think the main point about the 33 deg C difference is that the greenhouse gases contribute that much warming and not that one could realistically envision a world in which they were absent but nothing else changed. It’s a way of artificially isolating one variable from a system in which the variables all interact, and is not meant to be something that could actually happen.
The greenhouse effect of cloud water is also included.
Why would you assign an emissivity of significantly less than unity to the Earth’s surface in infrared wavelengths?
Fred, not to continue to be off-topic, but you have stated the problem that I have with the usual presentation of the situation; my emphasis in your quote:
That's my problem with this incomplete specification of the spherical-cow version. A material that is radiatively interacting with the incoming short-wave energy is introduced into the physical picture. But it is seldom, if ever, noted that that same material is also radiatively interactive with the out-going long-wave energy. The effect of the interaction with the short-wave is introduced into the calculation of some kind of grand global average temperature. The completely analogous effect on the long-wave energy, interaction with radiative energy transport, is incorrectly omitted from the calculations. Then it is simply stated that GHGs, if I may use that nomenclature, are responsible for the 33 K difference between the calculated number and the observed number. When all the time the material is right there in the model.
If I made such a presentation to an engineering audience, I would get called on it in short order: I've made up my own version of an energy balance approach to the physical picture, in which I've conveniently omitted an important aspect; a very important aspect. Why not present the effects of the already-introduced material? Then, to focus on the issue of importance, humankind's effects by way of introduction of CO2 into the atmosphere, show estimates of the effects of this material alone. Don't lump it in with the effects of the material that was introduced into the model but incorrectly omitted from consistent treatment of phenomena and processes.
Rules of thumb, spherical-cow versions, and other rough estimates can be effective. But only if they involve some relationships that can actually happen.
I don’t know how the Earth’s Moon got into this discussion.
I think I did not say "significantly less than unity".
Oops, the emphasis doesn't show up in a blockquote. I was aiming for "artificially" and "is not meant to be something that could actually happen".
Dan – We may be talking past each other. The 33 C figure is correct within the context in which it’s presented. I infer that you are not challenging that, but rather what you see as the implications. However, the intended implications are not that such a scenario can happen but merely that greenhouse gases are what permit our climate to operate at tolerable levels even far from the equator. I’m not sure whether the unrealistic nature of the hypothetical scenario is your point, but if it is, could you illustrate with some specific examples why this might be a problem? Do you think the 33 C figure is being misrepresented as something it isn’t? This, to my mind, is not a spherical cow example, because I don’t see the 33 C as being used to derive erroneous conclusions. In fact, the greenhouse strength implicit in that figure is exactly what can be used for accurate evaluation of current changes in CO2 concentrations.
Also, I don’t know whether this is relevant, but Pierrehumbert is correct that if CO2 were removed from the atmosphere, temperatures would drop precipitously to the point where most water vapor condensed out, and so the total temperature reduction would be severe because of the combined loss of the greenhouse effect of both substances. Something this total has never happened on Earth, but previous times with Earth in an “icehouse” state have come close.
OReily? During the Paleozoic Snowball Earth, CO2 was about 4,500 ppm. And it was low during the Permian, but temperatures surged BEFORE CO2 rose.
Paleoclimate has no comfort for the GHG Believers.
I don’t know how the Earth’s Moon got into this discussion.
You brought it up, it’s the “spherical black body without an atmosphere suspended in space at the same distance from the Sun as the Earth, with unity surface emissivity”.
The point was, it is easy to see that greenhouse gases contribute 33 degrees, all else being equal (including albedo). We know that despite the albedo being 0.3, the average surface temperature is not 255 K but nearer 288 K. Why is that? GHGs is the answer, precisely because all else is held equal to isolate them. Sorry if this is confusing.
You can have an O2/N2 atmosphere and the surface would still be 255 K. The discussion about no atmosphere does not really help with understanding the greenhouse effect.
Alternative explanations for why we are 288 K rather than 255 K are welcome, but none have been put forward, which speaks volumes.
The one we know about has a daytime average of 380 K and a nighttime average of 120 K, which comes to a 250 K overall average.
Hmm, nighttime is a little high and daytime a little low. Must be some kinda Greenhouse effect!!
Those are the values I’ve seen, can’t be a GHE since there’s no atmosphere!
That’s a dumb joke Phil.
A black body would go to zero with no absorption and peak higher. What mediates the temperature is the conduction of energy away from the surface layer, which is then radiated at night when there is no incoming radiation. Just like the Greenhouse effect!! Yet some would have us believe that all the slowing of the cooling at night is due to back radiation, and that all of the alleged 33 C difference between no atmosphere and atmosphere is due to CO2!! When you leave out enough small contributors the last small contributor can be made to LOOK larger than it is.
What happens when unphysical thought experiments become the primary tool of communicating complexities.
Who’s joking? Not me.
Dan Hughes said the following:
A spherical black body without an atmosphere suspended in space at the same distance from the Sun as the Earth, with unity surface emissivity has a radiative-equilibrium temperature of about 278 K. Not 255 K.
We have one, it’s not a thought experiment, it’s called the Moon and the temperatures there are as I stated.
Not quite, Phil. First, the surface is not uniform, nor is the surface emissivity unity. Second, there actually is an atmosphere (although very tenuous).
Nothing’s perfect.
I don’t see the requirement for the uniformity in Dan’s statement? Surface emissivity in the IR is sufficiently close to unity to make no difference and invoking the lunar atmosphere is quite a stretch! Nothing is perfect indeed but the real world (Moon) shows that Dan is off the mark.
I liked this:
“Apart from its role in the energy balance of planets and stars, it lies at the heart of all forms of remote sensing and astronomy used to observe planets, stars, and the universe as a whole. It is woven through a vast range of devices that are part of modern life, from microwave ovens to heat-seeking missiles. ”
I think we still missed the boat Judith on the application of RTE in everyday engineering.
Steven,
There is far more missing than that.
Science and science fiction are very much interwoven.
I think we still missed the boat Judith on the application of RTE in everyday engineering.
I’ve been under the impression that this was given up due to the perception that seemingly everyone agreed that RTE was reality.
The article from Pierrehumbert reads as “assume a spherical cow.”
The point of departure vis a vis climate seems to be the notion of equilibrium. Energy out = energy in, sure, but the atmosphere is dynamic; the Fermi satellite was just in the news re detection of gamma ray and antimatter bursts from thunderstorms. I don't think anyone has enough of a handle on the mechanisms of energy release to space. This speaks to our understanding (and lack thereof) re energy equilibrium.
Moreover, until such time as we have a working explanation of the MWP and the LIA (something that isn't just 'scientific' guesswork aka SWAG) then it seems presumptuous at the least to start running with RTE and making grand astrological-quality predictions about the weather 50 years hence. The problem with discussing RTE is that the dubious explanations of the past say that we don't *really* know how the climate (and RTE) worked in 1276 to create those conditions, therefore we don't understand it well enough to make useful predictions. And again, the climate of the MWP was a point of equilibrium, as was that of the LIA.
The “out of balance” meme re climate alarmism suggests that there’s a natural equilibrium that man is upsetting. The notion of “tipping points” is also invoked metaphorically to make the same suggestion. The pushback from the skeptical circles is therefore likewise tied to equilibrium: “out of balance? Sez who?” For example, much of the claim of increased storms etc seems to come from the idea that man is pumping GHGs too fast therefore the equilibrium is upset and energy MUST be released, so increased energy in the atmosphere ought to translate to more violent forms of energy release hence more storms. Disruptive storms, too: global climate disruption.
(And I probably have this wrong. This of course is remarkable given that I tend to follow this stuff, and if I’m that far wrong then the idea that most people have any sort of useful clue is preposterous.)
Anyway, it’s not RTE that’s the problem. It’s that cows aren’t spherical.
randomengineer:
Yep, the GRBs were already widely known, the antimatter only to folks like Lubos Motl.
But I enjoyed hearing about that. One of the problems with the role of ‘climate science’ as harbinger of doom is that, surely, some of the fun of discovery goes out of the field.
Feynman's already been mentioned, for good reason. Roger Penrose is another mathematical physicist with an irrepressible sense of fun. I like that. Something about how little we know about thunderstorms tickles me, given how many of them are active over the earth every second I type this. Of course I want to know more. But the journey is the reward and all that. I think we lose a lot of that because of the pressure of the activism.
“I’ve been under the impression that this was given up due to the perception that seemingly everyone agreed that RTE was reality.”
It would be nice to collect a list of skeptics who say so.
Further, one need not understand the LIA and MWP to make useful projections. You can, and people have, made useful projections on the back of an envelope. For example, just by looking at the Chinese stealth fighter I can tell you that it has severe problems with broadband all-aspect stealth. To a first order that can be determined on inspection, if you understand the physics. Same with climate: you don't actually need a GCM or an understanding of the LIA or MWP to make a good first order approximation.
http://www.realclimate.org/index.php/archives/2010/07/happy-35th-birthday-global-warming/
In fact, unless your goal is a regional adaptation plan, I would argue (maybe just to piss people off) that the back-of-the-envelope calculation is enough information to get people to take the need for nuclear seriously. It's enough information to get people to change the way they plan for floods and droughts and storms.
“Further, one need not understand the LIA and MWP to make useful projections. You can, and people have, made useful projections on the back of an envelope.”
Oh, brother!
Further, one need not understand the LIA and MWP to make useful projections.
Of course this depends on *which* projections, doesn’t it? Seems to me Trenberth is right from a certain POV. That is, imagine climate as being the same curve as sunspot cycles; the current projection from Hathaway is max = 59. Look at the deviation from the fitted curve; the deviations in sunspot counts are essentially superimposed on it. Climate similarly should look the same — deviations up and down from the projected curve. The deviations (i.e. hot/cold temp records) are relative to the era average. All Trenberth really says is that we’re at spot X on the curve, so everything will be relative to this. Which of course is sorta obvious: in an Ice Age hot and cold records are relative to the period as well.
It's enough information to get people to change the way they plan for floods and droughts and storms.
Certainly what I just outlined is sufficient. But this isn't even a back of the envelope calc, this is just a simple and generalised guess based on a curve shape. I'm not convinced that GCMs and 2.5 billion USD are telling me any more than this.
For my 2.5 billion USD I don’t think it’s a lot to ask to nail the conditions causing the MWP and LIA. It might make me warmer and fuzzier re predictive ability beyond what I outlined.
Reading most responses here, I think the answer to Judith’s question tips heavily to a “no”.
As usual, Bart, you choose (perhaps cherrypick) way too small a sample
Not just “No” but “Hell no”.
The "let's discuss all points of view" approach, coupled with Dr Curry's lack of knowledge about the specifics of greenhouse calculations and radiative transfer (I don't know anything about it either, but I'm not claiming my lack of knowledge is in any way notable or unusual), has created the impression among many that the physics of the greenhouse effect is actually uncertain.
Even topics like the direct radiative effect of a doubling of CO2, which were previously a point of agreement between knowledgeable skeptics and non-skeptics alike, are now up for "debate" as to their certainty and even existence.
Examples:
Half of the energy is flung out to space… (along with the model projections)
“Like Judith Curry (see her blog, Part I and Part II), we think the calculation of a 1.2C warming for CO2 doubling is opaque and uncertain, and open to challenge. On the face of it, it may well be half that, around 0.6C. (And it’s not like those who aim to alarm us, ever exaggerate or hide behind obscure and unexplained data or calculations, is it?)”
Climate sensitivity
“As a starting point, the authors assume the magnitude of CO2 radiative forcing as it is used by the IPCC, however, there is currently no way to accurately verify what they assume to be the sensitivity of surface temperatures to radiative forcing due to CO2 in the absence of any feedbacks.
Judith Curry has an interesting article on CO2 sensitivity that is well worth reading.”
In Search of a Lost Greenhouse Effect
“In the recent post CO2 No Feedback Sensitivity Curry questions even the very starting point of CO2 alarmism, namely a climate sensitivity of 1 C from a direct application of Stefan-Boltzmann’s Law Q = sigma T^4, which in differentiated form with Q ~ 280 W/m2 and T ~ 280 K reads dQ ~ 4 dT and thus gives dT ~ 1 C upon input of “radiative forcing” of dQ ~ 4 W/m^2.
This is along the criticism I have expressed: To take 1 C as a starting point for various feedbacks is not science, because the formula Q = sigma T^4 as a model of global climate is so utterly simplistic: One can as well argue that one should take 0 C as starting point, and then enormous feedbacks would be required.
Curry admits that she does not know the physics (“the actual physical mechanism”) of any atmospheric greenhouse effect, and she asks if there is anyone somewhere out there in cyberspace who does. Isn’t this strange? Is the greenhouse effect dead? Was it never alive?
Compare with Slaying the Sky Dragon: Death of the Greenhouse Gas Theory (now #1 on Amazon ebook lists).”
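For readers following the quoted derivation, the differentiation can be written out numerically as below (a minimal sketch using the round numbers from the quote, Q of about 280 W/m2 and T of about 280 K; it only reproduces the quoted arithmetic and says nothing about whether that is the right starting point, which is what is in dispute):

# Differentiate the Stefan-Boltzmann law Q = sigma*T^4:
# dQ = 4*sigma*T^3 * dT = (4*Q/T) * dT, so dT = dQ * T / (4*Q).
Q = 280.0   # W m^-2, round number from the quote
T = 280.0   # K, round number from the quote
dQ = 4.0    # W m^-2, the assumed "radiative forcing" for doubled CO2

print(4.0 * Q / T)          # dQ/dT ~ 4 W m^-2 K^-1
print(dQ * T / (4.0 * Q))   # dT ~ 1 K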
To me the issue appears to be going backwards rather than forwards. Perhaps some now grudgingly accept some energy can be transferred back to the surface (though naturally not enough to be a problem or measurable etc etc) but many others now have reduced confidence in the physics of the greenhouse effect.
shaper00,
So Dr. Curry is ignorant, whereas you, our anonymous internet self-declared expert, get it?
Hmmmmm…….who should be credible in this? Someone who writes text books on the topic, is willing to at least admit there are problems, or someone who sources those who deliberately misquote and mischaracterize the issues and our hostess?
shaper00 or Judith Curry…..a tough call. Not.
Hunter,
I'm going to have to give you a failing grade for your understanding of what you just read. I'd restate it in an attempt to help you understand it better, but having read your many many (many!) comments I already know it will simply trigger another reflexive and poorly thought out response.
shaper00,
Fortunately you are not my teacher.
Let’s not get into a slanging match again chaps, huh?
As some others above have intimated, I think you're asking the wrong question Dr Curry. The GHG theory, or the science behind it, seems reasonably well defined to my mind (though I get lost in the more complex calculations embedded in the physics); it's just the APPLICATION that's the issue.
I don't think many would argue that in a simple system increasing GHGs will raise temperature. It's just our understanding of the feedbacks, sensitivity and specifically clouds that are the issue.
I’m still reading through the entirety of the article (it takes me a while when equations are involved lol) but that’s my understanding of the situation at present.
Labmunkey, I agree that the APPLICATION should be the issue, I’m hoping I can get people to agree on this so we can move on to the more challenging issues.
Well, you’ve got one to agree for what it’s worth :-)
Also, just like to say that the climate clash article is very informative for a novice like myself- thanks for posting it.
How many have to agree before we can move to the juicy bits :-)
42 :-)
That's 101010 in binary. It has to be the right answer, therefore. But no pressure :)
Dr. Curry,
Yet how many posts here assume that AGW is basic physics and that the climate is deterministic?
err, I have yet to see that
Steven,
I will look for examples of this, but it seems clear that the climate science consensus effort at declaring a worldwide climate disruption off of a few degrees' change in the global temperature average is fairly deterministic.
The tragic mistakes in Australia regarding not building flood control, due to the predicted droughts of AGW, seem rather deterministic as well.
The curate's egg springs to mind: good in parts!
On page 8 we are told that "CO2 is just planetary insulation", like "adding fibreglass" insulation.
I will buy that, so no more of the miraculous properties of backradiation where a colder surface (atmosphere) increases the temperature of a warmer surface (Earth surface).
All CO2 does then, is reduce the loss of heat of the Earth surface.
There is always more radiation of every frequency leaving the Earth surface than entering it (excluding temperature inversions).
N2 and O2 also play their part in reducing the 3 methods of heat transfer and the central role of H2O phase change is particularly important.
However elsewhere in the article we find that CO2 becomes a sophisticated thermostat.
It regulates the temperature in a way that loft insulation cannot.
It's this other role for CO2 that lacks any convincing proof.
Steven Mosher says in post above.
……”The trace gas argument is usually made by commenters who heard it somewhere. It’s annoying to see it used. really annoying.”….
Well sorry Steven, CO2 is still a trace gas and its effects are determined by that fact.
And Bryan, can you tell me: have you ever had to calculate the effects of CO2 on the propagation of IR through the atmospheric column? Has anyone's life ever depended upon your ability to do this? So tell us, what physics did you use to do this calculation?
Steven Mosher
It's not too hard to calculate the thermalisation of the atmosphere by absorption of, say, 15 µm radiation, if that's what you mean.
However the bulk result of experiments carried out by, for instance, R. W. Wood shows the radiative effects of CO2 at atmospheric temperatures to be very small.
Steven Mosher
At the TOA the importance of CO2 becomes more evident because of its ability to radiate to space.
Steven,
Has anybody's life ever depended on you, Dr. Curry, Hansen, SoD or anyone else calculating properly how IR propagates through the atmosphere due to CO2?
I can only speak for myself. Yes. Anyone who has ever worked on a weapon system since say 1980 will understand the importance of getting those calculations correct. Anyone who has ever built a satellite system that senses the state of clouds and temperatures throughout the atmospheric column (and warnings provided by these systems do save lives) understands the importance of getting these calculations correct.
Just for starters, look at the charts Ray provides that show the effective height where the planets radiate to space. If you can understand that, then we are on the way. And, what's more, you can believe everything Ray says and still be a skeptic. So you can learn something and still be a skeptic. Bonus time.
Except that isn’t an altitude, it is an average temperature which is being INTERPRETED to be an altitude partially based on observational data. With the inversions in temperature in our atmosphere, could you demonstrate to us how that affects the actual radiation altitude(s)?
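For what it is worth, the usual back-of-the-envelope version of the effective radiating height goes like the sketch below (assuming an effective emission temperature of about 255 K, a mean surface temperature of about 288 K and a mean tropospheric lapse rate of about 6.5 K/km; real emission is spread over a range of heights that vary with wavelength, which is part of the objection above):

# Height at which the mean lapse rate brings the surface temperature
# down to the effective emission temperature seen from space.
T_SURFACE = 288.0     # K, approximate global mean surface temperature
T_EFFECTIVE = 255.0   # K, effective emission temperature from radiative balance
LAPSE_RATE = 6.5      # K per km, approximate mean tropospheric lapse rate

print((T_SURFACE - T_EFFECTIVE) / LAPSE_RATE)  # ~5 km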
Steven,
Excellent answer. I stand corrected.
However, do you see equal significance in the world of climate?
Well sorry Steven, CO2 is still a trace gas and its effects are determined by that fact.
As far as radiative transfer in the troposphere is concerned N2, O2 and Ar are trace gases and their effect is determined by that fact!
Phil. Felton
CO2 and the other radiative gases play a vital role at TOA radiating away the long wavelength EM radiation.
Near the Earth's surface CO2's radiative effects don't seem to have any major practical effect.
Certainly not the claimed 33C increase in Earth temperature.
This is because it is a trace gas.
R W Wood did a famous experiment to prove that point.
Yes he did a quick experiment and unfortunately his analysis was wrong. As he said: “I do not pretend to have gone very deeply into the matter, and publish this note merely to draw attention to the fact that trapped radiation appears to play but a very small part in the actual cases with which we are familiar.”
Emphasis mine.
Phil. Felton
R W Wood was probably the best experimental physicist that America ever produced.
The quality of genius is that they quickly get to the point.
Wood nailed two points in this experiment.
1. Greenhouses (glasshouses) work by stopping convection.
2. The radiative effects of CO2 are very weak at atmospheric temperatures.
G&T did an experiment to confirm the conclusions of Wood.
This also is an interesting paper especially as it comes from a source with no “spin” on the AGW debate.
The way I read the paper is it gives massive support for the conclusions of the famous Woods experiment.
Basically the project was to find if it made any sense to add Infra Red absorbers to polyethylene plastic for use in agricultural plastic greenhouses.
Polyethylene is IR transparent, like the rock salt used in Wood's experiment.
The addition of IR absorbers to the plastic made it equivalent to "glass".
The results of the study show (page 2) that
"IR blocking films may occasionally raise night temperatures" (by less than 1.5 C), but "the trend does not seem to be consistent over time".
http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf
Near the Earth's surface CO2's radiative effects don't seem to have any major practical effect.
Apart from heating up the atmosphere by absorbing IR from the surface which N2, O2, & Ar can’t do!
Phil. Felton
……”Apart from heating up the atmosphere by absorbing IR from the surface which N2, O2, & Ar can’t do!”……….
They don't need to!
The non-IR-active gases such as N2 and O2 get their energy in the main from conductive transfer with the Earth surface.
This is why the atmospheric temperature in the troposphere is at its highest at the surface.
No, the rate of heat loss from the surface to the atmosphere by conduction is much lower than by radiation. CO2, being a very strong absorber of the IR emitted by the surface, transmits a lot of heat to the atmosphere (maximally near the surface). The resulting profile exceeds that allowed by convection, so that sets up the actual lapse rate. The profile is the result of a radiative-convective equilibrium.
Phil. Felton
The discussion on lapse rates continues between myself and Pekka below.
The dry adiabatic (maximum) lapse rate for the troposphere is derived without radiative effects considered.
The actual lapse rate at a particular time must include in addition the effects of convection, latent heat and radiative effects particularly water vapour.
To be clear, most of the 33 C is due to another trace gas called H2O. CO2 contributes about 20% to the greenhouse effect.
Jim D
Do you realise that the dry adiabatic lapse rate for the temperature profile of the troposphere is derived without including any radiative effects whatsoever?
The 33 C invention is based on a fictitious Earth with no oceans or atmosphere; what's that supposed to prove?
Bryan,
The adiabatic lapse rate would not apply without radiative effects. It is an upper limit for the stable temperature gradient, but this limit would not be reached without radiative effects.
Pekka Pirilä
The alteration from the dry 9.81K/km is mostly down to convection of “moist” air and its latent heat implications.
Bryan,
This is a completely different thing. I was not commenting on the value of the limiting lapse rate, but on whether the limiting value is reached.
Without radiative effects the real lapse rate would be much smaller than these limits. It would not be 6 or 9 K/km, but something much less, like 1 K/km. I do not know the value, but the radiative effects are the only reason for reaching the limit set by convection and thermodynamics, or even coming close to these limits.
Without radiative effects there would not be any large scale convection, but only conduction and some mixing. The radiative effects create the temperature differences that drive the convection.
Pekka Pirilä
I was careful to say I was talking about the troposphere.
Above the tropopause the radiative gases largely set the rate at which energy leaves the Earth.
So the way I see it the major limiting effects on the whole atmosphere are;
1.TOA – radiative
2. Earth surface- the effects of solar radiation.
3. Troposphere – Gravitational compression, convection and phase change with radiation playing a minor part.
Pekka
Take a single molecule of air moving vertically upward.
This gives the vertical profile of temperature against height.
It leaves with a temperature characteristic of the surface, say 288 K.
Its RMS velocity will be around 560 m/s.
As it moves up vertically its KE changes to PE.
At any point, loss in KE = mgh
3kT/2 = mgh
g = 9.81 N/kg, the gravitational field strength.
This is where the magnitude of the lapse rate comes in.
By a simple application of the Kinetic Theory of Gases in the Gravitational Field we derive the adiabatic dry lapse rate without reference to any radiative effects whatsoever.
Bryan,
Your statement about the origins of the lapse rate is not true.
Assuming that the temperature at the surface is 15 C, the average vertical speed of air molecules (N2 and O2) is about 290 m/s (total average velocity about 500 m/s). In free space a molecule sent upwards with that speed would rise for 30 s, reaching an altitude of 4300 m. Stated another way, the temperature would be close to absolute zero at that altitude, if your argument were correct. The atmosphere would also be extremely thin, as no molecules would have enough energy to go much higher up.
In order to get any understanding of the lapse rate one must consider the equation of state of air (near to ideal gas, when relative humidity is well below 100%) and some additional physics.
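Pekka's ballistic numbers are easy to check (a minimal sketch: one velocity component of an N2 molecule at 288 K thrown straight up in a vacuum with no collisions, which is of course not how a real gas column behaves):

import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
M_N2 = 28.0 * 1.6605e-27  # mass of an N2 molecule, kg
T = 288.0                 # K, roughly 15 C
G = 9.81                  # m/s^2

v_z = math.sqrt(K_B * T / M_N2)  # RMS speed of one velocity component
print(v_z, v_z / G, v_z**2 / (2.0 * G))  # ~292 m/s, ~30 s to apex, ~4.4 km: close to the figures quoted above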
Pekka
The example I gave was of a thought experiment to simplify the problem to examine the energy changes involved.
It's quite correct.
I carefully said;
….”Take a single molecule of air moving vertically upward.”…….
You can then gross it up for x,y,z directions for N molecules but the basic physics stays the same.
As we move higher all objects gain PE, and this comes from the gas's KE; therefore the temperature drops at 9.81 K/km.
Bryan,
I cannot see any logic in your statement.
How do you reach the conclusion that the temperature drops 9.81 K/km?
Do you propose that there is some connection between the 9.81 in 9.81 K/km and in 9.81 N/kg?
If that is the case, what is the connection between these two values given in completely different units?
Pekka
Yes the magnitude is no coincidence.
See page 40 of Rodrigo Caballero's notes on Physical Meteorology.
These are available freely online if you Google.
Bryan,
The same numerical value (given as 9.8 in the notes) is a coincidence of no deeper meaning. The numerical value would be different if the SI units had been defined in a different way, or if the temperature were measured in Fahrenheit instead of Celsius. It is pure coincidence.
The theory presented in these notes has no relationship with your earlier messages. It contains the correct theory that I was referring to.
The lecture notes of Rodrigo Caballero help in explaining some relevant points.
The equation of state of ideal gas is hidden in the presentation of these notes. Going backwards from the equation (2.92) that defines the adiabatic lapse rate, you can notice that it is based on the requirement that the potential temperature theta does not depend on the altitude. The definition of theta and its connection to the entropy involve applying the equation of state.
Then you can continue to the next section, 2.21, which explains that the adiabatic lapse rate is indeed an upper limit, not the only possible value. Faster cooling with altitude would lead to instability and bring the profile back to the adiabatic lapse rate, but slower cooling is stable and can be maintained permanently. The adiabatic lapse rate is observed when something tries to create a stronger temperature gradient, as this leads to the instability that forces the profile back to the adiabatic lapse rate; but without radiative effects nothing will lead to this, and the real lapse rate will remain less than the adiabatic limit.
Pekka
Your last two replies contradict one another.
The first maintains your previous position.
However the second seems to accept my account without being explicit.
To clear the matter up for others who follow this particular discussion, do you agree with my two points;
1. The DRY adiabatic lapse rate is calculated without reference to radiative effects in the troposphere.
2. The value of 9.81 K/km has its origins in the 9.81 magnitude of the Gravitational Field Strength used as part of the calculation.
A simple yes or no will suffice.
Bryan,
Simple yes or no is seldom appropriate, as they will almost certainly be misinterpreted.
1) Yes and no. The adiabatic lapse rate is a value whose calculation does not involve radiation ('Yes' for that part of the answer to your first question). The adiabatic lapse rate does not exist without radiation ('No' as the second part of the answer to your first question).
2) Strongly 'No'. It is true that g appears in the formula (2.92), but it is pure coincidence that the divisor, the specific heat capacity of dry air, has a value very close to 1 in units of J/g/K. The value is more precisely 1.0035, and there is really nothing fundamental in its closeness to one.
Pekka.
Thanks for setting the record straight.
The maximum or dry adiabatic lapse rate is derived without reference to radiative effects.
However you still seem to be persisting with your point that there is not a close relationship between the dry adiabatic lapse rate and the gravitational field strength (g).
A second source makes it even more concise.
http://www.tech-know.eu/NISubmission/pdf/Politics_and_the_Greenhouse_Effect.pdf
Near the bottom of the page you will find
dT/dH = -g/Cp
A substitution of magnitudes gives the lapse rate as -9.8 K/km.
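For the record, that formula is easy to evaluate (a minimal sketch using round textbook values for g and the specific heat of dry air; it says nothing about whether the dry adiabatic limit is actually reached in a real atmosphere, which is the point under dispute):

G = 9.81         # m/s^2, gravitational acceleration near the surface
CP_DRY = 1004.0  # J/(kg K), specific heat of dry air at constant pressure

lapse_rate = G / CP_DRY     # K per metre
print(lapse_rate * 1000.0)  # ~9.8 K/km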
Bryan,
Certainly that calculation is true, but you should try to understand how the Cp enters the formula. There is a lot of thermodynamics in that. Therefore your statement that it is only gravity acting on the molecules is totally false. The Cp comes in when the equation of state of air is applied to the adiabatic expansion and combined with the dependence of pressure on altitude.
The other point is that the appearance of the same digits (9 and 8) in both numbers is pure coincidence, because it is a consequence of the decisions made when SI units were defined and making the numerical value of Cp of dry air to be close to one was not among the factors influencing the choices. This proximity to one is really pure coincidence.
Pekka
At the start of the discussion I pointed out that as molecules of air move higher away from the surface they gain potential energy.
This work done against gravity has only one source of energy of supply:
the internal energy of the gas.
The molecules MUST lose KE if they are to gain PE.
Therefore their temperature must drop.
Now PE = mgh.
So g is retained until the last line of the calculation.
To imply that its presence is a mere coincidence is highly misleading.
Any careful reader will come to the conclusion that all my points are essentially true.
Bryan, you are quite correct. The answer to your question 1 is very simple: “yes”.
Bryan,
Your theory is so badly incomplete that it is useless. Of course g enters the formula as a factor describing the strength of gravitation. This is important because the pressure gradient is determined by it. The change in pressure is translated to a change in density by the equation of state, but this must be done in a way defined as adiabatic, as the temperature is changing at the same time. In this connection between pressure, density and temperature the specific heat capacity enters the calculation.
The gravity is not considered on the level of individual molecules but on the level of a parcel of gas that is large enough to move without significant transfer of heat or matter through its bounding surface. The speed of motion of this parcel is not important; indeed it is natural to think that it moves very slowly compared to the speed of individual molecules. The reduction in the speed of molecules is seen in the change of temperature, but this is calculated from the change in pressure and the equation of state as I said above, not from the change of the potential energy of individual molecules in the gravitational field.
It is essential to take into account the properties of air as a nearly ideal gas. It is not possible to get right answers without such considerations. The lecture notes describe all this. If they do not explain everything well enough, you should start with an introduction to thermodynamics and proceed so far that you understand well the thermodynamics of adiabatic expansion of ideal gas.
Pekka
….” when SI units were defined and making the numerical value of Cp of dry air to be close to one was not among the factors influencing the choices. This proximity to one is really pure coincidence.”….
So, to paraphrase you, the value of Cp being close to unity is pure coincidence.
It could have been any old haphazard constant.
Well, let's see:
Potential energy gained in lifting one kilogram of air through 1000 m:
PE = mgh = 1 × 9.81 × 1000 = 9810 J
If this energy is gained by the air dropping in temperature we would expect
Heat energy lost = m × Cp × deltaT
9810 J = 1 kg × Cp × 9.81 K
this implies that
Cp = 1000 J/(kg K)
Cp = 1 J/(g K) in grams
So by simply using a potential energy gain from a measured temperature drop we find the heat capacity has to be close to unity.
Or by using thermodynamics to find the heat capacity we can then predict the temperature drop.
The physics is quite consistent, and if a correct, consistent set of units is applied the constants must give definite magnitudes.
The gravitational field strength must be close to 9.81 N/kg near the Earth's surface, and the heat capacity has to be near unity at the Earth's surface when the lapse rate is quoted at 9.81 K/km.
Pekka
You say
….”The gravity is not considered on the level of individual molecules”……
This is utter nonsense!
What is more, it must come as a shock to anyone looking for information here that a number of apologists for AGW (including Pekka) did NOT REALISE that radiative effects play no part whatsoever in the calculations of the dry adiabatic lapse rate.
That gravity plays the dominant role in the temperature profile of the atmosphere is news to them!
I will produce a more extensive reply to Pekka later.
However to deny that air molecules are subject to the force of gravity is so preposterous that I had to nail this gross error instantly!
Don’t worry about more arguments. At least not for me. I am not going to read your messages anymore. Your insulting style has gone beyond what I care to read.
Pekka,
Why would the adiabatic lapse rate not apply without radiative effects?
As the effective radiating altitude would be at the surface, the surface would be much colder.
But wouldn’t the TOA be even colder?
Peter,
The adiabatic lapse rate is the maximum possible lapse rate. It is reached when other mechanisms try to create a higher lapse rate, and the radiative effects are the only effects that push significantly in this direction.
Without convection the lapse rate would be much larger and the greenhouse effect much stronger; the Earth's surface would be some 30 C warmer than it is now. A temperature gradient that is larger than the adiabatic lapse rate induces strong convection, as the density difference between different layers exceeds stability limits, but a gradient that is less than the adiabatic lapse rate is stable and prevents vertical convection, as it does in the stratosphere.
Thus the adiabatic lapse rate is obtained when some mechanism tries to create a larger gradient and the convection stops this attempt. Without radiative effects this will not occur.
But why should the lack of radiative effect prevent the gradient? Surely, even though the surface would be colder, the stratosphere would be colder still?
There will not be any large gradient unless something drives it. It is more natural for the temperature differences to disappear than to appear. Only a strong driver can invert this natural tendency and the radiative effects are the only strong driver that exists.
They drive the temperature difference because they allow for a significant transfer of heat from the top of the atmosphere to space while the heating comes to the surface. Without radiation from the atmosphere both the incoming flux and the outgoing flux connect the surface directly to space (and the sun). The atmosphere settles gradually to a state of small vertical temperature differences.
Pekka,
Thanks. I see your point.
However, that raises yet more questions. As the atmosphere would not be able to lose heat (well, very little), it would get steadily warmer until it stabilised at something approaching the maximum daytime surface temperature.
Which would make it much warmer than it is now, wouldn’t it?
Of course, that also assumes no water in the system.
Peter,
Because the atmosphere would not prevent radiation from the surface from escaping, the surface would reach an effective average temperature determined by the solar irradiance and albedo (and to a lesser degree by the emissivity in the IR, but this is certainly closer to black body than the absorption of solar radiation).
If the earth were totally black and the influence of the atmosphere were left out completely, the earth would radiate as strongly as a black body at 278 K. There would, however, be a very large difference between the equator and the poles.
At the poles the temperature would be extremely low, as the sun would not heat them at all. The temperature could drop to the level of the cosmic microwave background radiation, or 3 K.
At the equator the daily average would correspond to 296 K and the difference between night and day might in theory reach 390 K. At the latitude of 60 degrees the effective average would be only 209 K.
These numbers are theoretical, but they tell how much the temperatures would vary if radiation completely determined them. The large temperature differences between latitudes would induce convective circulation basically similar to the Hadley cells of the present earth, but certainly very different in details. That would create also some temperature gradients, but I do not know anything more about this. In any case it is obvious that a large part of the earth's surface would be very cold, and the existence of sufficient water would lead to a snowball earth with a high albedo and very low temperatures everywhere, even at the equator, due to this high albedo.
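Two of the figures above can be reproduced with a short flux-averaging calculation (a minimal sketch assuming a solar constant of about 1366 W/m2, zero albedo and unit emissivity; other averaging conventions give different numbers, which is one reason such values are only indicative):

import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1366.0       # solar constant, W m^-2 (approximate)

# Equilibrium temperature at the subsolar point (sun directly overhead).
print((S / SIGMA) ** 0.25)               # ~394 K, so a day-night range approaching 390 K

# Equator, averaging the absorbed flux over a full day: mean insolation = S/pi.
print((S / (math.pi * SIGMA)) ** 0.25)   # ~296 K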
That’s an amazing bit of physics you describe, Pekka…where our atmosphere traps radiation. The way a house of mirrors traps a beam of light, I suppose. Let’s take a short pulse from an IR laser and trap that radiant energy in a gas, okay? That would be really exciting to watch.
However, be careful! Don’t let it get out of control, with that “back radiation” greater than incident radiation. You’ll disappear into a mushroom cloud and that would make us all sad.
Pekka,
I didn’t say the surface would be warmer. The average surface temperature would be at the radiative equilibrium temperature.
But the atmosphere would not be able to lose any heat through radiation, and would only be able to lose a small amount due to conduction to the surface at times and places that the surface was cooler. Convection pushes heat upwards, and dry air is a good insulator.
What would stop the atmosphere from gradually heating up towards the maximum surface temperature?
As I posted somewhere else here, a pure O2/N2 atmosphere would also be 33 C cooler at the surface. I am still waiting for an alternative explanation to even be put forward of why the surface temperature is not 255 K.
…… ” still waiting for an alternative explanation to even be put forward of why the surface temperature is not 255 K”….
Well, Pierrehumbert gave a hint when he said the atmosphere acts like insulation.
Radiation, however, is not the only means of insulating the Earth; in fact the N2 and O2 molecules find it very difficult to lose their thermal energy.
…and where would the N2 and O2 be getting their thermal energy from? The sun heats the surface which heats the atmosphere, but the average surface temperature is 255 K, so how would the atmosphere become warmer?
During the day the Sun heats the exposed Earth surface.
In the absence of radiative gases the day heating effect would be even stronger than at present.
The evaporation from the Oceans would be higher.
Latent heat collected by day would be released at night as the atmosphere cooled.
To imagine, as you do, that an Earth without an atmosphere and oceans would have exactly the same average surface temperature does not seem very plausible.
If you imagine a kind of water vapor that has no radiative effect, but has latent heat, the atmospheric temperature is still not going to exceed the surface temperature. Even with latent heat the equilibrium lapse rate is negative.
Jim D
The insulation effect of the atmosphere depends on
1. Conduction
2. Convection
3. Radiation
4. Phase change
5. The Sun's effect on the Earth's surface would be stronger than at present.
For your proposal to be true, when 3 is withdrawn, 1, 2 and 4 would have to become zero and the Sun's radiative output would have to be reduced.
You have not explained how the surface could exceed the radiative equilibrium temperature with any of these processes. An atmosphere transparent to IR has no insulating effect at all.
Actually the temperature might be rather high at the equator (average about 23 C if albedo is zero), but much colder at high latitudes. Thus there could be evaporation in the tropics, but an ice age would be unavoidable as the water would end up in growing glaciers.
The atmosphere would be ineffective in moving heat and the lapse rate in the atmosphere would be small even in the tropics.
Jim D
……”An atmosphere transparent to IR has no insulating effect at all.”…….
A simple experiment would prove this wrong.
Ambient outside temperature 5 C (say); a cup of water with a lid, at say 90 C, placed inside an enclosed box made of a double layer of polyethylene.
The gas inside the box is pure N2.
This makes the experiment transparent to IR.
The temperature of the water is taken over a suitable time scale.
The experiment is repeated with the cup outside the box.
I hope you agree that the cup of water inside the box would lose heat less quickly than outside.
From the G&T paper:
“Alfred Schack, who wrote a classical text-book on this subject [95]. [In] 1972 … showed that the radiative component of heat transfer of CO2, though relevant at the temperatures in combustion chambers, can be neglected at atmospheric temperatures. The influence of carbonic acid on the Earth’s climates is definitively unmeasurable [98].”
—
Of course, the DIY Jackasses-Of-All-Sciences at The Team know better than actual professional specialists in every and any field.
Correction:
…"There is always more radiation of every frequency leaving the Earth surface than entering it (excluding temperature inversions)"…
Here I intended to add "at night", with reference to "backradiation".
Bryan,
Does CO2 move ocean currents?
Does CO2 inhibit evaporation or precipitation?
Does CO2 change salinity?
Dr Curry, are there in your view any new facts or concepts in Dr Pierrehumbert's article? Most of it looks broadly familiar. Is there anything, for instance, which has not been covered (perhaps distributed among several different posts) at Science of Doom? If so, would you be able to draw our attention to any significant departures?
Well, you’re persistent if nothing else.
Is this that special poster again?
it would appear so.
Judith,
When you put the satellite map of the cloud cover:
http://uk.weather.com/mapRoom/mapRoom?lat=10&lon=0&lev=1&from=global&type=sat&title=Satellite
Over the map of the Ocean Surface Temperatures:
http://wattsupwiththat.com/reference-pages/ensosea-levelsea-surface-temperature-page/
There are two areas in the Arctic that are pumping a great amount of evaporation from the warm currents into the dense cold air.
The colder ocean surface area over the equator pretty much lines up also with the ocean salinity changes in the 1970s, as it was expanding too.
Joe.
Do you believe those pictures from space are accurate?
If so, do you believe in the physics used to calculate those pictures?
If so, then you accept what Ray is describing.
If you reject Ray's physics, then those satellite images are wrong. In order to calculate them, in order to transform an EM field that hits the sensor INTO an image, you apply certain physics. Precisely the physics that Ray describes.
Thus, the evidence you cite REQUIRES a physics you deny, to actually be true.
If you doubt this I'll suggest you go look at the description page of the data products produced by satellites.
Steven,
The more I dig into science, the more mistakes I find.
So, what is correct?
The whole area of planetary rotation and the energies it creates was missed, with much garbage in its place. Angles of deflection of solar energy on a moving planet were missed. I found the same mistakes in power generation.
In some cases, just simple measuring would show mistakes.
But this brings another problem…where is NASA measuring from?
The Sun's corona or the Sun's core?
Do the measurements from satellites include trajectory angles?
As I answer questions, many more come into play that were not contemplated.
Being mathematically challenged, I find this all very difficult to grasp – but I have always accepted that the GHG effect helps keep temperatures on earth higher than they would otherwise be. So I'm not inherently looking for there to be any flaw in that principle, so much as for a way to understand the phenomenon without recourse to a level of mathematics that is beyond me.
Just recently, I came across a post at Jo Nova's blog by George White:
http://joannenova.com.au/2011/01/half-of-the-energy-is-flung-out-to-space-along-with-the-model-projections/
Now, the purpose of this post is to challenge the consensus view, but what George does is to develop, over a series of stages, a diagram explaining the earth's energy budget. He displays a sequence of 6 diagrams, and I'm wondering if we can all agree that what he says is uncontroversial as far as diagram #4.
I can certainly follow his logic that far, and I would be interested to know whether anyone here would challenge that. I’m not thereby trying to engage in polemic, but rather to establish what we can all agree on, and whether, indeed, this is represented at diagram #4. And if not, why not, hopefully without the need to launch into maths that will be beyond me and perhaps others here.
I think it would be really useful to try to establish what we can all agree on before taking things further, and that’s all I’m trying to do here.
Here’s hoping for some constructive responses…
Pierrehumbert’s article includes the following blatant falsehood:
” The same considerations used in the interpretation of spectra also determine the IR cooling rate of a planet and hence its surface temperature. ”
The title of his piece is also misleading, suggesting that temperature is determined by radiation.
As has been discussed repeatedly here and elsewhere, you cannot determine the surface temperature unless you can quantify the heat flux from all sources, including convection and evaporation.
I am surprised that Pierrehumbert would make such a claim that he knows perfectly well is not true.
I'm afraid that Judith seems to have a blind spot on this issue. It was raised repeatedly on the threads linked at the top of the post, by Leonard Weinstein, David L. Hagen, Nullius in Verba, philc, Tomas Milanovic, Gordon Robertson, myself and others.
I am not sure if blatant falsehood is the best way to describe this.
Many of the AGW community seem to think that radiative cooling is the only issue….even as they still admit they do not understand clouds, think water vapor is only a feedback, and only positive, and seem to gloss over the oceans. Especially since the oceans stopped cooperating so much on pesky things like OHC.
I see it more as a general inability to do more than find things that support the pre-determined answer. Not deliberate cynical lying.
Everybody admits that very much is unknown. At one end of the spectrum are those who do not accept essentially anything. They may disagree on the human role in increasing CO2 in the atmosphere or the basic radiation physics of CO2. At the other end people may think that the IPCC is far too dismissive and that only the activists with the most threatening views are close to reality. Most of us are somewhere between these extremes.
My interpretation is that Judith is trying in this and some recent posts to find out how far we can move from the extreme of no knowledge towards accepting certain results of science without losing too many people along the way. The progress has been very slow and it is not possible to get everybody on board, but we should be able to proceed to the next issues, where the disagreement is much more widespread and where the scientists themselves start to have significantly differing views.
Many comments in this thread have already gone into these more difficult issues (e.g. climate modeling, or cloud feedback), but I think their place is not here; it will come soon in some other thread.
Pekka,
There is a built-in prejudice that science is correct and any outsider should stand in his place, as if science were 100% correct and should not be questioned.
I found a great many answers, not by prejudiced science but by good hard detective work.
This in many places conflicted with current science.
Joe,
Science is never 100% correct, but in many cases it is 99% correct or even 99.9% correct (as long as it is understood that these figures are only illustrative without well defined meaning).
It is really the same issue as accepting Newtonian mechanics in spite of the fact that Einstein's special relativity and quantum mechanics have shown that Newtonian mechanics is not strictly correct. In the same fashion the radiative calculations are basically correct, although we know that we do not know all the details well. They are based on the well accepted theories of quantum mechanics and electromagnetism, but to calculate anything we must know many features of the atmosphere, which we can pick from experimental observations without recourse to theories – or we can alternatively use thermodynamics and some fluid mechanics to calculate the adiabatic lapse rate.
All this is known well enough for performing reliable calculations of useful accuracy, but not enough for determining the real climate sensitivity with feedbacks.
(There are no effects related to rotation of the earth that would prevent this approach or make it suspect, but the rotation is certainly one important factor in more comprehensive calculations on the level of GCMs or even at a somewhat lesser level.)
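As a side note on the adiabatic lapse rate Pekka mentions: the dry rate follows from gravity and the specific heat of air alone. A minimal sketch using standard textbook values (the numbers below are mine for illustration, not taken from Pekka or from Pierrehumbert’s article):

```python
# Dry adiabatic lapse rate: Gamma = g / c_p
# Standard textbook values; the moist rate is smaller because condensation releases latent heat.
g = 9.81      # gravitational acceleration, m/s^2
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

gamma_dry = g / cp                                        # K per metre
print(f"Dry adiabatic lapse rate: {gamma_dry * 1000:.1f} K/km")   # ~9.8 K/km
```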
Pekka,
This is where your prejudice is showing.
The law of relativity fails when brought back into the planet’s past, when the system was very much different, with no evaporation, and the planet was rotating faster. The atmosphere was completely different, with much less friction and increased motional speed.
Quantum mechanics fails as it does not take account of motion and rotation. In a lab, two points are easy to connect, but in space there are NO fixed positions, as everything is in motion. This also does not take account of rotation.
Pekka, unless you know every single area has been covered to prevent contamination or mistakes, the experiment is prejudiced by the outcome wanted and not the true outcome.
Many areas in science are now coming under fire for not being able to recreate experiments published and turned into learned science.
Joe,
I knew what you were going to say. I have already noticed that you give no value to the huge accumulated knowledge base of science, while I believe that this knowledge base is essential.
The most important single characteristic of science is precisely that it is a process that continuously improves knowledge by the process of using earlier knowledge as far as it helps and adding to it where it fails. In doing that it is always critical of both earlier knowledge and the new ideas, but being critical does not mean denying its significance.
Pekka,
I do not deny science’s significance.
Just correct science.
If you do not know that each point on this planet is different, due to the different energies at each point, how can you understand the complexity of it?
Joe,
Nobody knows what is correct science, but capable professional scientists have a better basis for making the judgment.
In physical sciences the order of magnitude of various factors is often clear to scientists working on issues close enough to the problem considered. They can also verify that their intuition has not failed. The effects that you have introduced in these discussions appear to be several orders of magnitude (factors of ten) too small for being of significance.
When I mentioned 99% correctness, I did not want to imply that the atmospheric calculations would have an error of less than 1%. In this case I only have 99% trust that the basic effects are understood correctly and can be calculated with a useful accuracy (which allows for sizable errors). The list of these basic effects is short, much shorter than the list of all important features of the atmosphere.
In general the micro-level physics of the atmosphere, oceans and radiation is well known (of the 99.9% class), but only a few macroscopic effects can be calculated reliably from microscopic theories and general laws of physics. Even for these the accuracy is not perfect. The behavior of radiation in a fixed atmosphere is one of the things that can be calculated, but fixing the atmosphere is never really realistic. Still, the results obtained are useful, and Pierrehumbert’s article for the most part discusses issues that belong to the well understood class.
Simple question.
Why is the planet cooling then if ALL these scientists know best?
Joe,
What happens for the average temperature in one decade or less is definitely not one of the things that can be calculated from well known laws of physics.
All my comments are comments of a physicist who has never worked with atmospheric sciences. I have strong opinions only on issues that are simple enough to understand based on what I know about physics combined with only basic knowledge of the atmosphere.
Pekka,
Up to 2000, many CAGW supporters, many professional societies, the many governments of the world, the main news media, and yes, you, claimed the evidence is in, the models are conclusive, we know enough, the arguments are over, move on. As the temperature stopped rising (on average), as the hockey stick was shown to be poor evidence, as the low-latitude hot spot in the troposphere failed to show up, as the water vapor content of the mid to upper troposphere was shown to be flat to declining, as the ocean heat storage was shown to be inadequate, and as models were shown to have no real predictive capability (etc.), many admitted “Everybody admits that very much is unknown”, and many, including yourself, only did so after being shown to be wrong on many issues. It is astonishing to me how the extreme supporters of CAGW backtrack and claim they knew there was real uncertainty all along, and expect not to be admonished.
Leonard,
We know that many people have been thinking that they must simplify the message. It is, however, not necessary to go any further than to the IPCC reports to find out that the incompleteness of the knowledge has been acknowledged by the scientists.
The reports are far from perfect in describing the uncertainties, but most definitely they admit the existence of major uncertainties and have done it in all four reports over twenty years.
As ACYL sez, below, that “admission” of uncertainties seems to have few or radically minimized consequences. IMO, the proper conclusion from that admission is that the systemic response of the atmosphere is well outside any possibility of modeling given current understanding of its processes.
Yet the IPCC and Warmists leap immediately to the opposite position, and claim the little they know is enough to go on. I call BS.
‘many admitted “Everybody admits that very much is unknown.”, and many, including yourself only did so after being shown to be wrong on many issues. It is astonishing to me how the extreme supporters of CAGW backtrack and claim they knew there was real uncertainty all along, and expect to not be admonished.’
Yup
That certainly fits in with my own observations of how the conversation has gone in the last few years (though I don’t know Pekka’s specific positions, so I’m only speaking generally).
For a layman like myself, trying to get to grips with the science but having a lot of it go over my head, there is a basic trust issue here. Certain cAGW scientists make strong assertions of confidence – “it’s all simple physics” etc, which may not mean exactly what Al Gore and others meant when they said “the science is settled”, but these kind of phrases resonate and reinforce one another.
So sceptics point out that the “simple physics” is agreed, but that the chaotic, non-equilibrium effects of physics equations are far, far removed from “simple”, in modelling the real atmosphere.
“Yes, of course we know that” chime back the cAGW proponents, “what, d’ya think we’re dumb or something?”
I don’t think they’re dumb – I do think, however, that many subtly exaggerate certainty, until they’re challenged about it. Then they backtrack, but still argue on the basis of their own scientific authority, as if no backsliding had ever happened. As if the media wasn’t still full of the same catastrophic predictions they now admit are open to uncertainty. Why don’t these guys ever correct the record?
On this particular issue, there seems to be an air of “Look, we’ve shown you the physics, it’s a simple step from there to global warming. Why won’t you take that step? You must be either dumb or a denier! I won’t waste time talking to the likes of you.”
I will continue to try to be open-minded, but as the old saying goes – don’t p**s down my back and tell me it’s rainin’!
So, we have to be careful with where we draw the line. Saying one believes in the physics of the greenhouse shouldn’t be extrapolated to mean one believes in cAGW.
You’re right, Hunter.
The problem here is partially correct with CO2 MINIMAL interference.
Ocean surface salt, starting back in the 1970s, interfered with the penetration of solar radiation to the depths where the heat would normally be dispersed. If the solar radiation hit shallow water, the sand below absorbed some of the radiation before releasing it, depending on depth.
So, in essence, the surface salt reflected back solar radiation by partially blocking penetration, similar to the CO2 hype.
But now the ocean heat is gone and moved into the Arctic waters where the cold air is creating massive precipitation.
It is REALLY impressive to see almost ALL of the land mass in the northern hemisphere covered in clouds(water vapour).
OK, sorry, let’s replace ‘blatant falsehood’ by ‘error’ :)
…or ‘blatant error’…. lol
It’s been a while since I looked at the relevant charts in the IPCC report, but I think they do recognize that the water vapor feedback is both positive and negative. An increase in temperature leads both to an increase in evaporation and to an increase in the ability of the atmosphere to hold water vapor. That’s a gross oversimplification, but it is an area of ongoing research.
Water vapor feedback (always positive) and lapse rate feedback (always negative) are typically combined because the variance of the combination is less than the variance of the individual components. The net is a positive feedback under most circumstances, but may vary in some tropical locations depending on circumstances.
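A minimal numerical illustration of the variance point made just above (the numbers below are synthetic, chosen only to show the effect of anticorrelation; they are not model output): when a common humidity-related driver pushes the water-vapor term up and the lapse-rate term down at the same time, the spread of the sum is much smaller than the spread of either term alone.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "model spread": a shared driver moves the two feedbacks in opposite directions.
driver = rng.normal(0.0, 0.3, size=1000)
wv = 1.8 + driver + rng.normal(0, 0.05, 1000)   # water-vapor feedback, W/m^2/K (illustrative)
lr = -0.8 - driver + rng.normal(0, 0.05, 1000)  # lapse-rate feedback, W/m^2/K (illustrative)

print("std(WV)    =", wv.std().round(3))
print("std(LR)    =", lr.std().round(3))
print("std(WV+LR) =", (wv + lr).std().round(3))  # much smaller: the anticorrelated part cancels
```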
Dr. Curry,
You mentioned Slaying the Sky Dragon. I have not read it, but would be interested in reading your review. A well-written review, describing points of agreement and disagreement, can truly advance understanding of the science. I hope to read a lengthy review of the book here. It’s one vote. I hope you will consider it.
An atmosphere is a mixed gas of matter and photons.
— Raymond Pierrehumbert
I can only imagine the awful mess Dr. Pierrehumbert would create if he was given a tough-but do-able engineering problem—for example: passively storing heat energy for a long time. Imagine if the task was to store a hot fluid so a working man’s coffee was still hot for his afternoon break. An engineer would be thankful the fluid is dense and has a lot of thermal mass, then would attack the problem by reducing the most-effective method of moving heat energy around…conduction. Hmmm, how will we reduce conduction to a practical minimum? How about a bit of a vacuum…that would work (similar to a planet in space). Let’s make sure the supporting structure has little direct contact because that would be a thermal path to the outside world. With a vacuum, we’ve also addressed the next most-effective way of moving heat energy…convection. Outstanding! The coffee will convect itself and integrate its temperature, but that’s irrelevant. So, we start with coffee at say…90C and we want it to still be 60C eight hours later. The fluid will radiate, and we can cheaply get a little benefit by silvering the surfaces, but the delta T is low and the radiation is small—we can ignore it. Problem solved!
How would Dr. Pierrehumbert solve this problem? Every good climate scientist knows about “back radiation” so he’ll suspend a glass bottle in a “greenhouse” gas with a lot of CO2. It would make sense to maximize the CO2 for a given area, so he’d pressurize the gas…not a lot, perhaps a couple of atmospheres. He’d need to be very careful to make sure this chain reaction does not get out of control and meltdown…what with all that “back radiation” amplification and all. A careful design would be required. The resulting flask would be expensive, but it would work great (we have models to prove it)…so a government grant would be appropriate to help the worker with the purchase price. Oh, it would be wonderful to demonize the vacuum flask (don’t you care about the children?) and eventually make them illegal…we’ll make workers use the Pierrehumbert flask for their own good.
I know the clamor which will follow. Radiative balance. Yeah, yeah. We’re fortunate to live in a thermal system where there’s a lot of incoming radiation (big delta-T) quickly storing heat energy in large, watery thermal masses…with a slower release back to space (smaller delta-T). If there is any heat left to start our morning in any kind of comfort (often not nearly enough!)…that’s why.
If you prefer the Pierrehumbert Flask for your coffee…don’t let me stand in your way. Have at it.
You forgot one thing – a flask also keeps things cold
Reminds me of the following cartoon:
http://www.vermonttiger.com/content/2008/07/nasa-free-energ.html
The comments are good too.
From my perspective, I can readily accept the principles of radiative transfer and the fact that the earth is warmer than it otherwise would be as a consequence of greenhouse gases including CO2. I have no problem with this at all. However, after reading some of the earlier threads again, I still have trouble accepting that we can quantify the effect at the surface of a doubling of CO2 with no feedbacks, i.e. supposedly 1.2 °C. I freely admit that I don’t understand the complexities of the models etc, but from my simplistic point of view, if we can’t accurately quantify the ‘no feedbacks’ effect then we don’t seem to have a valid starting point for further work that examines the effect of feedbacks and makes valid long term projections. Finally, I am also still confused about what parameters are actually measured and which are inferred or calculated by modelling.
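For reference, the ~1.2 °C ‘no feedbacks’ figure Rob mentions is usually motivated by a back-of-the-envelope argument along the following lines. This is only a sketch, using the commonly quoted logarithmic forcing approximation (Myhre et al. 1998) and a linearized Stefan–Boltzmann response at the effective emission temperature; it is not the detailed radiative calculation that produces the published number:

```python
import math

sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_e = 255.0       # effective emission temperature of Earth, K

# Radiative forcing for doubled CO2 (commonly quoted approximation)
dF = 5.35 * math.log(2.0)              # ~3.7 W/m^2

# No-feedback (Planck) response: dF = 4*sigma*T_e^3 * dT  =>  dT = dF / (4*sigma*T_e^3)
dT = dF / (4.0 * sigma * T_e**3)
print(f"dF = {dF:.2f} W/m^2, no-feedback dT ~ {dT:.2f} K")
# This crude estimate lands near 1.0 K; the often-quoted ~1.1-1.2 K comes from
# more detailed radiative calculations with a realistic atmospheric profile.
```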
You’re asking the right questions, Rob. I don’t deny a temperature effect from CO2…if you want to describe atmospheric CO2 as a radiative dissipator or diffuser, that’s way-cool by me. I simply don’t think it’s a significant contributor to our surface temperature…the effect is far too small to measure or quantify.
I believe something very strange…that 1,000,000PPM of N2, O2 and Argon atoms and molecules have temperatures. 390PPM of CO2 bathed in IR radiation contributes to the N2/O2/Argon temperature, but the reverse is also true…and is a much, much (much!) larger contributor. I do not believe in Little Carbon Dioxide Suns…though some apparently do.
There are many bitter clingers around here who will argue, but the human-caused global warming theme is dead. Sorry, Warmies, you had your day in the sun. It’s over. Don’t fight the inevitable…it’s time to think about getting real jobs.
Thanks Rob, I think that’s a very good summary of where many of us are. Yes, we agree that greenhouse gases have some warming effect. No, we don’t agree that this effect can be accurately quantified and it is probably lower than the 1.2C figure sometimes quoted.
Judith asks
……..”I’m asking these questions because I am wondering whether any progress in understanding of an issue like this can be made on the blogosophere”……
The nearest we came to a consensus on this site was around the atmospheric structure proposed by Leonard Weinstein and Nullius in Verba.
Previously on SoDs site, Leonard and William Gilbert said much the same thing.
Leonard thought that even SoD was in broad agreement, but I’m not so sure that this is the case.
So dialog is useful.
Judith you wrote
But the basis of greenhouse don’t seem to me to provide much intellectual fodder for a serious debate. I’ve lost track of the previous threads (with over 1000 comments on one of them); can anyone provide a summary of where we are at on the various unresolved discussions?
If by “basis of the greenhouse” you mean the absorption and emission of IR by gaseous molecules (what is what Pierrehumbert talks about) then yes, this point is utterly uninteresting.
The Einstein coefficients have been known since … umm … Einstein, and he didn’t even need QM to compute them correctly. That is nearly 100 years ago.
Statistical mechanics has been known since Boltzmann, which is well over a century ago.
All this is VERY old established physics.
Gas lasers and thermometers work as expected.
No, I don’t think that there is any debate worth mentioning in there and this was the case for centuries already.
However if by “basis of the greenhouse” you mean the dynamics of the Earth system in all its components (solid, liquid and gas) and the role of the elementary radiative properties of gaseous molecules in these dynamics then you open a problem whose formidable complexity and difficulty dwarfs problems of quantum gravity.
In quantum gravity there is at least a serious, quantitative, mathematically well-developed theory, string theory, while for the problem of the Earth’s dynamics such a theory doesn’t even exist.
We are still in the prehistory with crude numerical computer simulation and even cruder naive statistics where people arbitrarily average everything and anything because they don’t know what to do with the data.
I think that I have written in my very first post here why I became interested by the climate science and where I see the problem.
I certainly don’t see any problem in boring, trivial details about radiative physics or statistical mechanics.
I see the main problem in the paradigm.
95% of climate scientists (I don’t say 100 because there is at least Tsonis) believe that the system can just be treated with 19th century physics based on static equilibria.
In papers you read all the time phrases like :
“… the perturbed system returns to equilibrium“
“… non linearities are just noise that cancels out”
“… after interaction the system stabilises in a new equilibrium state”
“… climate is not weather and time averages can be deterministically predicted”
Etc
All these statements are just a variation on the same theme – the system is a simple (linear) system in equilibrium where everything that is not linear is stochastical noise irrelevant for the equilibrium.
And what I learned here with surprise through the exchanges was that indeed almost nobody realizes how hopelessly unrealistic and unfounded this paradigm is.
To illustrate what the REAL problem of climate dynamics is, I have posted in the Tsonis thread a link to this paper : http://amath.colorado.edu/faculty/juanga/Papers/PhysicaD.pdf
Despite the fact that this paper finds a MAJOR result and is the right paradigm for a study of spatio temporal chaotic systems at all time scales so also for climate, I suspect that nobody has read it.
And probably only few would understand the importance of both the result and of the paradigm.
Of course the climate is more difficult than even a network of chaotic oscillators because, among others, the coupling constants vary with time and the uncoupled dynamics of the individual oscillators are not known.
Also the quasi ergodic assumption taken in the paper is not granted for the climate.
Yet even in the general case it appears completely clear that the system doesn’t follow any dynamics of the kind “trend + noise” but on the contrary presents sharp breaks, pseudoperiodic oscillations and shifts at all time scales. Of course the behaviours in the case when the coupling constants vary will be much more complicated, and they are not studied in the paper.
Unfortunately people working on these problems are not interested in climate science, and those working in climate science are not even aware that such questions exist, let alone have adequate training and tools to deal with them.
Concerning these paradigm issues, this belongs obviously to the unresolved questions and as far as I am aware, it is only on blogs and among others on your blog that they are discussed.
There are other people apparently knowledgeable in dynamical systems and numerical solutions (Jstults, Dan Hughes and some others …) that I have seen contributing here too.
Actually you yourself, having had a “Navier Stokes past”, are obviously more sensitive to these questions than the crushing majority of climate scientists.
Last but not least.
In the above I have dealt with the theoretical paradigms for the climate science only.
You will have noticed that I didn’t say a word about numerical models.
This has 2 reasons.
First is that numerical models are a special case – they are neither theory nor data.
They are simulators in the same sense as you have flight simulators.
I have been flying both with real planes and with simulators.
A simulator gives you something that looks like flying with no obvious aberrations and is suitable for rudimentary training.
Yet when you pilot a real plane you understand immediately that the behaviour of the plane and of the environment is quite different from the simulator.
Second is that it would need a long post to explain why the climate simulators must always fail over long periods because they don’t solve and will never solve the PDEs governing the real dynamics of the system.
But as this post has already been long enough, I will keep it for another occasion.
Actually Tomas, as a retired airline captain I should like you to know that the airliner simulator is uncannily similar to the real thing in every practical respect. The first time I ever landed a Boeing 767 was at Orlando Sanford airport after a 10 hour flight from London Gatwick. (I didn’t tell the passengers). However, I’m not suggesting that climate simulators are anywhere near being in the same league.
Tomas,
I agree fully that there are no valid theoretical reasons to trust that the earth system has simple statistical properties.
I do not believe that the climate models have such properties unless they are forced to such stability. Thus we cannot use the models to prove simple ergodic properties of the earth system. Here the similarities of different models may be totally misleading, as one of the first things a modeler has to learn is how to make the model stable enough. Soon he starts to think that this stability is inherent in the system being modeled, even when there is no evidence of that.
The only arguments that we may have to support stable statistical properties of the earth system come from empirical observations, and it is questionable whether they support this conclusion rather than the opposite. There are all kinds of irregular cycles, from short-term fluctuations to glacial cycles, and as far as I understand none of them is really well understood. Of course many details are known about ENSO-type cycles of duration suitable for collecting empirical data, but even their understanding is badly incomplete.
Tomas, I agree that the real issues are feedbacks, sensitivity, and nonlinear dynamics. On this thread, I’m hoping to declare the basic physics of IR absorption/emission pretty much a closed subject, so that we can move on to the more complex topics, without getting the threads cluttered up with “the greenhouse effect doesn’t exist.” That is what I am trying to do, anyways.
so that we can move on to the more complex topics, without getting the threads cluttered up with “the greenhouse effect doesn’t exist.”
Are we there yet? Or close?
Tomas. I read what you have written, and I do not pretend to understand it in detail. The impression I get is that you are saying something like this :-
“The earth’s atmosphere is complex, not to say chaotic. Anyone, like the IPCC and climate modellers, who claim to have captured the physics of how the atmosphere works, are simply wrong. The atmosphere is much too complex, so that simple approximations will always give the wrong answer”
Am I anywhere near correct?
….After reading through a couple of thousand old comments on various old posts in the last two days, I would say we are there actually. In fact I recall seeing hardly any comments along the lines of ‘the greenhouse effect doesn’t exist’, but there are plenty questioning what that means in a practical sense. :)
Tomas, thanks for this comment; one of your older comments inspired this post of mine about predictability and averaging.
Up thread a-ways, Fred Moolten says:
Nope; the climate is a huge multi-scale problem; there is no meaningful “separation of scales” that would support what you are claiming (as is the case in many turbulent, reacting fluid-dynamic systems). This one is not controversial either. Averaging does not make the chaotic predictable (or unchaotic). Lorenz was clear on this fact. Modern climate scientists like James Annan haven’t forgotten it: see page 3, discussion of Figure 2 (though it might be useful to pretend that the chaotic is stochastic for certain tasks like parameter estimation). It’s just the uncritical, activist cheerleaders who seem to have forgotten (or never learned it in the first place).
Your new post is very, very good. I would be most interested in hosting a thread here on your post; let me know if you would like me to do this. Alternatively, I can write a more practical post on ensemble interpretation (and the pointlessness of the ensemble mean) and refer to your article.
Sure, I’d appreciate a link when it makes sense as you move the discussion more towards dynamics (I don’t think my notes are worth a whole discussion thread though). For the folks that are interested my little Lorenz63 series of posts starts from setting up a numerical solution method, and proceeds to using the toy to demonstrate various things and gain insight into certain questions. As Pekka points out below, it is just a very simple toy, so the insights we can gain are limited (but I still think useful!).
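For readers who want to play with the same kind of toy model jstults describes, a minimal Lorenz-63 integration can be set up as sketched below (standard parameter values from Lorenz 1963; this is a generic sketch, not jstults’ actual code):

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=5000):
    """Fixed-step RK4 integration; returns the full trajectory."""
    traj = np.empty((steps + 1, 3))
    traj[0] = state
    for i in range(steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i + 1] = state
    return traj

# Two nearly identical initial conditions diverge: sensitivity to initial data.
a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-8]))
print("separation after 50 time units:", np.linalg.norm(a[-1] - b[-1]))
```

Running variations of this (ensembles of initial conditions, time averages over windows of different length) is the simplest way to see for oneself what averaging does and does not buy you in a chaotic system.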
Good. Peter Webster and I are preparing a post on the hazards of averaging; hope to have it up on Monday, and will link to your post.
jstults – I can agree with you that in a hypothetical climate on a hypothetical planet in a hypothetical millennium, chaotic elements might overwhelm an ability to discern trends because of their magnitude and/or overlapping timescales. Currently, in this climate and millennium, that is not the case. The most conspicuous internal, unpredictable variations, such as ENSO, AMO, PDO, do average out – probably not totally, but sufficiently for the centennial-long warming trend to be discernible.
This is not to say that elements outside the long term trend are easy to separate out on shorter timescales, or for that matter that much longer variations (e.g., multimillennial) might not also be hard to disentangle, but on timescales of greatest import to us, the non-chaotic elements can be well characterized, and their trends quantified. An informative method for extracting this trend data from contaminating variations, known or unknown, is described by Wu et al – trends and detrending .
It is important to recognize chaotic elements in our climate, but equally important, given the empirical evidence, not to mischaracterize the climate system as a whole as chaotic.
Thanks for the chuckle Fred; I was wondering why that paper read like a marketing pitch until I noticed that one of the main references for the method was a patent by one of the co-authors.
As Pekka points out below, the more interesting thing is to learn about what we can say about the trajectory of a mixed periodic / stochastic / dynamic system. The periodically forced model begins to explore this by changing parameter values so that the response is chaotic / non-chaotic (notice they don’t suppress chaos by averaging!). As far as “letting the data speak for themselves”: Don’t Bleach Chaotic Data.
Jstults – This is a circumstance in which interested readers should review the data to make their own judgments, so as to remain independent of assertions made during these exchanges. I expect that most will agree with you (and me) that a climate system might be in theory hopelessly chaotic, but will also conclude, as I have, that our climate is not. The unpredictable elements (ENSO, AMO, PDO, etc) do in fact average out fairly completely, and data from past centuries suggest that this property is not unique to current times. Regarding the link I provided to trends and detrending, I don’t see the article as invalidated by the holding of a patent by one of the co-authors, but again, readers can judge for themselves. I found the capacity to extract a climate trend from the data persuasive.
Finally, without revisiting the entire climate change literature for evidence, I would also make a point that perhaps is not adequately appreciated by individuals more familiar with chaos than with climate. The known properties of CO2, solar irradiance, aerosols, water, etc., and their behavior as reflected in the Schwarzschild and Clausius-Clapeyron equations, and as confirmed by satellite, atmospheric and ground-based measurements, tell us that the quantified role of these entities must account for a substantial portion of the observed trends. This does leave some wiggle room for chaotic elements, but the wiggle is limited. The climate harbors chaotic elements, but it is not chaotic.
Fred,
I do not think that one can learn much new using this methodology to find trends. Its objectivity is not real. It makes model assumptions similar to other methods that have been used. The method is based on the assumption that reality is a trend plus shorter-term variations. However these assumptions are made, the results are approximately the same, but they still rest on the assumption that what is presented as a trend really is a trend and not part of other fluctuations. No objective model can get around this problem; the problem is just there.
The alternative is that there are longer-term fluctuations, either regular oscillations or less regular mode changes or whatever. Without better knowledge it is likely that there really are such fluctuations, but we do not know how strong they may be and how much they have influenced the climate of the last couple of decades.
The scientific approach applies thinking similar to Bayesian statistics. It is unavoidable that we must make subjective judgments on what is possible and what is likely before considering the relevant empirical observations. Then we can use every observation to strengthen or weaken our trust in any specific hypothesis. The more empirical data there is, the more weight the observations have in relation to the subjective preconditions, but the influence of the preconditions can never be completely eliminated. At present the preconditions may still dominate the outcome for climate attribution, and the empirical data only modify it to some extent.
Of course you’re right in that, sorry that what I wrote may have suggested that fallacy in the reader’s mind.
I did not; here’s why (also another thing I got a little chuckle from, so thanks again ; – ), from the intro:
Off to a good start; criticism of various ad hockeries is a theme running through much of Jaynes work. “This is promising,” I thought.
From the methods:
Um, yeah, that’s not ad hoc at all…
As Pekka points out below (“never can the influence of the preconditions be completely eliminated”, well stated btw), there’s no magic here.
I’d encourage you to read up more on predictability (I even link some of that lit in the post I mentioned before); it’s an interesting topic. I’d be glad if you added links to parts of the lit you think are relevant to predictability for our climate in the comment section of my little open notebook.
None of the classic simple systems of non-linear autonomous ODEs with constant coefficients which exhibit chaotic response can exhibit a trend in the dependent variables as a function of the independent variable.
All results from deterministic chaotic models leave unanswered the crucial question: how much of the observed behavior depends on the deterministic nature of the model? The real atmosphere is not deterministic, it is stochastic. Models of the atmosphere may be deterministic, but the atmosphere is not. The real atmosphere follows the laws of physics, but the final outcome does not follow from these laws and the initial conditions alone; it is influenced continuously by additional stochastic perturbations. The way these stochastic perturbations enter the process depends on where we draw the system boundaries, but wherever the boundaries are drawn, stochastic perturbations will cross them.
This observation has made me always doubt many results of chaos theory. My intuition tells that many of them are really consequences of the deterministic dynamics, not of the properties of the real process described approximately by the deterministic model. In some stochastic models one may end up closer to the behavior of the ensemble mean of a deterministic model than to the behavior of individual trajectories.
Concerning the predictability of the climate, we do not know. It is possible that the climate statistics are indeed dominated by boundary conditions at the level we are interested in, but it is also possible that they are not. Fred made the statement that the oscillations are really oscillations that can be averaged out at the time scale of interest, but to justify this claim we should have a reason to believe that there are no important variations on a longer time scale. We would have to understand the earth system much better than we do to make such claims.
Actually we know that the earth system has variations on much larger time scales. The dynamics on the scale of tens of thousands of years is not only external forcing and rapid convergence to new boundary conditions. On the contrary there is definitely a lot of internal dynamics involved. I do not see justification for saying that there would be any time scale between the molecular processes and the lifetime of the earth that would not be strongly influenced by some internal processes that are not understood at the level that would allow for making good predictions.
It is quite possible that the internal processes are less important at some particular time scale, but I do not see evidence for stating that this would be true for any of the time scales important for understanding the present issue of climate change. The uncertainty affects our decision making in both directions. It makes it more difficult to know whether we really have any serious problem, but at the same time it makes it more difficult to make any definitive statements concerning the risk of reaching a dangerous tipping point by continuing to add CO2 to the atmosphere.
Pekka – To some extent, my response is the same as to jstults above. There are theoretical reasons for expecting climate unpredictability, but we know from observation that the unpredictability operates mainly on timescales shorter than the multi-decadal or centennial intervals of particular interest to us. On those timescales, predictions have been shown to perform reasonably well for some variables (global temperature), although less well for regional forecasts, hurricanes, or other climate variables. Indeed, as one proceeds from annual to decadal to multidecadal scales, predictability increases rather than decays with time.
You raise the important point as to whether even longer timescales might reintroduce substantial unpredictability. The answer is that we don’t know, but the record again indicates that there may not be reason to expect major surprises on a global scale – regional or hemispheric phenomena are probably different in that regard. We do not yet, for example, fully understand orbital forcing, or why 40,000 year oscillations that dominated in the past have been superseded by 100,000 year domination, but we still have a good idea of what paleoclimatology gives us to expect over the coming thousands of years. We also have an even better idea that if current anthropogenic activities continue, they will modify the future in ways they have modified our recent and current climate.
Raymond Pierrehumbert (RP) and his article were discussed on Climate Clash and a topic came up that I am trying to get my head around.
Tom Vonk (TV) disagreed with RP about the consequences after CO2 absorbs a photon from the Earth surface.
Both agreed that at atmospheric temperatures nearly all CO2 molecules would be in their ground state (translational) and hence ready to absorb a 15 um photon from the surface.
There was agreement also that only about 5% of CO2 molecules would have the necessary energy to re-emit a 15 um photon at the temperature of the colder atmosphere.
RP’s position was that thermalisation took place and the photon’s energy was shared out with N2 and O2 molecules almost instantly by collision.
TV disagreed about the net effect of thermalisation and cited Kirchhoff’s Law of Radiation to argue that the net heating effect did not occur and that a 15 um photon was emitted nearby in a random direction to keep Kirchhoff happy.
RP’s article somewhat confused the picture by agreeing that Kirchhoff’s law had to be complied with.
So to me there seems to be a problem here.
They both can’t be right, or perhaps both are wrong.
Scenario 1.
One 15 um emission for 20 absorptions; result: the atmosphere’s temperature increases slightly, with further emission at longer wavelengths to balance energy. Kirchhoff’s Law apparently not followed.
Scenario 2(TV)
One 15 um photon in and one 15 um photon out.
No net heating effect. Kirchhoff’s Law followed.
Scenario 3.(RP)
One 15 um emission for 20 absorptions; result: the atmosphere’s temperature increases slightly, with further emissions of unknown wavelength to balance energy. Kirchhoff’s Law said to be followed, but there seems to be a problem of how this complies.
I tend to think that Scenario 1 is most likely.
I discussed this situation with Pekka Pirilä on an earlier post, and he might like to give his views.
Bryan,
I do not know the actual ratios, but I know that the CO2 molecules are not all continuously in the ground state; actually a large fraction of them is at any moment in one of the excited states that may emit at 15 um. The excitation energy of these states is close to the typical energy released in a molecular collision. Thus any CO2 molecule in the ground state before a collision has a fair chance of getting excited by the collision, and a molecule in an excited state has an even larger chance of releasing its energy in the collision.
The power of radiation from a particular volume of CO2-containing atmosphere is proportional to the number of CO2 molecules, the share of molecules in states that can fall to the ground state by radiation, and the coupling constant that applies to that particular radiation. In the troposphere the huge frequency of collisions with N2 and O2 almost fully determines the number of CO2 molecules in excited states. The share that ends up in these states by absorbing radiation is minuscule in comparison, and such states release their energy almost always by a collision. For every emission and absorption there are very many collisions that lead to similar transitions between the same states.
The high frequency of the collisions is also the reason for the line broadening. This leads to the Lorentz profile of pressure broadening which is really the broadening caused by frequent collisions.
Kirchhoff’s Law is related to the fact that the coupling constant between two states of the molecule and a photon works both ways. A strong coupling increases the probabilities of both emission, when the excited state falls to the ground state, and absorption, when the transition goes in the opposite direction.
Pekka Pirilä
Thanks for the reply.
……”In troposphere the huge frequency of collisions with N2 and O2 determines almost fully the number of CO2 molecules in excited states.”….
Raymond Pierrehumbert and Tom Vonk both seem to accept 5% excited hence 95% in the ground (translational) mode.
I’m not sure from your reply if you agree with them.
Kirchhoff’s Law in this case seems to be of no use, since 99% of the photon’s energy ends up with the non-IR-radiating N2 and O2.
If, on the other hand, Kirchhoff’s Law is fully complied with, there is no net heating of the atmosphere and Tom Vonk is correct.
Bryan,
Individually, one main vibrational state that corresponds to the 15 um line has an occupation that is approximately 3.5% at the temperature of the lowest atmosphere and 2.5% in the upper troposphere, because the temperature is lower there (the ratio is exp(-E/(kT)), where E is the energy level, k the Boltzmann constant and T the temperature). There are, however, two independent transfer directions, which doubles the total ratio. There may be some difference also due to different rotational states, but I would have to dig deeper to say whether this influences the total ratio.
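A rough numerical check of these occupation numbers, treating the 667 cm⁻¹ bending mode as a single level with Boltzmann factor exp(-E/(kT)); the temperatures below are only illustrative, and degeneracy and rotational structure are ignored, so the result is only of the same order as the figures Pekka quotes:

```python
import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

nu = 667.0 * 100.0   # CO2 bending-mode wavenumber, m^-1 (the 15 um band)
E = h * c * nu       # energy of one vibrational quantum, J

for T in (288.0, 250.0, 220.0):   # near-surface, mid- and upper troposphere (illustrative)
    frac = math.exp(-E / (k * T))
    print(f"T = {T:5.1f} K: exp(-E/kT) = {frac:.3f}")
# ~0.036 at 288 K, falling with altitude; allowing for the two bending directions
# roughly doubles these numbers.
```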
I haven’t actually read Vonk’s comments, but if he believes that Kirchhoff’s law requires each CO2 molecule that absorbs a photon to emit a photon (on average), he doesn’t understand Kirchhoff’s law. The vast majority of CO2 molecules excited by photon absorption are de-excited by collision rather than photon emission, and the result is almost immediate transfer of the energy to neighboring molecules to create a local heating effect – so-called “thermalization”. The emission of photons comes almost entirely from other CO2 molecules excited by collision rather than photon absorption. There is no violation of any law, and the local heating effect is well understood, and in fact, can be measured under laboratory conditions. It may be that Vonk confuses absorptivity with absorption, and emissivity with emission, but then, since I haven’t read what I states, his confusion may arise elsewhere.
Not a Freudian slip, but it should read “I haven’t read what he states”. Except for the typo, I did actually read what I stated before posting the comment.
To elaborate further on what might be an additional claim regarding absorption/emission balance – in an atmospheric layer of infinite thinness (zero height), all emitted photons will leave the layer because by definition, internal absorption of a photon emitted within the layer is infinitely unlikely. In a real layer, some internal absorptions must occur, and their frequency increases with the concentration of CO2 or other absorbers. These internal absorptions create a heating effect via the absorbing molecules that is not balanced by a cooling effect from the emitting molecules. The result is net heating. As the vertical height of the entire atmospheric column where absorbers reside is very great, internal absorptions overall play a very prominent role in GHG-mediated warming.
But when N2 and O2 etc., heated by convection, collide with CO2 why wouldn’t there be a net cooling when it triggers emission? It really is quite simple Fred.
Well, if I had realized it was that simple, I shouldn’t have spent so much time commenting on it, should I?
Incidentally, convection is not a heating mechanism but a heat transfer mechanism.
Tom Vonk is constantly trotting this out and is wrong in both theory and observation, but no amount of discussion will change his mind and it’s a total waste of time even trying. Also, Kirchhoff’s Law says that absorptivity is equal to emissivity, not that emission = absorption.
No one has learned anything here, or from Pierrehumbert’s recapitulation of “The Radiative Transfer Theory,” in capitals to communicate its divine status. The theory ignores convection; it ignores the ideal gas law and the gravity that compresses the atmosphere, and increases the temperature, as a monotonic function (increase) of depth (Pierrehumbert makes the insane claim that “An atmospheric greenhouse gas enables a planet to radiate at a temperature lower than the ground’s” — NO, the thermodynamic lapse rate, depending only on gravitational g and the atmospheric specific heat does that); it ignores the Venus/Earth data that proves there is NO greenhouse/Tyndall effect whatsoever, on either planet. It assumes that anything with a temperature is a blackbody (absorptivity=emissivity), including the surface of the Earth (obscene misunderstanding of basic physics). Pierrehumbert assumes no scattering of IR radiation, because he does not even have an understanding that absorption and re-emission does just that; heat diffusion is beyond his The Radiative Transfer Theory, as his cartoon Figure 1 well demonstrates. Which is less correct: “Half of the radiation is directed south by southeast, and half north by northwest” (see Shakespeare aficionados for the meaning of the latter direction), or “Half is directed back to the surface…”? And having figured that out, what is the effect of radiation from a cooler body on a warmer body? No, to both the Greenhouse Effect and to The Radiative Transfer Theory, as it is applied in climate “science”. It is sheer idiocy on the part of physics today, and of Physics Today as well. Incompetence to the nth power.
I don’t agree that’s an insane claim. An atmospheric greenhouse gas(es) does indeed make it possible for the surface temperature to be higher than the planet’s effective radiating temperature, however the greenhouse gases are not necessarily responsible for the increased surface temperature. As you say, gravity etc all have a part to play.
Judith says:
“I’m really wondering if we can get past exchanging hot air with each other on blogs, learn something, and collectively accomplish something? If you have learned something or changed your mind as a result of the discussion here, pls let us know.”
Yes I think so. Discussions here over the last 6 months have solidified my understanding of RT physics, specifically heat transfer mechanisms between the oceans and the atmosphere which was a sticking point for me and fundamental to the concept of AGW.
Where can we go from here? Over the last 100 years we have added a thick 100 ppm CO2 blanket to our atmosphere and this must cause long term warming at some level. Even though current observation suggests a low sensitivity, temps will surely increase in an unnatural (manmade) way for a very long time. Finding ways to unravel this blanket without returning to the use of stone axes, drastically disturbing quality of life, or wiping out 1/2 the population in the process seems like a good place to move discussion.
Some ideas:
*Restart proven technologies that generate enormous amounts of energy while leaving a very small CO2 footprint (nuclear).
*Reforestation and low impact agriculture techniques.
*Continue to improve efficiency to reduce energy needs (electric motor technology, batteries, solar LED lighting).
* Vast public subsidy of alt energy that has not proven to be economically viable should be avoided as a misuse of available resources (the wind farms of Europe that generate little or no energy, and solar panels in use above 40° latitude, as examples).
I am sure there are many more good ideas that would move us away from fossil fuels without enriching or empowering third world tyrannical dictators.
After more than 100 comments so far, I have the sense that Dr. Curry’s expectations for the thread are being met to an extent that may be less than she hopes, but more than is typical for blogs on climate topics. She has cited Pierrehumbert’s Physics Today article as an excellent description of radiative transfer, and it appears as though the majority here, whether or not in accord with every point Pierrehumbert makes, agree in general that this aspect of geophysics is well understood. There are others who dispute this, and a few others still who choose to introduce extraneous topics to the thread, but by and large, the level of agreement is a good omen for the prospect of moving toward more challenging issues in climate science.
My own perspective is that Pierrehumbert has provided a cogent and accurate perspective on the topic. More important, he (and others elsewhere) are correct in pointing out that much of the basic geophysics (and particularly radiative transfer) operate as expected not only in the laboratory and in models but also in the Earth’s climate system, as confirmed by observational monitoring. Despite climate complexity, expected trends are discernible. Despite the potential for chaotic elements to overwhelm the predictable ones, they don’t (or at least, they interfere at a level amenable to appropriate adjustments). Despite the impossibility of meaningful averaging of global values such as temperature, averaging is neither necessary nor utilized, but rather the mean values and interactions among grid anomalies serve as good estimators of climate behavior, and so on. As pointed out by many, numerous uncertainties still preclude precise estimates of climate sensitivity and other relevant variables, and all these points, of course, remain items for further exchange of views, supported, I would hope, by empirical data rather than exclusively by arguments of a purely theoretical nature.
Above, Hr asked whether Pierrehumbert’s article provided new information. It breaks no new ground, but I wouldn’t be surprised if each of us picked up something we had not earlier known. In my case, two items of interest stood out.
The first was the very large divergence between the mean excited state lifetime for a CO2 molecule to emit a photon and the much shorter intercollision intervals. Although aware of the divergence, I had not appreciated its magnitude (up to 100s of milliseconds for unperturbed lifetime vs less than 10^-7 seconds for collision). Indeed, it is only the very rare CO2 molecule that gets to emit a photon, but CO2 molecules are abundant enough for this process to yield very frequent emissions. Most of the energy, however, goes into local heating of surrounding air, which is therefore extremely efficient despite the low CO2 concentration in the atmosphere (currently about 390 ppm).
The second was the intriguing upward “spike” in the intensity of IR radiance found by both models and observations in the middle of the IR “ditch” in radiance measured at the top of the atmosphere over the main CO2 absorption spectrum region centered at wavenumber 667. The ditch reflects the reduced radiance attributable to the cold altitude at which the IR is emitted – an altitude necessary for the atmosphere to be IR transparent enough (contain few enough CO2 and water molecules) to permit adequate IR escape. The spike, however, is attributable to the fact that at the most opaque wavelength, the required altitude is even higher – in the stratosphere. Because temperature rises with altitude in the stratosphere, the escape altitude is warmer than below, and the IR emission is correspondingly higher.
The spike has interesting implications, some relevant to stratospheric ozone. The stratospheric temperature inversion – warming with altitude – reflects the ability of ozone, generated at high stratospheric levels, to absorb solar UV radiation and thus warm the surrounding air. Ozone depletion due to CFCs was responsible for stratospheric cooling in earlier decades, and the restoration of ozone through the Montreal Protocol has restored some of the warming. Ozone, however, also plays a role in stratospheric cooling consequent to increasing CO2 concentrations. Because IR emission by CO2 becomes more rather than less efficient with altitude in the stratosphere, due to the warmer temperatures, increasing CO2, and thus a warmer escape altitude, permits CO2 to cool more than would the same concentration at lower altitudes. The result is stratospheric cooling – the opposite of the tropospheric warming from increased CO2 in the troposphere. Recently, stratospheric temperatures have been fairly balanced between the warming from ozone repletion and the cooling from CO2 increases, but each of these phenomena can be discerned individually because of the different altitudes at which they maximize.
Other interesting fine points also appear in the Pierrehumbert article, but the above were ones that interested me particularly.
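The “spike” Fred describes can be illustrated with nothing more than the Planck function: at the band centre the emission altitude sits in the warmer stratosphere, and a warmer emission temperature means more radiance at that wavenumber. A minimal sketch, with illustrative temperatures (220 K for the cold emission level in the “ditch”, 250 K for the stratospheric emission level); the real emission altitudes depend on the actual CO2 and temperature profiles:

```python
import math

h = 6.626e-34; c = 2.998e8; k = 1.381e-23
nu = 667.0 * 100.0   # band-centre wavenumber, m^-1

def planck_radiance(nu, T):
    """Spectral radiance per unit wavenumber, W sr^-1 m^-2 per m^-1."""
    return 2.0 * h * c**2 * nu**3 / math.expm1(h * c * nu / (k * T))

B_ditch = planck_radiance(nu, 220.0)   # cold emission level (illustrative)
B_spike = planck_radiance(nu, 250.0)   # warmer stratospheric emission level (illustrative)
print(f"radiance ratio (warm/cold) ~ {B_spike / B_ditch:.2f}")   # noticeably greater than 1
```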
For those interested in CO2-mediated stratospheric cooling in terms of the ability of CO2 to absorb and emit photons, the relevant point is that atmospheric absorptivity in the stratosphere depends mainly on ozone-mediated UV absorption (CO2 has almost no UV-absorbing capacity), whereas stratospheric emissivity at the radiated wavelengths is enhanced by the high emissivity of CO2. Increasing CO2 therefore increases emissivity more than absorptivity – hence the cooling. It can be thought of as a “safety valve” for ozone-mediated warming.
And this is shown by the flat Strat temps for the last 15 years??
Yes, as I explained above, regarding the balance between warming due to ozone repletion (since 1995) and CO2-mediated cooling, but the data showing each operating at a different altitude.
There is no difference in the ozone column values since 1995, so you need to explain why there is a temperature difference.
Maksimovich – After declining for more than a decade, ozone began to recover in 1995, with a slow upward trend as of 2005 –
Ozone Depletion and Recovery
Starting at the same time, stratospheric temperatures, which had been declining significantly, flattened out. However, they did not rise despite the increase in ozone –
Stratospheric Temperature Trends (note that the final article appeared in JGR in 2009 but is behind a paywall).
The failure of stratospheric temperatures to rise in concert with the rising ozone concentrations is an expected consequence of the cooling role of increased stratospheric CO2.
Fred,
based on the RSU temp series I would say you mischaracterized the strat temps. They were stable, volcano step decrease, stable, volcano step decrease, stable till now.
Maybe you can point me to the data that shows they were declining before the early 1980’s and early 1990’s volcanoes that appear to have dropped them??
Volcanoes cause only temporary cooling, followed by a return of temperatures to a warmer level unless there is already a downward trend. Prior to 1995, ozone was declining, and the cooling was compatible with a combination of decreasing ozone and increasing CO2. Around 1995, ozone started to turn upward, and the temperatures stabilized but did not trend upward in line with the ozone. This disparity between ozone and temperature is not probative, of course, but it is supportive of the expected cooling role of increased CO2.
Fred, stop waving at me. Please, a data source.
One good source is the JGR article linked to above – see the various figures (e.g., Figure 5). The data interval covered is informative. Even massive volcanic eruptions in the 20th century have cooled for 2-3 years at most, so volcanism doesn’t account for the temperature declines up until about 1995.
I should have added that volcanic eruptions cool the troposphere, but their main direct effect on the stratosphere is warming due to their ability to absorb solar radiation. A cooling effect due to their ozone-depleting capacity is of lesser magnitude.
Fred, you strike out again. The data in the paper shows a cooling trend without volcanoes during 1958–75 WHEN THE TROPOSPHERE WAS ALSO COOLING!!! During the satellite era the only noticeable perturbations are the 2 volcanoes during what is claimed as exceptional warming. I would also point out that ozone was allegedly dropping since before the 1970’s!! Your data is NOT coherent for what you are claiming.
Sheesh.
There comes a time during exchanges of this type when it makes more sense for readers to review the material (including linked sources) to make their own judgments rather than for the participants to continue to propound arguments. I would therefore commend these recent exchanges to the attention of interested readers, and will be content with their judgment.
Sorry, there is no upward trend in the total ozone column since 1995, e.g. UNEP 2010:
Average total ozone values in 2006–2009 remain at the same level as the previous Assessment, at roughly 3.5% and 2.5% below the 1964–1980 averages respectively for 90°S–90°N and 60°S–60°N. Midlatitude (35°–60°) annual mean total column ozone amounts in the Southern Hemisphere [Northern Hemisphere] over the period 2006–2009 have remained at the same level as observed during 1996–2005, at ~6% [~3.5%] below the 1964–1980 average.
Please see the links I cited for the data from 1995 through 2005. These show a flat temperature during that interval with a rising ozone concentration. I don’t have 2006-2009 stratospheric temperature data, but it’s a much shorter interval, and unless the temperature has risen significantly in those three years, the basic conclusions shouldn’t be affected.
Your misunderstanding is failing to account for the solar cycle (Hale) that exists in the ozonosphere. This is well understood. The expert assessment review is now available at the UNEP and includes data through to 2010.
There is no trend post-1995, statistical or otherwise, in the total ozone column. The increases/decreases in the upper stratosphere and mesosphere are consistent with GCR modulation at solar minimum and the 27-day and 11-year cycles, i.e. GHG is indistinguishable at these levels of natural variation.
The findings are quite succinct
New analyses of both satellite and radiosonde data give increased confidence in changes in stratospheric temperatures between 1980 and 2009. The global-mean lower stratosphere cooled by 1–2 K and the upper stratosphere cooled by 4–6 K between 1980 and 1995. There have been no significant long-term trends in global-mean lower stratospheric temperatures since about 1995. The global-mean lower-stratospheric cooling did not occur linearly but was manifested as downward steps in temperature in the early 1980s and the early 1990s. The cooling of the lower stratosphere includes the tropics and is not limited to extratropical regions as previously thought.
The evolution of lower stratospheric temperature is influenced by a combination of natural and human factors that has varied over time. Ozone decreases dominate the lower stratospheric cooling since 1980. Major volcanic eruptions and solar activity have clear shorter-term effects. Models that consider all of these factors are able to reproduce this temperature time history.
We appear to agree that temperatures have been flat since 1995. The link I provided showed that ozone started to increase at about that time. This disparity is consistent with CO2-mediated stratospheric cooling as an offset to ozone-mediated warming.
You state that the ozone increase is not statistically significant. The UNEP site you refer to did not show a graph of ozone change between 2006 and 2010, but I expect that as you state, the upward trend since 1995 may not be statistically significant. This, however, does not contradict the conclusion, based on the evidence, that CO2 and ozone are cancelling each other out.
As the UNEP site states, “The global middle and upper stratosphere are expected to cool in the coming century, mainly due to CO2 increases. Stratospheric ozone recovery will slightly offset the cooling.”
Thanks, Fred. Actually the first point you mention had seemed to me qualitatively obvious, as otherwise there’s no way for the radiative gases to warm the adjacent non-radiative gases. But as you point out, it’s nice to have some numbers.
Fred Moolten,
Your comments are the most valuable and informative on this thread (apologies to everybody else). I appreciate three things: the way your remarks stay on-topic, the relevant technical information that you convey, the good-humored persona that you assume here, and your willingness to amend your statements on consideration of new information.
That’s four things (I was wrong).
Also (FWIW), I concur with the assessment that you offered as the lead paragraph of this comment (January 20, 2011 at 11:53 am), with respect to Dr Curry’s original question, “can we collectively accomplish something?”
Here’s an interesting old quote on global warming from Herman Kahn, a top think-tank consultant from the sixties and seventies.
At a time when Paul Ehrlich, the Club of Rome et al were predicting that billions would starve, shortages would abound, and the oceans would die — all within a few decades — Kahn was forecasting a coming boom, and the general likelihood that humanity would resolve the challenges posed by our environmental and technological growth to arrive at a steady, sustainable population with a much higher standard of living.
The greatest threat Kahn saw for America was that we would be strangled into collapse by the “educated incapacity” of our liberal class.
IMO Kahn is the only futurist from that period worth re-reading on the merits these days. It is useful in a cautionary way to re-read Paul Ehrlich and the Club of Rome.
Very helpful, hunter. I’d not heard of Kahn.
The thing that’s most bothered me in the last month has been the influence Ehrlich and the Club of Rome types had on international institutions responding to the environmental critique of DDT from the 1960s. This is spelled out clearly, without histrionics, in a new book called “The Excellent Powder” which I heartily recommend. The science against indoor spraying of DDT was a crock (from what I can see). But the doomsters influenced people at a high level to take the view that saving too many lives from malaria was going to be bad for the planet, and effected what amounted to a ban, causing tens of millions to die without cause, mostly the children of the very poor.
The influence of the population doomsters seems to me to be established beyond doubt by Roberts and Tren in The Excellent Powder. It’s a human tragedy – a real travesty, one that deserves much more rigorous treatment as we consider many of the same issues with AGW. This book is a very good start.
Sorry, I thought that this was well off target for this topic, Judith.
The hat tip should go to huxley.
I used to follow Kahn, but was not aware of how he foresaw CO2 and the unimportant role it would play in reality.
Too bad he missed how illiberal reactionaries and fearful ignorance would intersect to create the social mania we have today.
Richard Drake: Glad you like the Kahn piece. I put it up because I thought his take on CO2 from 1976 was straightforward and refreshing.
Kahn didn’t question the greenhouse effect, and he acknowledged that CO2 was a legitimate threat worth watching, but he didn’t head off into the-sky-is-falling territory.
Btw, I’m “huxley”.
‘pologies huxley and hunter – but thanks hux. Yep, impressive balance from Kahn on CO2 in 76. When I grow up I’d like to demonstrate similar levels of foresight and judgement.
Richard D. & hunter: Kahn was amazing, though he’s largely been forgotten and sadly little of his work is available on the web.
Last summer I ran across the transcript of a talk Kahn gave in 1976 to Gov. Jerry Brown and his staff (the first time Brown was governor of California) that was stunningly prescient for today. Kahn spoke of the “New Class”:
Kahn saw the New Class’s growing power, its animosity towards “square” Americans, and its “educated incapacity” that blinded it to thinking outside its own intellectual box as a much more serious threat to America than all the environmental crises that the New Class then, as now, was raising alarums about. Kahn was bang-on in predicting the current disconnect we see between Obama blue Americans and red Americans.
Here’s a Kahn quote I did find on the web that explains “educated incapacity” and, I would say, bears directly on the current climate controversies:
Kahn was a highly educated, physics-trained polymath who admitted that he was New Class himself. He had no problem with global warming as a reality and as a potential threat, but he was steadfast in countering the New Class’s obsession with risk and its hostility towards ordinary Americans.
I like to think my intuition isn’t bad. What you’ve spelled out in a second post on Kahn exactly corresponds with what came into my head when I first heard the phrase “educated incapacity”. This is really important stuff. Thanks for drawing my attention to it.
(The other example that springs to mind is when I asked a friend in 1998 what the big stories had been at OOPSLA in the States, from which he’d just returned. He mentioned two things, the second being “extreme programming”. I’d never heard the term before but I knew at once what it meant. At last was fulfilled what I’d been waiting to happen in software engineering since 1986 or even earlier. So it proved. It’s that Blink stuff. But I now really, really digress.)
Richard D.: You’re welcome!
I’m a programmer and, except for pair programming, extreme programming made sense to me too.
Ha, you intuited the one practice I didn’t foresee. There are pros and cons to co-coding – but I don’t want to argue!
The following summarizes what I think was a “semi consensus” of opinion from the earlier threads on this site regarding IR absorption. It does not answer questions regarding other potential factors affecting climate.
In the case of IR absorption, very little of the science requires much more than fairly basic physics. On that basis, we don’t need to take climate scientists’ word for it; we can work out the whole thing from scratch. If you do this based on the official IPCC story, then, except for minor details that could be nitpicked, you will find the whole thing is pretty much as the science says it is.
One thing that seems to have stumped climate science is why the number for climate sensitivity is all over the map. IMO there are two basic reasons.
Climate sensitivity depends on whether you calculate it from:
1. first principles the way some people like to, or
2. from extrapolation of how the Earth’s surface temperature has actually responded, which is what a few climate scientists (not enough in my view) call “observed climate sensitivity.”
The advantage of the latter is that it takes into account all the contributory factors.
Trying to simulate the whole planet even approximately over a period of decades, even using the most massive digital computers on the planet, is an exercise in group wishful thinking. There are just too many important factors we don’t fully understand yet, for example the rate of heat uptake by, and return from, the deep ocean, the amount of extra cooling generated by evaporation of rain while it falls, etc.
Rob – I’m not aware of attempts to calculate climate sensitivity from actual “first principles”, but you’re right in asserting that some methods are based mainly on modeling the combination of individual forcings and feedbacks to yield a composite estimate of sensitivity, whereas others are more empirically based, relying on historical data for temperature responses to CO2 or other variables.
The former approach, for example, starts with well established values for CO2 forcing, adds estimates for water vapor/lapse rate feedbacks (with some observational confirmation via satellite measurements), factors in ice-albedo feedbacks (again, with observational constraints), attempts to estimate cloud feedbacks despite their uncertainties, and then models the interaction of all these independent entities to yield a sensitivity estimate.
The latter approach is simpler in theory: if we know the forcings and the temperature response, we can divide the second by the first to arrive at a climate sensitivity value.
The problem with the second approach is that it is rarely possible to have a clearly reliable and accurate estimate of all forcings, and often even the responses are somewhat uncertain. An example is the Last Glacial Maximum (LGM), which is a good starting point because the interval is long enough to permit estimations of equilibrium responses – part of the definition of climate sensitivity. Reasonable data exist on temperature, albedo, and CO2, but other forcings such as aerosols are less well quantified, and even the time correspondence between forcings and changes in temperature harbors some uncertainties. At times, attempts to avoid some of the difficulties involve a “reverse approach”, in which different climate sensitivity values are plugged into a model to determine which range of sensitivities is most compatible with historical data on temperature, CO2, and other known elements of the system.
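To make the arithmetic of that empirical route concrete, here is a minimal Python sketch. The LGM forcing and cooling values are illustrative round numbers of my own, not figures from any particular study, and the 3.7 W/m^2 per CO2 doubling is the usual Myhre-style estimate:

# Toy "observed sensitivity" estimate; all inputs are illustrative, not published values.
F_2x = 3.7       # W/m^2, canonical forcing for a CO2 doubling
dF_lgm = -7.5    # W/m^2, assumed total LGM forcing (GHGs + ice albedo + dust); made up for illustration
dT_lgm = -5.0    # K, assumed LGM global cooling; made up for illustration
lam = dT_lgm / dF_lgm        # climate response, K per W/m^2
S = lam * F_2x               # implied equilibrium sensitivity per CO2 doubling
print(round(S, 1), "K per doubling")   # ~2.5 K with these made-up inputs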
All of the above are clearly beset with limitations. In concert, they all tend to yield sensitivity values within the canonical 2-4.5 C range per CO2 doubling, but with some outliers.
A further problem comes from the assumption that climate sensitivity computed for one particular forcing applies equally to others. This assumption may be reasonable for different atmospheric forcings operating over long intervals (CO2, other GHGs, persistent aerosol concentrations, etc.), but is probably unreliable when attempts are made to extrapolate from short term changes originating regionally, for example in the ocean (ENSO events) to long term global sensitivities to atmospheric perturbations such as CO2 increases. Examples of unreliable extrapolation include recent studies based on ENSO-mediated temperature changes, reported by Lindzen/Choi, Spencer/Braswell, and Dessler.
It is not surprising that the 2-4.5 range of 95% confidence limits for sensitivity to CO2 doubling has not narrowed appreciably in recent years.
For an excellent description and references related to some of the above, IPCC AR4 WG1 Chapters 8 and 9 are very useful.
Wrong thread, CJ. I agree with a lot of what you say but that doesn’t make it right on this thread, sorry.
Dr C
What is the role of the greenhouse effect on the weather?
I am looking for a more Trenberthian answer. I know the usual “it will result in warmer nights and increase the average minimums” kind of thing, but that doesn’t speak to what the ‘extra CO2 is going to do’ question.
The greenhouse effect alone would raise the surface temperature to a greater degree than what is observed. This is because weather occurs at the surface, distributing the heat, taking it up, getting rid of it, etc. In order to say what the climatic effects of increased CO2 are, we should be able to confidently say what its effects on the weather are.
Therefore the question arises – what is the effect of CO2 on weather? ‘Natural variability’ is a nonsensical fig leaf because CO2 is well mixed in the atmosphere, present everywhere, and acts via radiative transfer mechanisms which are instantaneous. Therefore CO2 should have a measurable effect on the weather, and such an effect should be identifiable independently (hopefully by clever instruments or experiments).
“On this thread, i’m hoping to declare the basic physics of ir absorption/emission pretty much a closed subject, so that we can move on”
Such a declaration would be a sure indication that it needs further scrutiny.
Andrew
Judith,
I’ve made some progress with people on this topic in blogland myself. Not as much as I would like, but some for sure. There is no question as to whether the basic effects are real; there is only a question of magnitude, and the people who grasp that point are far more effective in making their case on the real issues.
Don’t give up on it. Some see the ridiculous outcomes people call ‘solutions’ and will never be convinced; however, the light bulb does go on for those who can understand the certainty of the basic physics.
Dr Curry
The posts & comments here and the posts at SoD have been invaluable in refreshing my early-60s-vintage undergraduate physics and post-graduate PhysChem. One accepts the basic science. It’s all non-contentious to anyone understanding the physics, the thermodynamics, the math. The effects of feedback (and even the sign thereof!) seem less well, even poorly, understood, particularly once clouds enter the reckoning. Everything thereafter is built on a shaky foundation. The establishment (the ‘believers’) seems to have been taken by surprise by oceanic influences, by weather cycles, and, it seems (one predicts), the sun.
Once politics (or post-normality) enters the scene, activist scientists (I surely need not name names) seem to become prominent. They seem to arrogate to themselves data ‘custodianship’, responsibility for ‘homogenizing’ (‘correcting’?, ‘adjusting’? …. probably harsh to go further) the data. They inevitably, it seems, become evangelizers.
The temperature record is, at best, suspect, even contaminated. IPCC ARs become political tracts (at least the ‘Summar[ies] for Policymakers’). Highly contentious, extreme views are put forth by the icons of the ‘movement’.
The average, scientifically literate Joe (that’s me) says ‘Whoa!’. Even stronger is his unease when urgency is manufactured, when emotional pressure is applied (those polar bears! that little girl!), when financial interests become so blatant and corruption becomes so obvious.
Try this:
(H/T Michael Hammer, June 27th, 2009)
“The corrected data from NOAA has been used as evidence of anthropogenic global warming yet it would appear that the rising trend over the 20th century is largely if not entirely an artefact arising from the “corrections” applied to the experimental data, at least in the US, and is not visible in the uncorrected experimental data record.
“This is an extremely serious issue. It is completely unacceptable, and scientifically meaningless, to claim experimental confirmation of a theory when the confirmation arises from the “corrections” to the raw data rather than from the raw data itself. This is even more the case if the organization carrying out the corrections has published material indicating that it supports the theory under discussion. In any other branch of science that would be treated with profound scepticism if not indeed rejected outright. I believe the same standards should be applied in this case.”
http://jennifermarohasy.com/blog/2009/06/how-the-us-temperature-record-is-adjusted/
You hope for progress? Good luck!
This would only reinforce a point I made:
http://noconsensus.wordpress.com/2011/01/20/what-evidence-for-unprecedented-warming/#more-11278
Judith,
I took a quick look at the Jan Physics Today article, loosely read about the first half due to time constraints and work load. I didn’t have enough time to see what the ramifications were of not going with the Eddington approximation etc. My own version of this is not for climatology but I do have a simple atmospheric 1-d model based on Hitran.
Raymond talks about a continuum in association with Kirchhoff’s law. My understanding is that one really gets continua with optically thick slabs and with particles and dimers. Consequently a cloud droplet produces not only scattering but also absorption and emission in a continuum.
My interpretation of Kirchhoff’s law is that it is valid as a function of wavelength. Note, I use wavelength, not /cm frequency, and I don’t work well upside down. With LTE, which should be valid for most of the stratosphere, if not all, emission becomes a function of the energy states as described by the Planck distribution and of the absorption spectrum of the slab. That means a slab neither absorbs nor emits on net if it is at the same temperature as the earlier one and the radiation is at the same blackbody temperature.
Also, given the geometry near the boundaries that start to become transparent, one has emission upward and downward from the slab but absorption only from below, and conservation of energy then requires a reduction in T if additional absorbing GHGs are added.
Essentially, I found nothing surprising or controversial in the first half of the paper. I also found it marginally interesting and relevant.
Such models are relevant mostly for clear skies and tend to lose meaning in the presence of clouds. Clouds bring in true continuum absorption and emission and, per K&T 97 (Kiehl & Trenberth 1997), cover nearly 62% of the Earth with practically total optical absorption and/or reflection, depending upon SW or LW, as defined by their 3-layer model.
According to my 1-d model, around 120 W/m^2 of absorption occurs using the 1976 US std atm model. I think KT97 claimed around 125 W/m^2. Simply taking the average surface T and Stefan’s law gives 391 W/m^2 of emission, and for radiative balance there cannot be more than around 239 W/m^2 of power escaping Earth. That leaves about 152 W/m^2 of total average blocking which is really occurring. I think KT97 used the number 155 W/m^2. The difference must turn out to be due substantially to cloud cover and other atmospheric effects. Roughly 25 W/m^2 is the cloud blocking contribution for the roughly 62% cloud cover fraction.
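For anyone who wants to check the averages arithmetic, a few lines of Python will do it, using the Stefan-Boltzmann constant and a roughly 288 K mean surface temperature; the 239 W/m^2 outgoing figure is the balance number quoted above:

sigma = 5.67e-8              # Stefan-Boltzmann constant, W m^-2 K^-4
T_surf = 288.2               # approximate global mean surface temperature, K
E_surf = sigma * T_surf**4   # surface emission, about 391 W/m^2
OLR = 239.0                  # outgoing longwave needed for radiative balance (quoted above)
print(round(E_surf, 1), round(E_surf - OLR, 1))   # blocking needed: about 152 W/m^2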
We’ve now pretty much gone past what simple 1-d and radiative transfer modeling can do for us. What’s more, we haven’t even started with what can be done by this simple averages approach.
(btw, I am not RobB.) I’m new to Curry’s blog and so far find it quite good. I have some questions and doubts in the area of radiation transfer. “Well understood” is a relative judgment, and I would not refer to it as “robust” in the common scientific meaning of the word, though in some contexts and at some levels I would agree both terms are accurate.
I need to get back home to my information and need to read through the radiative transfer threads here and SoD (which I deduce means Science of Doom???) to do it right. But I’ll throw out a couple of short examples.
While I think the mathematics is probably pretty close, the forcing formula for CO2 is a tad short on physics theory and observational verification: How was the coefficient (currently 5.43 IIRC) determined? Based on what physics? What is the physics that supports if, when, and why forcing goes from linear to logarithmic to something else? Will it stay logarithmic and 5.43 at 400ppm? 800ppm? 1200ppm? How do they know?
The saturation question is usually answered with, as the concentration (pressure) increases the absorption line broadens. Yet if one does the math, the half-width of the “line” broadens almost infinitesimally even with pressure broadening. And the math and physics of pressure broadening is viewed by many (including 4 or 5 of the textbook authors I’ve looked at) as very complex and difficult with considerable uncertainty. HITRAN lists wide variances for the coefficients. This I would not call robust – pretty good maybe, but not robust.
As an aside, it would seem the spectrum broadening seen is predominantly due to the rotational sidebands (sidelines??).
I may find that I’m commenting wrongly or in the wrong thread; I’ll try to learn quickly. Thanks for the indulgence for my first (and frankly IM_own_O not terribly well written) comment here.
Rod – The logarithmic (approximately) relationship is not primarily a consequence of line broadening. Rather, it reflects the approximately exponential decline in absorption coefficients as one proceeds outward from the center of the CO2 main absorption region at a wavenumber of 667 cm^-1 (wavelength 15 um) into the wings. As CO2 increases, the poorer absorbing wavelengths dictate more and more of the increase in optical thickness, and so the absorption effectiveness becomes correspondingly less powerful. For any foreseeable CO2 concentration, there will always be wavelengths where absorption is low enough for those wavelengths to remain unsaturated. In general, maximum changes with CO2 increases occur at wavelengths where optical thickness (tau) is about one, and these wavelengths depart more and more from the center with increasing CO2.
This is not a universal property of greenhouse gases, and in fact, water vapor follows a pattern less clearly logarithmic. In addition, the logarithmic property disappears at very low or very high CO2 concentrations.
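A toy numerical illustration of the point, in Python; the band shape and absorber amounts are invented for the demonstration and are not real CO2 spectroscopy:

import numpy as np

nu = np.linspace(-300.0, 300.0, 20001)   # wavenumber offset from band centre, cm^-1
dnu = nu[1] - nu[0]
k = np.exp(-np.abs(nu) / 30.0)           # hypothetical absorption coefficient with exponential wings
for n in range(5):
    amount = 50.0 * 2**n                 # arbitrary absorber amount, doubled each step
    A = np.sum(1.0 - np.exp(-amount * k)) * dnu   # band-integrated absorption, cm^-1
    print(2**n, round(float(A), 1))
# Each doubling adds a nearly constant ~42 cm^-1 to A: the quasi-logarithmic behaviour.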
Fred,
The approximately exponential decline in the wings is nicely visible in a logarithmic graph of emissivity which extends several decades towards weaker lines. Unfortunately I do not remember where I saw this graphical presentation, but it might help many in understanding where the approximately logarithmic behavior originates.
Fred M, I appreciate the response, but I fear my syntax created some confusion. My log question (and the 5.43 coefficient) referred to the basic forcing equation — F = 5.43[ln(conc/conc_0)] — and not to the broadening/saturation question.
I still have problems with broadening stuff as in your comment, “…maximum changes with CO2 increases occur at wavelengths where optical thickness (tau) is about one, and these wavelengths depart more and more from the center with increasing CO2.”
My difficulty is (and please bear with me: I’m still away from home and will quote from memory numbers I calculated, so they won’t be accurate but maybe enough in the ballpark to make my point): Doppler broadening (which is pretty clear cut and calculable) will spread the 15um line to about 15.000004um (half-width at half-max). Pressure broadening maybe spreads it to 15.0005um at HWHM, though it spreads much further at lower intensities, to something like 15.01um at 0.1 of max. Even given my memory-guessed numbers there really aren’t any wavelengths that ‘depart more and more from the center’.
Or I’m all wet! ;-)
Rod B,
The main issue here is not the shape of individual lines but the number and strength of the weaker lines related to various rotational states.
It turns out that taking into account lines which are by some fixed factor weaker than lines already included adds roughly an equal number of new lines each time, over a wide range of line strengths. As long as this is a reasonable approximation of what happens, the radiative forcing will increase nearly logarithmically with concentration.
Rod – I don’t know whether you have access to Pierrehumbert’s article, but if you do, refer to Figures 2 and 3 (particularly Figure 2). What it shows is that the main absorbing band for CO2 is centered at wavenumber 667 cm^-1 (wavelength 15 um), but that this region actually consists of hundreds of individual lines at different wavelengths extending in either direction from 15 um. Each line represents a potential quantum transition in a CO2 molecule, the energy of which depends on the particular combination of vibrational and rotational states in which the molecule finds itself. The absorption coefficient indicating the absorbing capacity of a line at a particular wavelength is greatest at the center (reflecting the strength and probability of quantum transitions with that energy content). For lines found further and further from the center, the absorption coefficients decline (approximately exponentially), so that a line at a wavelength fairly far away from 15 um represents photons with energy quanta that are still absorbable by those CO2 molecules whose energy state makes them capable of that absorption, but with much lower probability (i.e., most CO2 molecules are not in a state that will allow them to undergo that particular quantum transition).
At high CO2 concentrations, photons at 15 um are efficiently absorbed, and so the atmosphere is opaque at that wavelength up to high altitudes, despite the reduced CO2 concentration with altitude. Further increases will do little to change the total absorbed energy there. However, photons with energies lower or higher than at 15 um now contribute more and more to total absorption, because even though they are absorbed with lower probabilities, the higher CO2 concentrations offset this reduction. With even further increases, the lines fairly near 15 um themselves become less capable of increasing total absorption, because they now are nearer saturation, and so the location of greatest change moves even further away from the 15 um center. At considerable distances, the absorption coefficients are so low that even extremely high CO2 concentrations would fail to allow all the photons to be absorbed before reaching high altitudes, and so on. This is why CO2 is, for practical purposes, unsaturable at all reasonably foreseeable future concentrations.
Note here the distinction between line broadening, which involves only small percentage changes in absorbing ability, and the shift of the location where absorption is most affected by CO2 increases from those lines at or near the center to the lines further away. The lines themselves aren’t “departing from the center”, but rather the location of those lines (i.e., photon energies) where changes in CO2 concentration make the most difference.
Fred,
any chance of getting some simple number on that?? That is, with a doubling of CO2 the central band is saturated and the wings could add what percentage of what the central band is absorbing?
I don’t think the computations are simple, because they require a set of radiative transfer codes combined with input data over a range of wavenumbers – all that handled with a great deal of computing power. However, the resulting logarithmic relationship has been described in a simple expression – Delta flux = 5.35 ln C/Co, where Co is a CO2 concentration with which a new concentration, C, is being compared. The basis for this estimate is described by Myhre et al 1998
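The expression itself is trivial to evaluate; a couple of lines of Python, with the concentrations just example values in ppm:

import math

def co2_forcing(c, c0=280.0):          # Myhre et al. 1998 simplified expression
    return 5.35 * math.log(c / c0)     # W/m^2

for c in (390, 560, 1120):             # example concentrations, ppm
    print(c, round(co2_forcing(c), 2))
# 560 ppm (a doubling of 280) gives 5.35*ln(2), i.e. about 3.7 W/m^2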
For a more qualitative and less rigorous treatment, but one that is more intuitively understandable, Pierrehumbert’s description of a few years ago is informative –
CO2 and Saturation
OK Fred, this was a particularly wasted amount of time. The one paper said NOTHING about line spreading, and the link talked about how it was real but, in a tube experiment, only reduced transmission by about 1%, if I understood it.
This is more gobbledygook. The fact that it is almost impossible to saturate the wing lines tells us that the effect is so minor that it is silly to discuss it against the main effect, which has had no discernible effect on the earth’s system.
You can rest your arms now if you don’t have any hard data.
Fred and Pekka, I greatly appreciate your responses. It sounds like pressure broadening per se is not the primary factor in saturation (unless the term is misused: as I understand it pressure broadening refers to a particular physical process that actually changes the energy level of say the first vibration mode in a single molecule through molecular collisions (more or less…).) You all seem to be saying that it is the side rotational energy lines that increase their probability of absorption (each rotation line corresponding to a quantum energy state) as the concentration is increased. I find this interesting and will have to mull it over and do some more research (including reading the rest of the thread) before I can comment further.
Just for the record (to fix my earlier comment), I calculated HWHM true broadening at 15um to be 0.00006um (Doppler) and 0.02um (pressure, at STP), though the latter can vary quite a bit.
Judith: I’m surprised you recommend this article. Figure 1 shows the usual slabs of atmosphere with emission (at a given wavelength) depending only on temperature, not the amount of GHG in the slab. Slab models such as these give the mistaken impression that increased GHGs give increased absorption without increased emission (a dilemma that frustrated me for a long time). However, this situation only applies to optically thick slabs of atmosphere, and the Earth’s atmosphere is not optically thick at many relevant wavelengths. This is especially true at and above the tropopause, above almost all of the water vapor and at least three-fourths of the well-mixed GHGs. Here, the assumption that the atmosphere is optically thick appears grossly wrong, and convection doesn’t provide whatever cooling is needed to maintain a stable lapse rate. Unlike Pierrehumbert, most climate scientists writing about slab models usually include the term “optically thick” somewhere in the text – often, of course, without explaining how this limitation applies to our atmosphere. Pierrehumbert has omitted the phrase. Pierrehumbert closes this section by implying that one can take infinitely thin slabs of optically thick atmosphere and obtain the fundamental physics described by Schwarzschild’s equation! (Why can’t climate scientists admit that the radiation emitted by atmospheres is fundamentally more complicated than the radiation emitted by black- or gray-bodies?)
Pierrehumbert goes on to say: “The top panel of figure 3 compares global-mean, annual-mean, clear-sky spectra of Earth observed by the Atmospheric Infrared Sounder (AIRS) satellite instrument with spectra calculated after the radiative transfer equations were applied to output of a climate model driven by observed surface temperatures. The agreement between the two is nearly perfect, which confirms the validity of the radiative transfer theory, the spectroscopy used to implement it, and the physics of the climate model.” Hasn’t Stainforth proven that different climate models – which give radically different predictions about climate sensitivity – are consistent with observations from space? So how can AIRS data “confirm” the physics of a climate model? (If we MEASURE the temperature and composition of a clear atmosphere at various altitudes, the observed upward and downward radiation do agree reasonably well with theory. I’m not sure we can accurately reproduce emission from all types of clouds.)
Finally, Pierrehumbert manages to discuss planetary temperature while only mentioning the word “convection” twice, once in the phrase “turbulent convection”.
(If I have gotten any of my facts wrong, I would appreciate being corrected.)
Frank – I believe you misinterpret Pierrehumbert’s article, which is accurate but focused only on radiative transfer. As a result, convection, which he addresses in detail in his book, is not discussed. The article does not claim that increased absorption is accompanied by no increase in emission (just the opposite; the fundamental equations depend heavily on emissivity), nor does it claim that the atmosphere is optically thick at high altitudes. In fact, the AIRS-derived spectrum demonstrates how optical thickness varies according to wavelength and how it ultimately declines to low values even at the most opaque wavelength, which in the case of the spike in the center of the CO2 region is not reached until the stratosphere.
Admittedly, the article is a synopsis. For a detailed mathematical treatment, you will need to read the book.
I would add that the Pierrehumbert article is unrelated to climate sensitivity modeling as discussed by Stainforth. The differences among models are based on factors outside of the radiative transfer equations discussed in the article.
If x^2 - 3x + 2 = 0, we can’t say that x = 2; x might be 1. Just because one climate model reproduces the AIRS data, we can’t know whether that model is the only one that can. Do we have any idea how far we can perturb a climate model and still produce a good fit to the AIRS data? I’m under the impression that Stainforth has found that many different models are compatible with data like that from AIRS.
Proof that the radiation modules of our climate models are correct begins with their ability to accurately reproduce observed downward (and upward?) radiation from a wide variety of atmospheric situations whose humidity, temperature (and clouds?) have been probed by radiosondes. Do they?
The radiative calculations are well understood physics. They are accurate, when the state of the atmosphere is known accurately. All kind of spectroscopy and remote sensing measurements have proven this beyond reasonable doubt. There are also numerous measurements of the atmospheric radiation and they agree with the models to the extent the state of the atmosphere is known (usually they are made to get information on the state of the atmosphere, which means that they are not a strict test of the radiative calculations, but the consistency is still evidence.)
The radiative calculations can be tested experimentally by laboratory measurements, which often also cover pressures not present in the atmosphere; extending the pressure range leads to more stringent tests of the theory, in particular concerning the line shapes, which are the least well known part of the theory.
Fred – Thank you for your comments; they helped me go back and see where I misinterpreted some things (and misinterpreted is probably too kind a term given the severity of the mistakes I made). The use of emissivity as a constant in some situations, but as a variable here, got me off track. I’ve seen slab models that were correct only for optically thick slabs and jumped to conclusions. There is one sentence (more than a full screen below the model) that explains that emissivity is proportional to the number of “absorber-emitters”. The equation ΔI = e[−I + B(ν,T)] seems to be lacking a term for a distance increment (unless that is buried in the emissivity term). Even further below the model, I now see optically thin and thick slabs described in somewhat unfamiliar language (e_ν ≪ 1, “sufficiently extensive isothermal region”). From my perspective, it would have been preferable to express the emission term as the product of an emission/absorption constant, the concentration of GHG present (usually the product of the pressure and the mixing ratio), the Planck function, and an incremental distance, but that is no excuse for my carelessness.
Any synopsis is a summary of major points. This article discusses some aspects of atmospheric physics most relevant to the case for AGW and neglects others.
actually Fred, Frank,
the optical thickness and exponential decay are what is going on; Beer’s law, as I recall. These slabs are extremely thin, optically speaking, except for strong lines. Implementing HITRAN in any sort of a reasonable approach means creating a line-width function for each line. That function includes the partial pressure of the molecule of interest, the partial pressure of the rest of the atmosphere present, the temperature, and the contribution at each wavelength from each line of each isotope of each molecule present, up to 39 different molecules. It also cannot be used in real time in a GCM, but it does give one a spectrum potentially with higher resolution than any ever done with instrumentation.
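In case that bookkeeping is unfamiliar, here is a stripped-down Python sketch of the idea. The two “lines” and their strengths and widths are invented placeholders rather than HITRAN entries, and a real calculation would also need Doppler (Voigt) shapes and temperature scaling:

import numpy as np

def lorentz(nu, nu0, gamma):             # pressure-broadened line shape, HWHM gamma
    return (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)

nu = np.linspace(660.0, 674.0, 5000)     # wavenumber grid, cm^-1
p_ratio = 0.5                            # pressure relative to 1 atm (toy value)
lines = [(665.0, 1.0, 0.07), (669.0, 0.3, 0.07)]   # (centre, strength, width at 1 atm); invented
k = np.zeros_like(nu)
for nu0, S, gamma_air in lines:
    k += S * lorentz(nu, nu0, gamma_air * p_ratio)   # width scales with pressure
tau = 0.5 * k                            # optical depth for an arbitrary absorber amount
transmission = np.exp(-tau)              # Beer-Lambert decay along the path
print(round(float(transmission.min()), 3))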
As for the co2 empirical log equation, it’s ok over a short range if you like that sorta thing at all. I get similar numbers as what is considered commonly acceptable at the tropopause looking down, roughly 3.6 w/m^2. The real problem is this is only for clear skies, as I mentioned above in a recent post.
CBA – Beer’s Law does dictate exponential decay for monochromatic EM radiation as a function of path length, but that is not the source of the exponential decay operating to render CO2-forcing relationships approximately logarithmic. See my above comments and those of Pekka for more details. It is interesting that Pierrehumbert’s new book goes into some detail in explaining this.
The 3.7 W/m^2 value now considered the best estimate for CO2 doubling is based on all-sky rather than clear sky calculations – Myhre 1998
There are no data I’m aware of to suggest that the logarithmic relationship does not operate over a large range of CO2 concentrations, including those reasonably probable in coming centuries. I have not seen a rigorous analysis, but I believe there is more than enough unsaturation in the wings of the CO2 bands to maintain such a relationship.
I forgot to mention that the 3.7 W/m^2 value is derived by multiplying the value of 5.35 for the constant in Myhre’s Table 3 by ln 2.
Having come up with the value using a 1-d HITRAN-based model, I’ve seen the results and they are only sort of log. Log would mean each doubling (or halving) would result in the same increment of power, 3.7 W/m^2. After about 8 or 10 halvings, that value is reduced to about 2.7 W/m^2, roughly the same amount as the TOA reduction for a CO2 doubling instead of at the tropopause. Total contribution to the atmospheric blocking by CO2 is under 30 W/m^2 (over the range of 0.2 to 65 microns).
As for all sky, I’ll believe that when I see it calculated, and not with some GCM time-iterative video game. Above cloud tops the pressures are lower and the line widths are less. Also the drop in T is less because, higher up, T rises again. Blocking is associated with all of these factors, and despite lower T values for cloud tops than for the surface, emissions are going to be continuum, not spectral. I just do not believe that these factors will permit the CO2 to have the same effect as in clear skies.
” I just do not believe that these factors will permit the co2 to have the same effect as in clear skies.”
It doesn’t have the same effect as in clear skies, but the calculated value of 3.7 is an all-sky value, not a clear sky value.
Everything I’ve seen says it’s a clear-sky value, including my own 1-d model results. Archer’s MODTRAN calculator set for the 1976 std atm and 12 km (~tropopause) shows about 3.5 W/m^2 vs mine, which is about 3.6 W/m^2. At the top of the atmosphere, that number drops by a W/m^2 for mine at 90 or 120 km. I don’t think the MODTRAN calculator goes beyond 70 km. Tossing in a cloud for the MODTRAN calc also results in a drop of about a W/m^2.
The 1976 std atm is not really an average but more of a most typical value.
Ron Cram says on 1/20 @ 8:38 am:
“Dr. Curry,
You mentioned Slaying the Sky Dragon. I have not read it, but would be interested in reading your review. A well-written review, describing points of agreement and disagreement, can truly advance understanding of the science. I hope to read a lengthy review of the book here. It’s one vote. I hope you will consider it.”
I would also be interested in such a review. There are some interesting thoughts in the book. Some “real physics” that disagrees with some of the other “real physics.”
Jae, I originally intended to review the book; then I read it and decided not to. It is not a serious book IMO; it is a collection of thrown-together essays, and there is so much to rebut that it would take me weeks to do it. There are a few interesting albeit speculative thoughts, but they are overwhelmed by highly flawed material.
That’s an excellent review. You got’r done quicker than you thought you could.
Judith: I do hope SOMEONE offers more than the vague arm-waving you did here. How about just a few facts? Some snippets on some of the “flawed material.”
BTW, I know this is heresy here, even among self-proclaimed “skeptics,” but the GHE is purely speculative, since there is no empirical evidence for it :)
Dr. Curry and Fred,
I would like to make an observation here. Earlier Fred claimed that Ray’s paper demonstrates that the theory is confirmed by observations at the top of the atmosphere. But aren’t these observations at the top of the atmosphere the very same observations Kevin Trenberth was referring to regarding his “travesty” about “missing heat?” I remember Kevin saying the observation system must be wrong.
So, how is Ray’s paper any help when the theory is confirmed by observations known to be wrong?
It’s a legitimate question, Ron, which I would answer as follows. It is not clear whether the TOA observations relevant to Trenberth are the source of the “missing heat” problem. In fact, if there are errors in TOA flux measurements, they appear to relate mostly to reflected solar shortwave radiation rather than outgoing longwave IR.
However, even if the latter is inaccurate enough to generate the missing heat problem, that would be because a very tiny percentage inaccuracy looms large when one tries to determine a small difference between two large quantities (incoming solar and outgoing IR). The same error (less than 1 percent) would be of minimal importance in terms of confirming the radiative transfer computations.
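The “less than 1 percent” point is just this arithmetic, using the rough flux magnitudes already mentioned:

flux = 340.0        # order of magnitude of the TOA fluxes, W/m^2
imbalance = 0.9     # W/m^2
print(round(100 * imbalance / flux, 2), "%")   # about 0.26% of either flux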
Fred,
Thank you for the quick reply. I’m not sure I buy it. If I remember correctly, Trenberth’s observations would lead one to think the planet is warming six times faster than the IPCC says it is. This is not a minor discrepancy.
No, Trenberth’s data do not differ from IPCC by sixfold. His TOA imbalance is 0.9 W/m^2, whereas the fluxes in and out are of the order of 340 W/m^2. Even if the 0.9 disappeared completely, and even if that were because the outgoing flux was the inaccurate one (unlikely), the percent error would be very small. Trenberth, in fact, appears to think that much of the error may reside in an inability to account for heat content stored in the deep ocean, but whether that’s correct is very speculative.
Ron – in looking at your comment, I believe the “sixfold” difference you refer to does not relate to the IPCC but to the difference between Trenberth’s modeled value of 0.9 and some of the extreme CERES observational data several times that amount. The CERES data must certainly be wrong, but it is also very probable that this involves their difficulty in correctly assessing shortwave solar radiation reflected from clouds. The outgoing IR measurements are much more reliable.
I know this is esoteric, but part of the problem with CERES-derived values of incoming solar radiation may lie with apparent errors, recently corrected, in the value of the solar constant, such that the corrected value is now about 1360-1361 rather than 1365 W/m^2. If the earlier values were truly too high, the CERES-based calculations of absorbed heat would be too high. This would probably have little effect, however, on the Trenberth value of 0.9 W/m^2 TOA imbalance, and wouldn’t completely close the energy budget to Trenberth’s satisfaction.
Fred,
Can you direct me to the paper which corrects the CERES measurements and explains the error?
It will have almost no effect on the 0.9 W/m^2 value as it was adjusted to model estimates:
From Trenberth, Fasullo, Kiehl, “Earth’s global energy budget”, AMS 2008:
“The Clouds and the Earth’s Radiant Energy System (CERES) measurements from March 2000 to May 2004 are used at TOA but adjusted to an estimated imbalance from the enhanced greenhouse effect of 0.9 W m-2.”
and later
“As noted in section 2, the TOA energy imbalance can probably be most accurately determined from climate models and Fasullo and Trenberth (2008a) reduced the imbalance to be 0.9 W m-2, where the error bars are ±0.15 W m-2.”
Trenberth’s more recent paper in 2009 or 2010 describes this. The CERES data were adjusted using some value associated with TIM on SORCE before the measured value from TIM became the accepted one. The value Trenberth used was something like 1365.2 instead of 1365.4 W/m^2, as shown in his table. He didn’t use 1360.8 W/m^2. The difference, when averaged over Earth’s surface, turns out to be 0.8 W/m^2 less for the new TIM value compared with the originally accepted value. That is also with the assumption that the albedo-reflected power must be reduced.
This is effectively enough to eliminate the 0.9 W/m^2 imbalance Trenberth assumed existed.
As an aside, it would be really helpful to understand the source and methodology for each element of the energy balance diagram. I am still unclear about what aspects are measured and what are calculated or modelled.
The correction for solar irradiance does not eliminate the 0.9 W/m^2 imbalance, and probably has little influence on it. The 0.9 value is from model estimates rather than CERES measurements, and if the solar input is reduced in the models, the outgoing flux will also be reduced, so that an imbalance remains.
So you are saying the models are not connected to reality??
No, what I was saying is that the models use observational data as inputs and then compute outputs on the basis of the physical principles utilized (conservation of energy and momentum, heat capacity, etc.). The computed value of outgoing radiation will depend on the input, and so if the inputted solar radiation is corrected downwards, the computed value for outgoing radiation will also be reduced. The TOA imbalance represents the difference between the two, and so it is unlikely to be eliminated simply via the corrected input. I can’t estimate whether it will change, but if so, it would be by an extent smaller than the 0.8 W/m^2 correction.
As I understood it, the models provided 0.85 W/m^2 and utilize 1365.4 W/m^2 as TSI, the old accepted value. The table in Trenberth’s 09 article uses 1365.2 W/m^2 and claims to be using CERES slightly corrected somehow by TIM. And the imbalance is given as 0.9 W/m^2, and I think the paper mentions that the calculation from measurements agrees very closely with the model.
In either case, changing the value from 1365.4 to 1360.8 W/m^2 results in a significant difference. What is uncertain is whether this correction affects the albedo measurement also. The difference is 0.8 W/m^2 if the albedo-reflected power is corrected too, and 1.15 W/m^2 if it does not need the adjustment.
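The arithmetic behind the 0.8 versus 1.15 W/m^2 figures is simple enough to spell out; the 0.3 albedo below is just the usual round number, not a value taken from either paper:

tsi_old, tsi_new = 1365.4, 1360.8         # W/m^2, old accepted value vs the TIM/SORCE value
d_incoming = (tsi_old - tsi_new) / 4.0    # spread over the sphere: about 1.15 W/m^2
albedo = 0.3                              # nominal planetary albedo (round number)
d_absorbed = d_incoming * (1.0 - albedo)  # about 0.8 W/m^2 if the reflected SW scales down too
print(round(d_incoming, 2), round(d_absorbed, 2))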
In either case, model or measured, this correction is going to have the same effect, a reduction of at least 0.8 w/m^2 because the initial numbers are off.
The modeled imbalance utilized the uncorrected solar irradiance, but correcting it in the model won’t have the same effect as correcting the actual flux measurements, for the reason stated above. If only the solar flux is corrected, but the measured outgoing IR is not, the difference drops by about 0.8 W/m^2, because only one of the two fluxes is changed. Making the correction in the model will reduce both incoming flux (derived from measurements) and the outgoing flux (computed from the model), and so the imbalance won’t disappear.
Fred,
Although GCM code is not something I have time to dissect, outgoing LW is not tied to incoming but rather only to temperature and atmospheric transparency. Unless the GCM is totally non-physical, it should not be connecting incoming SW to outgoing LW. Outgoing SW is a different matter entirely, as that is albedo.
Are you saying that the quantity of solar radiation that is absorbed by the surface and atmosphere does not affect the calculation of OLR? During a forcing, much of that energy does not appear immediately in the form of a temperature increase but rather as stored ocean heat that has the potential to increase SST and OLR. You may be right, but I’d be interested in evidence for your statement. I noticed that the Lean et al article on the corrected solar flux did not claim that their correction eliminated the 0.9 (or 0.85 in their paper) imbalance, even though they pointed out that the magnitude of the correction was about the same as that of the imbalance (and of course of the opposite sign). Perhaps they were simply being cautious.
Fred,
I’m saying that. As you noted, they stated the magnitude of 0.8 W/m^2, while the actual value for the incoming radiation alone is 1.15 W/m^2; when you subtract the albedo contribution from this (which may or may not be derived or measured from the incoming), one then gets the 0.8 W/m^2. They also note that the old value of 1365.4 W/m^2 is used in all models.
Whether this was caution or a compromise to get through peer review I have no idea; the implication was clear to me. It might also be that dissecting model code, or running the programs to determine whether this would actually be the result, was beyond the scope of their project, so they opted not to make the claim.
The problem of TIM reading values about 5 W/m^2 below the commonly accepted value is not a new discovery; only the validation of TIM being right is new. The claims that TIM was correct and the former best estimate was wrong were not being made until the validation process took place.
It may be that some GCMs do not correct for the imbalance once they have had the new value put in. If this is the case, it may well be that those GCM codes have problems, are not correct, and do not actually provide a legitimate difference; for instance, the prospect exists of hard-coded values for TSI in some.
To answer your first question, I’m saying yes, OLR is not related to incoming solar other than that over the long term there must be energy balance. OLR is only related to surface and atmosphere temperatures and the ability of the OLR to pass through the atmosphere.
Most ocean heating will occur due to incoming SW that can penetrate the surface. Additional incoming LW to the surface (such as that caused by more GHGs) will only heat the surface, and that will be reflected in an immediate T increase or in increased evaporation-cycle activity. Besides, the need for missing heat to be warming the lower ocean no longer exists with the current measurement calculations.
The evidence I’m providing is really what is in the paper and the application of rational thought to it.
Also, there’s no reply button on your comment, so I’m not sure what to do.
Trenberth, Fasullo and Kiehl (TFK 2009) describe the logic as follows:
Based on the hypothesis that the fluxes were in balance before the GHG levels were increased, and on the estimated climate sensitivity, they accept the value 0.85 W/m^2 of Hansen et al for the TOA imbalance and present it with one decimal as 0.9 W/m^2. The CERES data and its analysis lead to an imbalance of 6.4 W/m^2, which they consider to have significant uncertainties. These uncertainties were discussed in more detail in Fasullo and Trenberth (FT 2008), where the annual cycle and geographic distribution were examined. As the value 0.9 was considered to be much more accurate than the direct analysis of the CERES data, they made adjustments to the analysis in order to reach consistency.
This is not a calculation of the value 0.9, which is an input constraint, not a result. Changing other inputs like the solar SW heating does not affect this input constraint at all, but it appears to help in reaching the consistency, because the starting value of 6.4 from the CERES analysis would be lowered and the need for adjustments is smaller. The change in solar SW is, however, not large enough to make adjustments unnecessary or to change their direction.
The observed radiative imbalance can be used to determine the TOA imbalance only when the data coverage is more complete and the calculations that link the actual observations to the full annual imbalance for the whole earth are more accurate. The uncertainties should be reduced to roughly one tenth of the present uncertainties to make this approach really valuable for the determination of the earth’s energy balance.
There are comments referring both to the Hansen model value and to adjusted CERES data. Table IIa shows the imbalance and the incoming/outgoing/albedo reflection that is used to calculate the imbalance. If they are only using Hansen’s value rounded off, then which other numbers are they accepting from Hansen, since they add up to the 0.9 imbalance? Note that each column of Table IIa is used for this. The ASR is the difference between TSI and albedo-reflected power. The outgoing LW is subtracted from the ASR to give the balance.
So which of these values came from Hansen, or were fudged to work with the Hansen value, rather than a correction being done to the CERES data (F&K08a)? In any case, the incoming solar is listed at 341.3, which corresponds to 1365.2 W/m^2, just under the old accepted value of 1365.4 W/m^2. If you use the new value in that column and reduce the reflected albedo power to the new value, the result will be an imbalance reduced by 0.8 W/m^2.
cba and Pekka – I have traced the source of the model-derived imbalance to Hansen – Science 2004
The imbalance is computed by comparing imbalances due to forcings from changes since 1880 in GHGs, solar irradiance, aerosols, volcanic eruptions, and similar phenomena with the magnitude of the climate response to those imbalances as computed from temperature increases. The imbalance of 0.85 W/m^2 is the extent to which the response fell short of eliminating the forcing-derived imbalance over that long interval.
The solar contribution is only one of many. More importantly, the modeled imbalance is not computed directly from the magnitude of solar input but from the magnitude of its change over time. This would imply that a small percentage correction in the value assigned to solar irradiance itself should have little effect on how much the value changed since 1880. I interpret this to mean that the correction probably has no appreciable effect on an imbalance calculated via the Hansen model used by him, Trenberth and others as an estimate.
I want to interject, in case it helps the discussion, that to warm 500 m depth of ocean by 0.15 C per decade takes about 1 W/m2. (The atmospheric heat capacity is minimal in comparison). Of course, I chose 500 m carefully to match these numbers. There is no particular reason why this depth would be representative of the warming layer.
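That back-of-envelope number is easy to check; a short Python sketch using typical seawater properties (this is just the arithmetic, not a claim about where the heat actually goes):

rho, cp = 1025.0, 3990.0          # seawater density (kg/m^3) and specific heat (J/kg/K), typical values
depth, dT = 500.0, 0.15           # layer depth (m) and warming per decade (K)
seconds_per_decade = 10 * 365.25 * 86400
flux = rho * cp * depth * dT / seconds_per_decade
print(round(flux, 2), "W/m^2")    # about 0.97, i.e. roughly 1 W/m^2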
so Fred,
do you really think Hansen’s model can actually forecast or hindcast an imbalance without knowing how much radiation is entering or leaving the atmosphere?
It appeared that Hansen’s paper made the assumption that there was no adjustment for the imbalance other than surface T. There appeared to be other potentially serious problems as well, such as not dealing with a variable albedo.
Ultimately, it’s all a computer simulation with approximation upon approximation of only partly understood relationships and dynamics.
According to the Kopp & Lean paper, all of the models are using 1365.4 W/m^2. As for responses, especially from the ocean, where does Argo stand with the more modern measurements? Are you ready to dismiss Douglass et al.?
The Fasullo and Trenberth paper (Journal of Climate, 21, 2297-2312 (2008)) lists in Table 1 possible adjustments to the CERES analysis, taken from a 2006 paper by Wielicki et al, for reaching consistency with the Hansen estimate of 0.85 W/m^2. They give 10 possible adjustments to SW and 6 to LW components. These possible adjustments add up to a maximum of 6.4 W/m^2, while the discrepancy that they believe to exist is 5.5 W/m^2.
Their reference to the Wielicki et al conference paper leads to a short abstract, which is of little help.
Pekka,
“Based on the hypothesis that the fluxes were in balance before the GHG levels were increased…”
This is a flawed hypothesis as all relevant information would indicate we have been warming since the Maunder and Dalton Minimums.
There goes a little bit of their imbalance. 8>)
I used the formulation “based on the hypothesis ..” precisely to indicate that it is not strictly true, but the point is that it was considered useful to add some constraint, when the unconstrained data were clearly more inaccurate than the differences between alternative constraints. What is known is that the imbalance is not significantly more than 1 W/m^2. Beyond that fact there is some arbitrariness in the choice. FT 2008 and their later papers chose 0.9 and presented their justification.
That’s all. They could have explained this more clearly in the later papers read by a wider audience.
Pekka,
that table is the list of changes made to the CERES data. It appears that the 1365 W/m^2 TSI is being adjusted by 1 W/m^2.
Rt is the net (in minus out), where in is TSI and out is albedo plus LWIR emissions at the TOA.
The TSI was not adjusted to the new TIM accepted value, as the table indicates only a 1 W/m^2 adjustment to TSI.
I see nothing in the paper on a quick review to indicate that the Kopp & Lean paper TSI change is not applicable. That means the 0.9 W/m^2 is going to be essentially 0 once the correction for TSI occurs.
The 0.9 W/m^2 modeled value for TOA imbalance will be little if at all affected by the TSI correction, because it is not based on the magnitude of the TSI. On the other hand, CERES-based imbalance estimates will be reduced by about 0.85 W/m^2 (by subtracting the reflected albedo fraction from the correction and dividing by 4). However, those estimates are much higher than 0.9 to start with, and even with the adjustment will probably remain above 0.9.
The Kopp and Lean paper refers to CERES data adjustments, but aside from stating that the modeled imbalance (0.85-0.9) has about the same magnitude as the adjustment, it does not imply that one should be subtracted from the other.
It is unfortunate that this misunderstanding of the papers of Trenberth et al persists. They do not calculate the 0.9 from TSI and CERES observations. They assume the value 0.9 and modify the data based on CERES observations, forcing the numbers to agree with the imbalance of 0.9. That is what they did with old data and what they would do with new data. The modifications to the CERES-based data will be different with new data, but the value 0.9 is not affected, as it is assumed.
Just how do you get the imbalance for incoming SW versus outgoing SW (albedo) +LW(emissions) without dealing with:
1. incoming TSI magnitude
2. albedo
3. radiated emissions
?????
That is where the imbalance, Rt, comes from.
It has to be in any model and in any measurement other than a differential measurement, and one would have to differentiate between apples on one side and apples and turnips on the other. I don’t believe such a sensor exists. If CERES has one and it only reads 6 W/m^2, then the kludges used to fix it use the TSI.
This TSI information does appear in all papers mentioned.
Please explain if you know exactly how Rt can be determined without a value for TSI.
I know only, what I have read from the papers of Trenberth, Fasullo and Kiehl.
The idea is to use all information available. When it is done without adjustments they ended up with an imbalance of 6.4 W/m^2 at the top of the atmosphere. The uncertainty of this value was known to be approximately equal to its value on the lower side. Thus this analysis tells that the imbalance is likely to be positive (warming) and that it is not likely to exceed a value of roughly 10 W/m^2 (the uncertainty is not fully symmetrical and I do not know the upper limit).
They could have published the various components of heat flux on this basis, but they felt that the estimate of an imbalance of 0.85 W/m^2 presented by Hansen et al is good enough to be used as an additional constraint on the data. Thus they looked at the data and the uncertainties in its various components, making adjustments that lead to the final imbalance of 0.9 W/m^2. The idea of this exercise is not to justify the value of 0.9, and indeed it does not give any additional support for this value. The idea is to force the other numbers to values that represent the various factors in a more consistent way – more consistent in the sense that the imbalance at TOA has a possible value not likely to differ much from the real value that they cannot determine.
One of the adjustments did concern TSI. It was adjusted from 1365 to 1361 W/m^2 reducing the remaining need for adjustments by 1 W/m^2. Other adjustments were applied to absolute calibration, spectral correction and several other factors influencing the analysis.
Admittedly the paper is very muddy at that point, but adjusting from 1365 to 1361 is 4 W/m^2, not 1 W/m^2. At the time, 1361 W/m^2 was not accepted as being valid; however, it looks like they considered a 1 W/m^2 reduction to be plausible. If that were really 1 W/m^2 as averaged over the entire globe’s surface, then that puts the 6 W/m^2 discrepancy at 2.5% rather than 1/2 of a %. That is just paying lip service to the notion of some sort of measurement. How about a new error measurement? 0.85 +/- 0.15, give or take 6.5 W/m^2???
If that is the case, then the 0.85 is solely a model number, which is again subject to the model using 1365 W/m^2 and not the new 1361 W/m^2, and if it is a legitimate model, that means the 0.85 +/- 0.15 is going to become 0.05 +/- 0.15.
I thought this whole series of papers was about measurements of real physical properties. And evidently, it is not.
The reason that the 0.85 W/m^2 flux imbalance (currently 0.9 W/m^2) estimated by the model is not reduced perceptibly by the TSI correction is that it is not the magnitude of TSI that is used by the model, but rather solar forcing – i.e., the change in TSI since 1880. That change will be affected only minimally by a very small percentage reduction in the value assigned to TSI itself –
Hansen 2004 – Science.
The adjustment must be divided by 4 because the values 1365 and 1361 correspond to the cross section (pi * R^2) of the earth, but the other numbers are for the whole surface, which is four times the cross section (4 * pi * R^2).
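A small numerical illustration of that sphere-versus-disk factor of four; the 0.30 albedo is an assumed round value, not taken from the papers:

# A 4 W/m^2 TSI change spread over the sphere is 1 W/m^2; after reflection
# of ~30% (assumed albedo) only ~0.7 W/m^2 enters the TOA energy budget.
tsi_old, tsi_new = 1365.0, 1361.0   # W/m^2, values discussed in the thread
albedo = 0.30                       # assumed planetary albedo

delta_global = (tsi_old - tsi_new) / 4.0          # divide by 4: sphere vs. disk
delta_absorbed = delta_global * (1.0 - albedo)    # remove the reflected fraction
print(delta_global, delta_absorbed)               # 1.0, 0.7 W/m^2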
The solar forcing is not known back to 1880. It is now being assumed that it remains quite constant, varying only during cycles. Again, I do not see how a model based upon physical principles can get away without the fundamentals.
As for the 0.85 mentioned in Hansen 2004, there are some serious problems with claiming that the measurements are accurate. Hansen claims 0.85 +/- 0.15 W/m^2 (from Levitus) for measurements and 0.06 +/- 0.12 W/m^2 for 1993-2003.
Looking at Douglass & Knox 2010, they reference Lyman et al (Nature 2010) with a trend of 0.63 +/- 0.28 W/m^2 from 1993-2008. Note that while the time frame is extended by 5 yrs, the differences place Levitus outside its own error bars if it were consistent with Lyman, and it is a bit curious that with more data Lyman has significantly larger error bars.
It appears that data prior to 2003 is from XBT, the expendable bathythermograph, described as having biases and systematic errors, referenced by Douglass from Wijffels et al. This is the data Hansen describes as extremely accurate.
After 2002, the Argo buoy system was providing much more accurate data, which was used by Douglass & Knox, http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf . Douglass & Knox use only Argo data, covering 2003 – 2008.
The net result for the Argo data: Douglass & Knox, plus several other analyses listed in the Douglass & Knox paper, are indicative of a small negative value for heat coming in, not statistically significantly different from zero but definitely not indicative of a positive heat imbalance as promoted by Trenberth, Hansen, Lyman, or Levitus.
So now we have no current heat imbalance. That means whatever was potentially causing the results earlier, either instruments or a real imbalance, is no longer causing an imbalance.
FWIW, here’s a puzzle (at least it’s a puzzle for me, fully admitting that I’m old and maybe wanting in some areas. :-)
(cross-posted from here: http://blogs.chron.com/climateabyss/2011/01/the_tyndall_gas_effect_part_4_of_4_what_would_happen.html#comments )
I spent quite a bit of time farming on the Great Plains (eastern CO) when I was younger and brighter. An interesting thing about that area is that the soil can be as dry as toast and yet the humidity can still vary greatly, due to the “game of tag” between the cold, dry arctic air and the warm, moist Gulf air. And often there is no increase in cloudiness as the humidity increases. What I find most interesting is that it is absolutely no hotter on the humid days than on the dry days out there (it is definitely more uncomfortable, of course, because the sweating does you little good). Given all the change in water vapor (the big Tyndall gas), wouldn’t one expect the humid days to be hotter?
I have been harping on this theme for over 3 years, and I have yet to have anyone explain just why more GHGs don’t have any discernible (OBSERVABLE) effect on temperature IN THE REAL OBSERVABLE WORLD. Not in the temperature record. Not in the models (there is NO “hot spot,” as predicted). Only in radiation cartoons is there an effect. Something is definitely missing. Folks, I continue to say that, without EMPIRICAL evidence of a “Tyndall effect” (GHE), you don’t have ANY CAGW science. Even Einstein had to wait a few years, until an eclipse occurred, to prove his relativity theory–by EMPIRICAL EVIDENCE!! Just where is the empirical evidence of AGW?
BTW, anyone who has not read Slaying the Sky Dragon should do so, even if s(he) thinks it is bunk. At least s(he) can say, “I did that.” Just sayin’ it is BS does not advance the knowledge (Judith).
How about a humid night, doesn’t it cool less quickly than a dry night if both are clear? That would be the greenhouse effect.
JAE – The problem with just thinking in terms of temperature is that it doesn’t tell you about the actual heat content of the air. Ninety degree air with 50% rel. humidity does have a higher heat content (i.e., joules of energy) than 90 degree air at 10% rel. humidity.
Jim D:
“How about a humid night, doesn’t it cool less quickly than a dry night if both are clear? That would be the greenhouse effect”
This is one of the things that is so difficult to discuss about the “atmospheric greenhouse effect.” Yes, you can credibly explain the slower cooling via a GHE. BUT, you can also explain it by the very basic fact that water vapor has TWO TIMES the thermal storage capacity (Cp) of the rest of the molecules in the air. It simply takes longer for all that energy to dissipate.
Here’s another related puzzle: Atlanta and Phoenix are (virtually) at the same latitude and elevation. Yet the temperatures, DAY AND NIGHT, in Phoenix in the summer are MUCH hotter than Atlanta, even though Atlanta has three(3) times as much greenhouse gases as Phoenix. Why?
The ground cools radiatively faster on a dry night. This has nothing to do with the air heat capacity that changes by at most one part in a thousand between dry and moist air.
Phoenix gets hotter because the surface is dry, and all the energy from the sun goes into heating the ground rather than some into evaporating soil moisture. This is the idea of the Bowen ratio, sensible and latent heat fluxes, and does not relate to IR at all. The best time to look for IR effects is at night.
“The ground cools radiatively faster on a dry night. This has nothing to do with the air heat capacity that changes by at most one part in a thousand between dry and moist air.”
I think it is about 1 part in 100 at STP, going from an absolute humidity of 5 g/m^3 of water vapor to 20 g/m^3. So for a temperature change of 10 C, that’s 20 joules/1000 joules, or 2 parts in 100. That might be significant.
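For what it’s worth, a rough check with assumed textbook heat capacities (dry air ~1005 J/(kg K), water vapor ~1860 J/(kg K)) and the absolute humidities quoted above comes out closer to 1 part in 100:

# Effect of humidity on the specific heat of air (illustrative values only).
cp_dry = 1005.0      # J/(kg K), dry air (approx.)
cp_vap = 1860.0      # J/(kg K), water vapor (approx.)
rho_air = 1.2        # kg/m^3 near the surface (approx.)

def cp_moist(abs_humidity_g_per_m3):
    q = (abs_humidity_g_per_m3 / 1000.0) / rho_air   # specific humidity, kg/kg
    return (1.0 - q) * cp_dry + q * cp_vap

c_dry_case, c_moist_case = cp_moist(5.0), cp_moist(20.0)
print((c_moist_case - c_dry_case) / c_dry_case * 100)   # ~1 percent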
“Phoenix gets hotter because the surface is dry, and all the energy from the sun goes into heating the ground rather than some into evaporating soil moisture. This is the idea of the Bowen ratio, sensible and latent heat fluxes, and does not relate to IR at all. The best time to look for IR effects is at night.”
This is correct. But it also shows how evaporation offsets at least some of the heat gains from radiation (solar and GHE) (note that this is a NEGATIVE feedback, BTW). If evaporation serves to balance heat gain, there is no problem, eh?
Yes, I underestimated and it is 1% for the maximum effect of humidity on the heat capacity, still insignificant because instead of cooling by 1 degree, you cool by 1.01 degrees when it is drier, which goes no way to explain the real difference in cooling rates at the ground surface.
Evaporation is putting latent heat into the atmosphere that later turns into real heat when condensation occurs, so it is not a way out of heating.
Jim:
I don’t understand your comment. Please explain the connection between 1% and 1.01 degrees.
For a given amount of energy loss due to radiation for example in J/kg/s, the cooling rate (K/s) is this divided by the heat capacity. If the heat capacity changes by 1%, so does the cooling rate.
?? I still don’t understand. Can you provide more logic/background/reference on this? An equation?
rho * cp * dT/dt = Q
where rho is the air density (kg/m^3), cp is the heat capacity (J/(kg K)), dT/dt (K/s) is the temperature change rate and Q (J/(m^3 s)) is the heating rate per unit volume.
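For a feel of the numbers: with an assumed clear-sky radiative cooling of roughly 2 K/day (a typical textbook order of magnitude, not a value from the thread), a 1% change in cp shifts the cooling rate by the same 1%:

# Cooling rate from rho * cp * dT/dt = Q, with illustrative numbers only.
rho, cp = 1.2, 1005.0   # near-surface air density and heat capacity (approx.)
Q = -0.028              # W/m^3, assumed net radiative cooling (~2 K/day)

per_day = 86400.0
print(Q / (rho * cp) * per_day)          # ~ -2.0 K/day
print(Q / (rho * cp * 1.01) * per_day)   # ~ -1.98 K/day: about 1% slower cooling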
No, you still underestimated it; it’s 2%. Please re-read my response (HOH has twice the Cp of air).
Whatever. You said:
“The best time to look for IR effects is at night.”
Well, what do we compare in the real world in order to see if there is a GHE? The temp in Phoenix is still way hotter at night than it is in Atlanta, regardless of the fact that there is only 1/3 as much GHG.
I wish we could begin by agreeing that there is absolutely NO empirical evidence of an “atmospheric greenhouse effect.” There are only some radiation cartoons.
The evidence is by measuring the downward IR on a clear night with any simple radiometer that is used in atmospheric field studies or student practical courses. It is not zero, more like a few hundred W/m2.
Jim: The presence of IR is NOT evidence of a GHE! Of course there is IR, absorption, emission, collisions, LTE, etc., etc. What I want to see is a demonstration that the presence of this IR “overcomes” other factors, such as convection and evaporation, to make things yet warmer than they would be otherwise. If that demonstration existed, we could even call the belief in such a process “scientific!” As it is now, you only have an unproven hypothesis, no matter whether 99% of all scientists believe it.
It amused me when I figured out that not only is the GHE not the reason a greenhouse gets hot, but there is no way to arrange or manipulate things to demonstrate the greenhouse effect in a greenhouse. I bought a CO2 gauge (it also measures temperature) with the intent of checking for myself the 3C rise per doubling of CO2 with a given IR source…but…
There was a recent study of cassava growth in CO2-enhanced atmosphere, but I did not get a reply when I asked what cooling mechanism they used to offset the increased temperature in their greenhouse. Want to guess why they did not disclose the cooling mechanism in their paper?
http://academic.research.microsoft.com/Paper/5017614
If human emission of CO2 causes global warming, why is it that after human emission of 235 billion metric tons of carbon (http://bit.ly/gIkojx) the global warming rate of 0.16 deg C per decade for the period from 1970 to 2000 is nearly identical to that for the period from 1910 to 1940, as shown in the following data?
http://bit.ly/eUXTX2
Experiment is the final judge of a scientific dispute. As the previous maximum global warming rate of about 0.15 deg C per decade has not been exceeded with 5-times increase in human emission of CO2, there is nothing unusual or unprecedented about the current global warming rate of 0.16 deg C per decade.
As a result, there is no evidence of man-made global warming.
“If human emission of CO2 causes global warming, why is it that after human emission of 235 billion metric tons of carbon (http://bit.ly/gIkojx) the global warming rate of 0.16 deg C per decade for the period from 1970 to 2000 is nearly identical to that for the period from 1910 to 1940, as shown in the following data?”
Well, one possibility is that there were other factors in play between 1910 and 1940 which contributed to the warming during that period but have not been major factors in recent years. For example, solar activity rose in the early part of the last century but has been fairly flat or falling since about 1960; also, volcanic activity was relatively low during the earlier period.
andrew adams,
When skeptics invoke ‘one possibility’ as an explanation they regularly get excoriated.
Could your ‘one possibility’ be that climate science has misunderstood significant aspects of how the atmosphere works?
Hunter,
No, I don’t see any need to resort to that assumption on this particular question. And I have no problem with people exploring “possibilities” as long as there is actually some evidence or logical reason to support them and it is not just baseless speculation.
In my case there is certainly evidence which supports my claim – for example Lean 2005 on the solar influence and Zielinski 2000 on volcanoes.
A basic element in the article caught my eye, because it is something I did not realise before, while it is a fundamental part of radiative transfer theory: it seems that the absorption/emission due to CO2 and other gases depends only on the mass of gas in an elementary volume, at least in the dilute “regime” valid for the whole earth atmosphere. It means that 1 kg of air would absorb and emit the same amount of radiation if it is at the same temperature, whatever volume it occupies (small near the ground, large near the TOA).
Is this correct?
If it is, could somebody tell me what is the mass ratio between the troposphere and the rest of the atmosphere? And does this ratio change with temperature (ground temp? TOA temp? Top of troposphere temp?)
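Since surface pressure is just the weight of the overlying air, the mass split can be estimated directly from the pressure at the tropopause. The ~200 hPa figure below is an assumed representative value (the tropopause sits near 100 hPa in the tropics and closer to 300 hPa at high latitudes), so the answer does vary with location and season:

# Fraction of atmospheric mass below a given pressure level.
# Column mass above pressure p is p/g per unit area, so the fraction below
# the tropopause is 1 - p_tropopause / p_surface.
p_surface = 1013.0    # hPa, global-mean surface pressure (approx.)
p_tropopause = 200.0  # hPa, assumed representative tropopause pressure

frac_troposphere = 1.0 - p_tropopause / p_surface
print(frac_troposphere)   # ~0.8, i.e. roughly 80% of the mass is tropospheric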
Pekka
There are all kinds of irregular cycles, from short-term fluctuations to glacial cycles, and as far as I understand none of them is really well understood. Of course many details are known about ENSO-type cycles of a duration suitable for collecting empirical data, but even their understanding is badly incomplete.
Yes, this is the point.
You are being kind by saying “not well understood”. The reality is that they are not understood at all.
An excellent example is ENSO where some people claim “partial understanding”.
Indeed if you consider an INDIVIDUAL ENSO, the mechanism is trivial – it is just winds and pressures. Consider a delayed oscillator, throw in a bit of Gaussian noise and you obtain something that looks like ENSO.
However it is basically a tautology which “explains” a conundrum by a mystery.
The parameters frequency, amplitude, phase and noise are just fitted.
Why is the frequency what it is? Of course it is because the whole oscillation is a kind of quasi standing wave resulting from an interference of a large number of spatially interacting waves with an infinite number of different frequencies. Nobody has even a beginning of understanding of what these waves are and what their dynamics is. There is no understanding of ENSO or any other “oscillation” for that matter. The fact that numerical simulations are unable to reproduce correctly these quasi standing waves is due to the fact that they don’t solve the equations of the dynamics, because these are unknown.
Jim Cripwell
“The earth’s atmosphere is complex, not to say chaotic. Anyone, like the IPCC and climate modellers, who claim to have captured the physics of how the atmosphere works, are simply wrong. The atmosphere is much too complex, so that simple approximations will always give the wrong answer”
Am I anywhere near correct?
Yes, 95% correct. The 5% is that I am not saying that they are wrong (implying wrong in everything).
Much of what they do are tautologies which boil down to statements like “If there is equilibrium, then there is equilibrium”.
A more sophisticated variant mostly used by those who pretend that climate is qualitatively different from weather is “If there is ergodicity, then there is ergodicity.”
Of course a tautology being always true, these statements are not wrong.
You may say that they are useless and explain nothing but they are not wrong.
The corollary is that they may get randomly some features right, namely those where the premise of the tautology is right during a given time window too.
If you want, you can develop your own theory that will not only fit with observations but predict a very different evolution.
Postulate that there is a mix of a 200-year and a 400-year oscillation that interfere with the (known) higher-frequency oscillations.
Fit the phases and amplitudes. Introduce time constants, coupling constants and noise if necessary.
Predict cooling by the end of the 21st century and getting “worse than we thought” in the 22nd century.
The causes are unknown like the causes of ENSO are unknown.
If you feel like that, you will find the non linear mechanisms in deep ocean circulation, very low frequency albedo variations and heat storage but it is not really necessary.
Of course you will be also compatible with the orthodox GH theory – it’s all inside and the CO2 is just a “small perturbation”.
Can you be falsified? Yes, in the 22nd century. And then you will be dead anyway.
CBA
Such models are relevant mostly for clear skies and tend to lose meaning in the presence of clouds.
This is also my conviction. Not only clouds but any significantly scattering medium. The equations used to approximate the radiative transfer in the Pierrehumbert paper are only valid for steady states and no significant scattering.
Of course one can also introduce scattering and non steady states but then we are far from this rather trivial homogeneous slab model in LTE.
I have seen somewhere a collection of downwelling spectra from different locations , different times and different conditions and there is a large variability indeed.
Sure, CO2 always has a band around 15 µm but that is clearly not the alpha and omega of the radiated power.
Tomas, Many thanks. Jim Cripwell.
Tomas,
At the moment, I just don’t have the time to spend on Pierrehumbert’s paper to carefully scrutinize his approach. I read through an online book he had a couple of years ago and have vague recollections of the approach, with differences of optically thick and thin shells or slabs and the approximations being made for each.
I did not use this approach and I don’t remember details on it. I chose instead to use an Eddington type approach. I use over 50 shells to represent the atmosphere and take average T and P values for each.
I created a program to generate spectral values for each shell. The optical thickness is calculated for each wavelength bin. The program is variable in bin size and in the bandwidth for the range of wavelengths. I have gone as fine as 1/10 or 1/100th of an Angstrom per bin and as wide as 10nm per bin. I use the standard suggested Hitran approach of Lorentz line widths and so generate the contribution of each spectral line into each bin. The ultra high resolution has permitted me to compare to telluric spectral lines measured in some extremely high resolution.
The approach turns out to attenuate the incoming spectrum by the attenuation of that slab at each wavelength. It also adds in the emission at each wavelength, which is the Boltzmann distribution for that temperature (a blackbody curve) times the attenuation value, which is an emissivity factor at each wavelength. This requires LTE. There is no assumption of optically thin or thick slabs, and a slab will vary in optical thickness by wavelength.
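A minimal sketch of that per-slab step, with coarse bins and made-up optical depths (nothing here is taken from cba’s actual program): each layer transmits the incoming spectral radiance and adds its own Planck emission weighted by the layer emissivity.

import numpy as np

# Toy one-direction, layer-by-layer radiative transfer under LTE:
# I_out = I_in * exp(-tau) + B(T) * (1 - exp(-tau)) at each wavelength bin.
# The optical depths below are invented for illustration only.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Planck spectral radiance B_lambda(T), W m^-3 sr^-1."""
    x = h * c / (wavelength_m * k * T)
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(x)

wavelengths = np.linspace(5e-6, 25e-6, 200)       # 5-25 um bins
layer_T = np.linspace(288.0, 220.0, 50)           # 50 layers, surface to tropopause
tau = 0.05 * np.exp(-((wavelengths - 15e-6) / 2e-6) ** 2)  # toy CO2-like band

I = planck(wavelengths, 288.0)                    # surface emission as input
for T in layer_T:
    trans = np.exp(-tau)
    I = I * trans + planck(wavelengths, T) * (1.0 - trans)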
With IR, scattering is not that big a deal with molecules. That is mostly partial to UV and blue colors. If we have scattering of IR, it’s going to be from clouds and particulates, and absorption too.
I’ve no interest in debating what LTE means. Basically, it means that there is one temperature for a microscopic region, shared amongst the various types of molecules present. The alternative is for CO2 to be at one temperature, N2 to be at another, and H2O vapor at yet another temperature. Having lots of collisions/sec means that the energy is distributed between the various molecule types. A molecule can gain energy by absorbing a photon or by a collision, and it can lose energy by emitting a photon or having a collision. LTE is true for most of the atmosphere.
I’m afraid I haven’t spent much time looking at atmospherically generated incoming spectra. So many of the lines present in incoming stellar spectra are actually atmospheric absorption, even in the visible.
Judith,
The lesson I learnt from Pierrehumbert is this statement:
“…the energy of the photon will almost always be assimilated by collisions into the general energy pool of the matter and establish a new Maxwell–Boltzmann distribution at a slightly higher temperature. That is how radiation heats matter in the LTE limit.”
Earlier I had read that the absorbed IR was readily emitted in all directions and half of it returned to earth.
The question arises whether the N2 and O2 molecules gaining the energy can radiate it or are forced to transfer the heat down to the earth’s surface, thereby warming the atmosphere.
Finally, today I got use of my accumulated knowledge from the Web when I was asked to explain why a car under a roofed carport does not need de-icing in the cold mornings we have in Sweden these days.
Gunnar,
Neither N2 nor O2 can radiate in the infrared. O2 can radiate in microwaves, but that involves very little energy transfer. Thus both contribute only through convection, a little conduction and by exciting CO2 and other greenhouse gases through collisions.
There is about as much IR radiation from thermally excited GHG molecules as there would be if a larger share of the emission came from molecules excited by prior absorption. Thus the amount and distribution of radiation does not change much, but there is very little direct connection on the molecular level between the absorption and emission.
Isn’t the photon’s energy being counted twice? It can’t both result in local warming and stimulate equal IR emission (since that emission would cool the “local” molecule(s) right back down again.)
There is no double counting. The addition to the emission that results from the direct succession of absorption and the subsequent emission without intervening thermalization by collisions is simply forgotten. This results in a minuscule error in the opposite direction, but this error is so small that it has no significance.
Saying it as deviation from local thermal equilibrium: the vibrational modes of CO2 are at a very slightly higher temperature than translational and rotational modes of various molecules (N2, O2, H2O, CO2, ..) but the difference is insignificant.
Pekka Pirilä
I think you will need to review your understanding of the state of CO2 at atmospheric temperatures say around 260K.
Both Pierrehumbert and Vonk think the vast majority are in the translational (ground) state and I agree with them on this point.
After absorption of, say, a 15 um photon there is a huge jump in the internal energy of the CO2 molecule (roughly threefold).
Pierrehumbert thinks this is rapidly lost by collision(thermalisation) and I agree with him.
Brian H is quite correct to say that you must be careful on this next step so as not to count twice.
If there is an equal amount of energy on average radiated away this means that all the thermalisation has been reversed.
If this is the case then for all practical purposes all the radiative effect does is to redistribute the radiation with no thermal effects .
Bryan,
Perhaps you did not understand what I meant by the translational mode. It means normal motion of the molecules. Being in the translational ground state means no motion, which dominates only very near absolute zero (0 K). At the translational ground state there are no collisions at all. In collisions energy is typically transferred between different translational and rotational modes and sometimes also with vibrational modes. These events are the way vibrational excitation is transferred to normal thermal motion.
When I wrote the last paragraph of my previous message, I knew that many people do not understand it. I thought that they just skip it, but this did not happen with you. Apologies for unnecessary confusion.
Pekka Pirilä
Yes, that makes more sense, as you know and for the benefit of others, at 260K the RMS speed of CO2 molecules will be around 450m/s.
All internal energy will be in the translational mode with components in x,y,z directions.
When the CO2 molecule absorbs the 15um photon the rotational and vibrational modes will be activated.
What is not clear to me is how the models deal with the issue of thermalisation.
Do they subtract the average emitted radiation from the average absorbed radiation to obtain the average thermal energy gained and hence the temperature rise of the volume under consideration?
This will likely be covered later in the thread, but it might help to understand that absorbing radiation into a vibration mode does not change the thermalization (temperature) of the gas, nor does emission cool it. BTW, there is considerable confusion and disagreement with this, probably because a construct called rotation temperature and vibration temperature has been devised to help analyses and discussions — but they aren’t real temperatures like in ‘that feels warm.’
Personally I do not usually like argumentation on what is the real meaning of a word. Sometimes it is necessary, but mostly not. Within one field of application the meaning may be well defined, but very often the same word is used with a different meaning in other applications. None of the fields of application can forbid differing uses in other fields or in more loose discussions.
Concerning the concept of temperatures of various systems, it is often a useful way of describing the energy content of interconnected states, which are not interacting as strongly with other modes of the system. One of my first publications really long time ago was on the behavior of nuclear spins at very low temperatures. The spins of neighboring nuclei had a rather strong interaction with each other but a very weak interaction with other degrees of freedom of the system. Thus it was useful to discuss the temperature of the spin system separately from the general temperature of the material.
The vibrational states may be in a similar situation in the uppermost thin part of the atmosphere, where collisions are rare and the molecules loose their excitation mostly through radiation. Here I introduced the concept only because the comments had some connection to Gunnar Strandell’s original question, where he discussed LTE.
Your comment is obviously entirely wrong and in error.
It’s “lose their excitation”, not “loose”.
;) ;)
Rod B
That’s a good point, the rotational and vibrational degrees of freedom do not affect temperature.
However by collision KE from these modes can be transferred to translational modes which do.
I agree that there is a lot of debate on this point.
However most on both sides of AGW debate agree with the above.
Bryan,
Yes, that’s exactly how IR radiation warms the atmosphere — CO2 molecules heating mostly N2 and O2 molecules via collision and relaxing the vibration level.
It is probably a minority but there are very learned people that will swear that exciting a vibration mode warms that (ideal) molecule.
It matters to you a lot…when considering the overall coupling of energy…whether a billiard ball is spinning or not?
One way to visualize how these radiative transfer models work is that the CO2 molecules replace a fraction of background photons with photons emitted at the gas’s own temperature. This considers absorption and emission effects together. They separate the photons into upward and downward ones, where the upwards ones are dominated by absorption, and the downward ones by emission. The net effect of these two streams gives the heating rate in a layer.
Jim D
That is true but can be misleading. For fear of opening a can of worms prematurely, the photon emitted from a relaxing vibration energy level is part of a physical process that is totally different from a photon being released because of its temperature à la the Planck function. The vibration emission is always at the exact (pretty much) same energy level regardless of the gas’s temperature; not true for Planck-type emission.
Can you elaborate? A photon emitted from an excited CO2 molecule should not know how the excitation came about? Temperature does affect the number and energy of the various quantum levels involved in the excitation process, but the process is still quantized. All those quantum transitions are the same as those capable of being mediated by photon absorption, and those released by collision de-excitation will be the same as those released by photon emission.
According to Heisenberg uncertainty principle the accuracy of the energy level is inversely proportional to the lifetime of the level. Therefore the energy level is not very precisely defined, when the lifetime is short for whatever reason. The pressure broadening of the emission (and absorption) lines is due to the frequency of collisions which makes the lifetimes be short.
It does not matter, what is the way the excitation occurred unless it influences also the lifetime of the excitation. In dense gas the lifetime is determined by the collisions. This leads in particular to the Lorentz line shape.
Pekka – As I understand it, it is true there is a small element of uncertainty broadening, but pressure broadening is primarily a consequence of the ability of interactions with neighboring molecules to borrow or lend energy sufficient to eliminate disparities between the energy of an incoming photon and the energy needed by an absorbing molecule for a particular quantum transition.
Fred,
I have not checked this in detail, but I think the two factors are just two ways of expressing the same fact.
The lifetime is short only, when something makes it short and no interaction can broaden the line without inducing transitions, i.e. shortening the lifetime. This is not necessarily true in solids, where particles are bound to their location, but it is almost certainly true for gases. In solids the local conditions may vary in a time-independent way leading to stable differences in the locations of energy levels, but in gases all is related to collisions and they affect unavoidably both the line shape and the lifetime through the same interaction.
Fred and Pekka,
Everything I have seen talks of three distinct line broadening mechanisms: 1) that caused by the energy/time factors of the Uncertainty Principal — all call this academically interesting with no practical effect on the line and then drop it, 2) Doppler, and 3) Pressure (also called Collision). The pressure broadening does not stem from the uncertainty broadening.
The relaxation of the 1st vibration level of CO2 emits a photon at 15 um with about 1.4×10^-20 joules. This is true of every CO2 molecule every time (I’m ignoring for discussion the tiny variations coming from broadening). It is correct that a higher ambient temperature will increase the number of molecules likely to be in the excited state and therefore the number of photons potentially emitted. But all of those photons will have the same energy, above. Yes, this is the same energy that will transfer in a collision to another molecule’s translation (kinetic) energy, though this transfer is not photonic.
The average molecular kinetic energy of CO2 at 300 K is about 6×10^-21 J, a little less than half the above (it’s about 4×10^-21 J at the 220 K tropopause). By Boltzmann there are probably a few molecules with kinetic energy around the vibration energy. But if you’ll permit me an idealized hypothetical molecule, if the 1.4×10^-20 J vibration energy were instead emitted because of temperature, the required kinetic energy would put the CO2 molecule at a minimum of about 630 K. And that would be for every molecule making that emission.
The temperature has no bearing on the absorption/emission of the vibration level, other than indirectly determining, via the Boltzmann factor, the likelihood of naturally being in the excited state to begin with. On the other hand, temperature is the only parameter in Planck-type emissions, and the emission is theoretically (and generally) very broadband and not single frequency.
As I understand it…
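For what it’s worth, the energy scales quoted in that comment are easy to reproduce from standard physical constants alone (nothing else is assumed):

# Check of the quoted energy scales.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # J s, m/s, J/K

E_photon = h * c / 15e-6                  # 15 um photon energy
KE_300 = 1.5 * k * 300.0                  # mean translational KE at 300 K
KE_220 = 1.5 * k * 220.0                  # ... at the ~220 K tropopause
T_equiv = 2 * E_photon / (3 * k)          # temperature whose mean KE equals E_photon

print(E_photon)   # ~1.3e-20 J  (quoted as ~1.4e-20 J)
print(KE_300)     # ~6.2e-21 J
print(KE_220)     # ~4.6e-21 J
print(T_equiv)    # ~640 K      (quoted as ~630 K)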
Rod B,
I think you just answered some questions I probably wasn’t even asking correctly in other venues. Thank you.
Rod,
I agree fully that the uncertainty principle is not the right way of calculating the actual line width even though it is closely related and gives a value not very far from the real value.
Two- and three-atomic molecules (N2 and CO2) are massive and complicated enough to allow a more detailed analysis of what happens in a collision than just applying a coupling between two states and saying that that is all we can tell about the occurrence. If this approach of just determining one coupling constant gave a complete description, then the relationship between the lifetime and the line width would be exact. The full theoretical calculation would involve calculating with quantum mechanics the outcome of all possible collisions (distance between the trajectories of the molecules, directions of their axes, their rotational states, their relative speed, etc.). After this is done for a representative set of situations, the average can be calculated. It is clear that the result differs from the limit given by the uncertainty principle (again we have a limit, which is obtained only in specific cases, but is a limit for all). Intuitively I thought that the relationship would be reasonably close to the limit, and checking with the figures I could find, this appears really to be the case. Reasonably close allows in this case a factor of two or somewhat more, but not a factor of ten.
Uncertainty principle is not a mechanism, it is a lower limit that all real mechanisms must obey. For simple enough quantum-mechanical effects the real value is close to the lower limit, for more complicated ones it is further. Here we are in the intermediate region, where the limit is not far, but still somewhat off from the real value.
The uncertainty principle tells also that it takes time for a molecule to settle in a well defined state of a specific energy level. It is really a property of the state that its energy level is not more accurately defined than the uncertainty principle allows within the time available for the state to persist. If the lifetime of the excitation is short, then the energy of the state itself is not precisely known. There are no infinitely sharp energy levels that are not fully stable (and we cannot observe anything that is fully stable also under observation).
If I use the concept of temperature in connection with energy levels, I use it only in the sense that the relative occupations of energy levels are proportional to the factor exp(-E/(kT)), where k is the Boltzmann constant. When several states are involved and part of the transitions are forbidden, there may be exceptions to this rule. This often affects levels well above thermal energies, but does not influence situations where thermal excitations through collisions dominate.
In the atmosphere a couple of percent of CO2 molecules are always in a vibrationally excited state that corresponds to 15 um radiation. It is a matter of taste whether a few percent is considered a high or low value for such a share. I consider it a large share, because much lower values are typical for many important excited states in all kinds of applications. Still it leaves the share of the vibrational ground state close to 100% (5% may be large, but 95% is still close to 100%).
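A quick Boltzmann-factor estimate of that share, with an assumed mid-troposphere temperature of 255 K; the doubly degenerate bending mode roughly doubles the simple single-state factor:

import math

# Share of CO2 molecules in the first excited bending state (667 cm^-1, 15 um band).
h, c, k = 6.626e-34, 2.998e10, 1.381e-23   # c in cm/s so wavenumbers work directly

E = h * c * 667.0          # J
T = 255.0                  # K, assumed representative air temperature
share = math.exp(-E / (k * T))
print(share, 2 * share)    # ~0.023 and ~0.046: "a couple of percent"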
It’s “uncertainty principle” of course… — not the head of a goofy-named school!
Pekka,
Your point that collisional interactions are greatly more complex than the basics describe is well taken. I felt it would be way too much to describe the whole elephant when the basic point can credibly be made. (Plus it goes beyond my smarts…) None-the-less, I agree the details are important, yet not completely understood. This, IMHO, adds some uncertainty to the whole analysis of radiation transfer.
I also agree with your description of the uncertainty principle in this situation, though I don’t agree with your magnitudes (or I could easily be misreading what you’re saying). As a rule of thumb the uncertainty principle demands that the energy change of the 667 cm^-1 line be greater than about 10^-11 cm^-1, take or leave a magnitude or two. (Sorry to change units to wavenumber on you; it’s all I’ve got here.) It’s no great shakes to exceed that probably immeasurable threshold. A representative Doppler broadening is roughly 0.02 cm^-1, a factor of maybe a billion.
Rod – is it possible you’re confusing what will appear as a continuous emission spectrum from a hypothetical black body with the emission spectrum of CO2? Both vary in terms of total and peak energy as a function of temperature, but the black body spectrum appears continuous because it represents the contribution of a theoretically unlimited number of different absorbers/emitters, each with their own spectral distributions. They are all quantized, but it is the heterogeneity that gives the appearance of continuity.
For CO2, the number of emission lines is very large, but still limited, because each represents a different transition or combination of transitions that CO2 can undergo. If magnified, the spectrum will in fact be seen to consist of hundreds of individual lines corresponding to these quanta. Each could, in theory, emanate from molecules that were either thermally excited or excited by photon absorption. The emitted photons would not know which phenomenon led to their existence.
(Note that I’m referring to vibrational/rotational transitions, which exhibit energies in the IR. Electronic and other modes are of little relevance to the climate influencing properties of CO2 at atmospheric temperatures, although they are also quantized.)
I return to, what we can conclude from the uncertainty principle.
At earth surface the mean free path in air has been determined as 64 nm. The average speed of N2 molecules is 500 m/s. Thus the typical time between collisions is 0.13 ns. The CO2 molecule is larger and will therefore be hit more frequently by N2 (or O2) molecules. It is likely that most collisions lead to de-excitation of a molecule in vibrationally excited state. Thus the lifetime is likely to be close to 0.1 ns. According to uncertainty principle this corresponds to an energy uncertainty of 5.3*10^-25 J. This corresponds to 0.0265 cm^-1 half width of the line, which is close to one half of the measured line width. I would not expect anything better (or even this good) from such a simple calculation based on incomplete data.
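That estimate is straightforward to reproduce; the ΔE·Δt ≥ ħ/2 form of the uncertainty relation is assumed here, which is where the factor of two comes from:

# Reproducing the collision-lifetime / line-width estimate above.
hbar = 1.0546e-34            # J s
h, c = 6.626e-34, 2.998e10   # J s and cm/s, so the width comes out in cm^-1

t_between = 64e-9 / 500.0    # ~0.13 ns between collisions for N2
t_life = 0.1e-9              # rounded down, as above, since CO2 is hit more often

dE = hbar / (2 * t_life)     # energy uncertainty from dE * dt >= hbar / 2
print(t_between, dE, dE / (h * c))   # ~1.3e-10 s, ~5.3e-25 J, ~0.027 cm^-1 half width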
Rod – There is no distinction between what you refer to as Planck type emission and emission from a molecule excited by photon absorption. Collisional excitation, which is dependent on temperature, creates the same quantum transitions in absorbed and emitted energy as photon excitation. The temperature-dependent kinetic energy of molecules capable of activating these transitions is not quantized, but the transitions themselves are. Note that the distribution of energy states upon which an absorbed photon adds a further quantum, is dependent on temperature via prior temperature-dependent quantum transitions created in the absorbing molecule.
Fred,
Yes and no! ;-) One has to be clear on what process is being discussed. A colliding molecule can transfer its translation or vibration or rotation energy to another molecule’s translation, vibration, or rotation energy. If vibration or rotation is part of either molecule the energy transferred is quantized per the quantum levels of vibration and rotation. Translation (kinetic) energy is not quantized. But in any event a collision transfer is not photonic: there is no electromagnetic waves/pulses emitted or absorbed. (A vibration to vibration transfer is possible which does involve a photon, but this is not a collision.)
By far the most common general transfer is translation to translation. With radiation transfer, the typical vibration/rotation-to-translation collision transfer is of most interest. CO2 absorbs a photon (at a discrete energy) into its vibration, then bumps an N2 and transfers that vibration energy to the N2’s translation energy, warming the N2. There is no photon involved with the second transfer.
None of the above are Planck function “transfers”, which are always photonic, are broadband and not discrete, and are not quantized (in the quantum mechanics sense). The amount of EM energy emitted via the Planck function is directly related to the temperature of the emitter (and also directly affected by certain physical properties of the emitter, which can vary with wavelength). The discrete amount of energy emitted by relaxation of a vibration level (photonic OR collisional) has a very tenuous connection with the temperature of the relaxer. Higher temperatures will distinctly and always increase Planck emissions, while higher temperatures might have a tendency to reduce vibration relaxation via photon emission — because a higher temperature will cause a vibration level to more likely be filled and not relaxed. Generation of Planck-function photons is seldom a result of vibration or rotation relaxation; generation of greenhouse-gas-type photons is always a result of such relaxations.
A big source of confusion, IMO, is the use of Planck-function equations to analyze GHG radiation transfer. While they are not the same physically (and where I have some serious concerns — but that’s another subject), the mathematics and equations seem to do a credible job and, as long as one selects the correct parameters and coefficients, match observations fairly well. So Planck functions and their subsidiary laws (Kirchhoff, Beer, etc.) seem useful for analyzing GHG radiative transfer — even though (I’m becoming a broken record…) the processes are dissimilar.
Rod B,
You made many statements that I either do not understand or do not agree with.
All radiation is quantized. The spectrum is continuous when the source can do quantum transitions at any energy, not only between some discrete levels. Solid surfaces and water are examples of that while gas molecules have line spectra.
The higher temperature does not prevent the emission from a vibration level. If the number of molecules in a particular level increases, the related emission will also increase.
I do not understand what you want to say in the above sentence.
What do you mean by the claim that they are not the same physically?
My impression is that there is some confusion in your ideas.
Pekka Pirilä,
Everything is quantized per Heisenberg, but things like vibration, rotation, and electron levels are quantized in a non-trivial fashion. Translational energy (of a molecule or an airplane) is also quantized, but in a non-interesting way, since the level differences are infinitesimally small and have virtually no effect on analyses.
On average a bunch of CO2 molecules will naturally have a small percentage with excited vibration, statistically based on ambient temperature — the higher the temperature the more will likely be excited. So I’m just saying if the temperature increases, a larger percentage will be excited which indicates less natural relaxation. Admittedly it’s all pretty loose ala quantum statistics and nothing prohibits relaxation with emission, as you say. My point was to make a distinction with planck function which will clearly and greatly increase its emitted radiation with higher temperatures — internal energy relaxation not so much.
Basically, planck function radiation comes from charge acceleration. Changing internal energy levels within vibration and rotation involves very little charge acceleration — often none. (Though changing electronic levels does cause charge acceleration and so planck-type radiation, though this is discrete radiation, different from the more usual broadband radiation.)
The above generally describes the physical difference between these two types of radiation. (I should say radiation source; once it’s radiation — a photon — they are all exactly the same.) However, with the proper assumptions and coefficients “GHG” radiation can be made to fit very closely the mathematics of planck-type radiation and that is very useful even if not exactly the same.
Rod B,
The second part of this statement is false:
With increasing temperature the rate of excitation will increase and so will the share of molecules in the excited state. That leads to an increase in emission by the same factor as the number of molecules in the excited state has increased. The ratio of molecules in the vibrationally excited state (15 um line) to those in the vibrational ground state is proportional to exp(-E/(kT)), where E is the energy of excitation, k the Boltzmann constant and T the temperature. This function tells how the emission rate increases with temperature.
The Planck law is actually very closely linked to the proportionality that I gave in the previous paragraph. The intensity of radiation at a fixed wavelength (or energy or wavenumber) increases with temperature in the same way for a gas as for a black body or a solid surface. The same exponential dependence on temperature is seen in Planck’s law. The difference is that for a black body the increase happens at all wavelengths and, in relative terms, faster at shorter wavelengths. Therefore the peak of the distribution moves and the total emitted energy increases more rapidly than the intensity at a fixed wavelength.
There is much less difference between the radiation from gases and from solids than you seem to think.
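A quick check of that fixed-wavelength claim; 15 um and the two temperatures are illustrative choices, nothing more:

import math

# At a fixed wavelength the Planck intensity grows with temperature almost
# exactly like the Boltzmann factor exp(-E/kT), since exp(E/kT) >> 1 here.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
lam = 15e-6
E = h * c / lam

def planck_factor(T):
    return 1.0 / math.expm1(E / (k * T))   # temperature-dependent part of B_lambda

T1, T2 = 260.0, 280.0
print(planck_factor(T2) / planck_factor(T1))               # ~1.3
print(math.exp(-E / (k * T2)) / math.exp(-E / (k * T1)))   # ~1.3 as well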
I still disagree, but not quite as much after reviewing your last comment. It is true that both planck-type radiation and the degree of excited molecular states are proportional to the same factor: e^-[E/kT] But in Planck this factor leads directly to radiation intensity in Watts/m2 (though only in a delta freq portion of the source) In the other it leads only to the portion of molecules in an excited state; and then there is one further step to estimate the degree of radiation emanating from the excited molecules, which I presume gets into Einstein coefficients. So Planck radiation in total is proportional to T^4 while relaxation radiation is proportional to e^-[E/kT] but further lessened ala Einstein coefficients.
On the other hand there may not be such a massive difference as one (like me) initially thinks – as you say. First, we’re comparing apples with oranges in some respects. If one adds a pile of CO2 molecules, more will be excited and there will be more radiation. But if you just add molecules to a body, Planck radiation won’t increase at all. One depends on temperature, the other depends on the quantity of excited molecules (which is affected by temperature). Also, if the temperature increases (at least in the example I did – 300 K to 350 K), the increase in the ratio of excited molecules is very close to the increase in total Planck radiation. However, I don’t know if this has any real meaning or is just cutesy numerology (in one case the number of excited molecules jumps about 2 percentage points; in the other, total radiation goes up 80 to 400 watts/m2.)
None-the-less, Planck radiation is generated differently (in most cases) than relaxation radiation and so (still) they are not the same. Though as I said before, Planck mathematics can fit pretty well to relaxation and is useful for analyses. How accurate that usefulness is I wonder about but don’t know. The process of CO2 radiation absorption and emission is different from Planck radiation absorption and emission. The greenhouse effect stems from the former. Yet it is the latter that is used to explain atmospheric warming, with multiple flat layers (slabs) of atmosphere and Planck radiation of the sigma(T^4) variety between the slabs. That may (has??) prove to be reasonable – but is it robust? Unassailable?
Rod B,
The temperature dependence of radiation following the Planck law increases with temperature for precisely the same reason that makes the emission from a gas to increase with temperature. The reason is an increase in the occupation of states that can emit at a particular wavelength.
This is usually not discussed explicitly when discussing the Planck law, but the reason is really that.
The only difference is that in gases only a discrete set of excited states is available for excitation, while for a solid or liquid that emits with a continuous spectrum the number of possible excitations is so large that the resulting spectrum is continuous and typically close to the Planck law.
The black body is an idealization that can be approximated by a cavity with a very small opening (pin-hole) compared to its size. The geometry of such a cavity leads, through multiple reflections, to the result that any non-zero emissivity of the surfaces will be upped to an apparent emissivity of 1.0 at the hole.
Pekka, I agree with what you say though one can easily skip over the significant “nuances” between the different radiations if not careful. Even though, as you say, “…Planck law increases with temperature for precisely the same reason that makes the emission from a gas to increase with temperature…” the radiations are different and that difference is relevant to climatology, yet there are many who ignore or deny the differences.
The other interesting can of worms (which I said earlier I do not want to bring up) is whether a gas does or does not radiate à la Planck. But pretend I didn’t mention it!
BTW, I’m still trying to understand where and how one posts comments. I trust this is going in the logical/correct slot….
Pekka,
What I am getting from this thread and the previous one is that it is believed that CO2 absorbing IR does not cause much emission, but rather the energy is conducted away through collisions. CO2 colliding with other air constituents DOES cause emission.
As we are told how powerful backradiation is, does this account for the back radiation correctly??
Depending on air temperature and ground emissions, doesn’t this only happen during heating periods?
In other words, is there a coherent explanation of what happens in the morning going from a cold period of no SW to the afternoon where temps peak from SW absorption and back to the evening with no SW and everything cooling?
This may help with part of your question (but it’s a very, very rough calculation). The mean free path in the atmosphere at 1 bar is ca. 70 nm. The mean Boltzmann velocity is ca. 500 m s-1 (I’m just taking very rough values). These imply a mean lifetime between collisions of about 0.1 – 0.2 nanosecs (which obviously increases with decreasing P as the free path increases). The fluorescence time scale is typically of order 1-10 nanosecs. So, one way to think about this is that collisions happen “faster” than photon emission, i.e. the emission time scale is slower than the thermalization time scale. This will be especially true at higher P, but at low enough P emission will dominate (as Pekka hints above). If the ratio of collision to emission time scales is, say, 1/20, then 5% of the absorbed energy is re-emitted à la Kirchhoff, and 95% goes to collisional energy (i.e. heating). I haven’t had any time to work out the numbers precisely, so this is just a quickie hypothetical, but it should be in the ballpark.
My thinking was OK, but my recollected fluorescence time scale was apparently way too short for CO2 (the 10 nanosec value I gave before is more appropriate to some UV transitions). Ray gave the relevant numbers in his article for a 0.1 atm case: tau (lifetime) of the excited state O(1e-2 sec), tau collision O(1e-7 sec). So the thermalization/emission ratio ought to be roughly 1e5. Clearly thermalization dominates in all but the “thinnest” part of the atmosphere.
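With Ray’s order-of-magnitude numbers the split works out as below, taking the re-emitted fraction as roughly the ratio of the collision time to the radiative lifetime, which is the approximation used in the comment above:

# Fraction of absorbed 15 um photons re-emitted before collisional quenching,
# using the lifetimes quoted for the 0.1 atm case.
tau_radiative = 1e-2    # s, spontaneous-emission lifetime of the excited state
tau_collision = 1e-7    # s, mean time between collisions at ~0.1 atm

frac_reemitted = tau_collision / (tau_collision + tau_radiative)
print(frac_reemitted)   # ~1e-5: essentially everything is thermalized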
“Isn’t the photon’s energy being counted twice? It can’t both result in local warming and stimulate equal IR emission (since that emission would cool the “local” molecule(s) right back down again.)”
Brian H (and also Bryan) –
In a steady state, a layer of atmosphere absorbing IR photons, thermalizing the energy, and then experiencing photon emissions as a result will maintain a steady temperature – it won’t heat, because energy gain is balanced by energy loss. The important question is what happens when the IR photon input rises, so that more photons are absorbed (e.g., because there is more CO2 in an adjacent layer). Photon emission rates are a function of temperature, because almost all molecules emitting photons do so from thermal excitation rather than as a result of a photon they have just absorbed. If the temperature were to remain constant, the extra energy absorbed would not be matched by an increase in photon energy released. For a new steady state to be established, the temperature must rise until the new photon emission rate once more matches the absorption rate.
Fred Moolten
Do the models explicitly subtract the average emitted radiation from the average absorbed radiation to obtain the average thermal energy gained and hence the temperature rise of the volume under consideration?
In addition the models would in the same volume have to account for phase change effects, convection and diffusion(conduction).
Turbulent conditions are very likely as well as surface inhomogeneity.
Add in Earth-spin day/night, thunder and lightning and volcanic activity
They are to be congratulated for even attempting what seems an impossible task.
However a little modesty about the predictive value of the models would be appropriate until a track record of some success is evident.
“must rise”. Well, yes, arithmetically speaking. But that rise is not free. Until your new suitable emission temperature is reached, all the energy for that must come from the incoming radiation. So, assuming an artificial “step function”, there would be a pause in all emissions until that temp is reached. So the “cost” of getting there must be accommodated.
Double counting must be scrupulously avoided! At all “costs”!
;)
Hi Judy – I have published a comment on my weblog
Comment On Raymond T. Pierrehumbert’s Article In Physics Today Titled “Infrared Radiation And Planetary Temperature”
http://pielkeclimatesci.wordpress.com/2011/01/21/comment-on-raymond-t-pierrehumberts-article-in-physics-today-titled-infrared-radiation-and-planetary-temperature/
with the text
Judy Curry comments on the Raymond T. Pierrehumbert Article In Physics Today Titled “Infrared Radiation And Planetary Temperature” in her post
Pierrehumbert on infrared radiation and planetary temperatures [from Climate Clash]
I agree with Judy that this is a very informative and valuable article on the role of CO2 and other greenhouse gases in the Earth’s (and other planet) atmospheres. However, there is one major error in my view in the article.
Ray concludes that
“… increasing CO2 would warm the atmosphere and ultimately result in greater water-vapor content—a now well-understood situation known as water-vapor feedback.”
This significantly overstates our understanding of the water vapor feedback on Earth since phase changes of water are intimately involved. In a world without these feedbacks, but in which evaporation from the surface increases if surface temperature increases from the added CO2, his conclusion would be correct.
However, as summarized by Graeme Stephens in my post
Major Issues With The Realism Of The IPCC Models Reported By Graeme Stephens Of Colorado State University
where he wrote, for example,
“Models contain grave biases in low cloud radiative properties that bring into question the fidelity of feedbacks in models.”
“The presence of drizzle in low clouds is ubiquitous and significant enough to influence the radiative properties of these clouds and must play some role in any feedbacks.”
”….our models have little or no ability to make credible projections about the changing character of rain…”
major uncertainties remain.
The water vapor feedback in the real climate system remains incompletely understood.
thanks roger, i totally agree that the water vapor feedback is incompletely understood (not to mention cloud feedbacks, etc.)
Love those adjectives.
The total spherical energy output of the sun is incompletely absorbed by the Earth.
Does “incompletely” cover the territory, “very badly”, “hardly at all”, “minimally”, etc.?
roger, the link to stephens presentation is broken, do you know if this is still available somewhere? thx
Is there really a difference in opinion or only two different ways of understanding the words “water vapor feedback”. Pierrehumbert may have interpreted them to refer only to the feedback by water that is in gaseous state whereas Pielke includes also the connection to clouds. For the first interpretation the statement that it is well understood is justified, for the latter not.
Roger,
Yes, but it appears there is more wrong with the paper than that. Everyone knows the CERES data shows too much warming so in 2009 Trenberth rejected the directly measured data for his guess of 0.9 W/m^2. It appears Raymond has assumed Trenberth’s guess is correct, which appears to me to be a fine example of circular reasoning. Why not 0.3 W or 0.1 W?
Hi Judy – That link no longer works for some reason. I found his powerpoint slides, however, at http://gewex.org/2009Conf_gewex_oral_presentations/Stephens_G11.pdf
I will update on my weblog also. Thanks!
Roger
Ray Pierrehumbert’s article on infrared radiation and planetary temperature provides a useful and informative perspective on the nature of thermal radiation, and how this relates to the radiative transfer of thermal radiation and the greenhouse effect that is a common characteristic of terrestrial-type planetary atmospheres illuminated by solar radiation.
Raypierre describes some of the basic fundamentals that are important to the radiative transfer of thermal radiation. He notes that (1) the coupled vibrational and rotational states of CO2 have very long radiative lifetimes compared to the time between collisions with other molecules; (2) molecular collisions establish and maintain the local thermodynamic equilibrium distribution (and population) of the vibrational-rotational states from which spontaneous photon emission and photon absorption transitions arise; (3) detailed balancing of energy transfer transitions under LTE conditions, as described by Kirchhoff’s Law, requires that Planck function limited thermal emission balance the absorption of thermal radiation at all wavelengths.
Naturally, because of space limitations, details of the radiative transfer formulation and the radiative structure of the greenhouse effect are necessarily sketchy. For those interested in the details, there is the 500+ page book by Pierrehumbert, as well as a great many other books and articles on radiative transfer and the greenhouse effect.
There are now well over 250 comments on this thread, some perhaps in response to the question raised by Judy whether anyone has learned something, or changed their mind as a result of the discussion here. A glance at the “same old comments” makes me doubt that anyone has actually learned anything new – but learning is a personal experience best left for those to speak for themselves.
I have, however, been particularly impressed by the comments that have been put forth by Fred Moolten (and on earlier threads by Chris Colose). Fred, if I am not mistaken, is a semi-retired Medical Doctor who only recently has taken an interest in understanding the nature of global climate change, and Chris is a soon-to-be graduate student. Both have demonstrated excellent understanding of the basic facts, physics, and issues that define the global climate change problem that we face. I cannot recall any explanation that they have given that is at variance with our current best understanding of the facts as we know them. Would that all those who work as climate scientists had as clear and accurate grasp of the basic working of the climate system, relevant measurements, modeling analyses, etc. as Fred and Chris.
All this is very encouraging since it means that understanding global climate change is not limited to climate science experts who have been studying the problem for decades. Anyone who has the interest, and is willing and able to spend the time and effort to read and research the literature can come away with a good understanding of how the climate system works, what is driving climate change, including also an appreciation of the complexity of the climate system, and limitations of available observational data, that temper the conclusions that can be drawn.
Fred has been very patient in providing informative and well thought out answers to a great many questions that have been posed here. I believe that Fred has stated as much, that trying to explain a problem to someone less knowledgeable is the best way to learn. In that I am in full agreement.
Thirty-five years ago I had no clue at all as to what thermal radiation is about. I had just finished implementing a solar radiation model into the early version of the GISS GCM, and was asked to do the same for thermal radiation. As you well know, computers are totally clueless (but fortunately computers don’t have the arrogant ignorance that is sometimes exhibited by some of the commentators here), so that every addition, multiplication, subtraction, and division needed to describe the physical problem has to be painstakingly laid out step by step by step.
I was then asking, and having to find answers to, many of the same questions that are being asked here. What is an absorption coefficient? Optical depth? And why does it have to depend on pressure, temperature, and absorber amount? What happens if there is overlapping absorption, like between water vapor and CO2? Do we need to worry about scattering by clouds? Is the spectral variation of absorption coefficients important? If averaging of absorption coefficients is bad, what other options are there? Why is the Planck function required to multiply emission, but not absorption? Is there a ‘right’ answer, and how would we know it if we saw it?
It turned out that in the process of explaining thermal radiation in sufficient detail for the GISS computer to understand it, all of these questions became adequately answered. As outlined by Raypierre, invoking Kirchhoff’s Law under LTE conditions, we find that thermal emissivity must be equal to thermal absorptivity. Radiation emerging through a pinhole from an isothermal cavity of temperature T must be Planck radiation B(v,T). If an atmospheric slab of temperature T and optical depth TAUv is inserted in the cavity just beyond the pinhole, the emerging radiation from the pinhole according to Kirchhoff’s Law must still be Planck radiation, which can also be described as consisting of two components: the transmitted radiation, B(v,T) exp(-TAUv); and the emitted component, B(v,T) [ 1 – exp(-TAUv)]; the sum of which is equal to B(v,T).
Thus, each layer of the atmosphere will be characterized by its transmission, exp(-TAUv,n); its absorptivity, [ 1 – exp(-TAUv,n)]; and its emission, B(v,Tn) [ 1 – exp(-TAUv,n)]. Radiative transfer starts with Planck radiation B(v,Tg) being emitted by the ground. The outgoing flux at the top of the first layer will then be the sum: F1top = B(v,Tg) exp(-TAUv,1) + B(v,T1) [ 1 – exp(-TAUv,1)]. The second layer is then added on to obtain F2top = F1top exp(-TAUv,2) + B(v,T2) [ 1 – exp(-TAUv,2)], and so on to the top of the atmosphere.
The above holds for monochromatic radiation. It involves nothing more complicated than exponential extinction (Beer’s Law absorption), specifying the temperature, absorber amount, and absorption coefficient in each layer of the atmosphere, then going through the stack of atmospheric layers and summing up the products of the radiation transmitted through each layer and the radiation emitted by each layer. A tedious task to do by hand, but a rather simple task for the computer.
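For readers who want to see the recursion spelled out, here is a minimal monochromatic sketch in Python. The layer temperatures, optical depths, and the single wavenumber are made-up illustrative values, not GISS model inputs, and only a single direction is followed (no angular integration, no scattering).

```python
import numpy as np

# A minimal monochromatic sketch of the layer-by-layer sum described above.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(nu_cm, T):
    """Planck radiance B(v, T) for wavenumber nu_cm in cm^-1 (per cm^-1, per steradian)."""
    nu = nu_cm * 100.0 * c                         # wavenumber -> frequency in Hz
    B_per_Hz = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return B_per_Hz * 100.0 * c                    # per Hz -> per cm^-1

nu  = 650.0                                        # cm^-1, near the CO2 band (assumed)
Tg  = 288.0                                        # ground temperature, K
T   = [280.0, 270.0, 255.0, 230.0]                 # layer temperatures, bottom to top (assumed)
tau = [0.8, 0.5, 0.3, 0.1]                         # layer optical depths at this wavenumber (assumed)

F = planck(nu, Tg)                                 # radiation leaving the ground
for Tn, tn in zip(T, tau):
    trans = np.exp(-tn)
    F = F * trans + planck(nu, Tn) * (1.0 - trans) # transmitted + emitted by layer n
print(F)                                           # radiance emerging at the top of the stack
```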
The complexity arises when we need to apply the above set of calculations to the entire spectrum. In line-by-line modeling, several million monochromatic calculations need to be performed. This is far too computation intensive for GCM applications. For climate GCM applications, we can regroup the brute force spectral calculations in terms of correlated k-distributions that only require a few dozen pseudo-spectral calculations to achieve nearly the same accuracy as the line-by-line calculations.
A Lacis: Did you ever get around to answering Willis E.’s shot across your bow about Pinatubo?
I appreciate Fred M. too, but frankly I thought you were a classic internet blowhard on the climate change side because of how poorly you handled discussion beyond technical issues — though you had no compunction about expressing yourself at that level.
I have to ask, since you ask it of the other participants here, “Have you learned anything new?”
Andy,
I agree with your points. I’ve learned a lot from Fred, Chris, you, and Roger, except from the posts of Judith and Peter. Even though your viewpoints on climate dynamics may be totally different, debate following the spirit of real science is very helpful to a new generation of scientists and also to the general public. Given the complexity of the climate system, it is natural for scientists to debate with each other, but the history of the development of modern meteorology as shown in the books [ The Atmosphere a Challenge: The Science of Jule Gregory Charney, http://www.amazon.com/Atmosphere-Challenge-Science-Gregory-Charney/dp/1878220039/ref=sr_1_5?s=books&ie=UTF8&qid=1295669247&sr=1-5 ; Meteorology at the Millennium http://www.amazon.com/Meteorology-Millennium-83-International-Geophysics/dp/0125480350/ref=sr_1_1?ie=UTF8&s=books&qid=1295669337&sr=1-1 ] can tell us how climate scientists have been facing the challenge, and I hope readers might be interested in understanding climate change dynamics from a broader perspective by reading these and other books.
Dr. Lacis,
Because I publish a quarterly newsletter for medical doctors, I have to read scores of scientific papers in a field different from climate science. I can assure you I never see in the journals in my field the kind of arrogance displayed by Raymond Pierrehumbert. He titled his paper “Infrared radiation and planetary temperature.” Wrong. The paper is about infrared (and I assume visible) radiation and inferences about planetary temperature. The arrogance displayed in the title alone is enough to be off-putting to any careful scientist.
It is a review paper. It reviews the state of the science and presents nothing new. The title is fully appropriate for a review paper.
A Lacis:
“All this is very encouraging since it means that understanding global climate change is not limited to climate science experts who have been studying the problem for decades. Anyone who has the interest, and is willing and able to spend the time and effort to read and research the literature can come away with a good understanding of how the climate system works, what is driving climate change, including also an appreciation of the complexity of the climate system, and limitations of available observational data, that temper the conclusions that can be drawn.”
Puhleeeeeze, dear Dr., stop this patronizing, elitist, ivory tower, know-it-all, arrogant obfuscation, and deal with the QUESTION OF WHY THE HYPOTHESIS DOESN’T MATCH REALITY. We know what you and other CAGW zealots believe; we know the radiative physics. What we DON’T know is why all this nice physics is not explaining anything that is happening with the temperature. Could it be that there is something beyond “radiative physics” that influences the temperature/climate?
Why would a climate scientist not want to engage with statements like these?
To Dr. Lacis: Thanks for recognizing the contributions of Fred Moolten and Chris Colose to these discussions. I would add Pekka Pirilä to this list. He is infinitely patient, knowledgeable and polite. Recognizing that he is not working in his first language makes his effort even more impressive to me.
I have also greatly enjoyed Vaughan Pratt’s not always patient or polite, but always provocative and entertaining contributions.
I agree.
And there are others, including Judy, who, through their comments and perspective, add real value to this blog.
JAE, local ground level temperatures are mostly set by local insolation, cloud cover, albedo, soil moisture, ET, lapse rate (partly a function of local relative humidity), and the extent of local convection (which can depend on the above, plus local differences in the land surface and cover that will set up local convection cells). A classic very simple case is the “sea breeze” problem, given in many elementary texts. As the local land heats up, it sets up a convection cell that draws in cool air off the ocean, lowering adjacent land T. To a rough sort of approximation the Indian monsoon works the same way. The greenhouse effect of CO2 and/or H2O doesn’t set local temperatures. You’re simply looking at the wrong mechanism. Of course there is something beyond radiative physics that sets local temperatures – just ask your local meteorologist. And one of the above posters foamed at the mouth over Ray’s lack of discussion of atmospheric convection. Don’t worry, Ray understands that and its importance very well.
Dr. Curry,
I don’t know if you have noticed, but the discussion above between Fred, Pekka and cba this morning is very interesting.
At January 22, 2011 at 3:45 am Pekka quotes from a 2009 paper from Trenberth and co-authors:
“As the value 0.9 was considered to be much more accurate than the direct analysis of the CERES data they made adjustments to the analysis in order to reach consistency. ”
And it appears Raymond has used this figure in his 2010 paper rather than the direct CERES measurements. If so, isn’t Raymond’s paper built entirely on circular reasoning?
thx, i’ll take a look in detail this aft
Ron,
These papers and presentations involve many separate issues. Some of the papers are trying to determine how much warming is taking place, but neither Pierrehumbert nor the various papers by Trenberth, Fasullo and, in some papers, also Kiehl are aiming at that. They try only to explain what is going on and present some quantitative numbers on that. They pick the number from the approach that they think gives the best estimate for that number, without any implication that this would contain something new.
These papers are descriptive, not new original research in the sense that it would aim at a more accurate estimate of the warming trend. I think these papers are very useful and they should not be controversial at all, when their goals are understood properly.
While the Trenberth–Fasullo papers state what their goal is and how they have reached their results, I think that they are not at all as clear as they could be. This is the problem with these papers. Had they been clearer, much of the present confusion would have been avoided.
Pekka,
Usually when measurement data is found to be wrong, researchers are able to explain why – what went wrong with the instrument and how they were able to determine the extent of the bias, etc. It does not appear Trenberth tried to do that at all. It appears he just said “Well, we know that’s not right. Let’s replace it with 0.9 because that fits with our theory.” I don’t buy it. That is not science.
Ron,
It is known that the CERES analysis of the global energy fluxes is not accurate. Judging from its title, that is what a 2006 paper by Wielicki et al. is about. (It is a conference paper and I have seen only the abstract and what Fasullo and Trenberth have picked from the paper.) Thus it is indeed known that the result given as 6.4 W/m^2 is not accurate, but has an error that may be as large as its deviation from zero.
Whether one trusts Hansen or not, it is rather easy to estimate that the net flux cannot be much more than 1 W/m^2 without heating the earth much more rapidly than it has heated. Anything close to 6.4 is really out of the question. Thus this number is really only an indication of how close to reality the CERES analysis can be. The number 0.9 is picked as one value with some justification. Another choice would have been to force it to zero and say that we know that this is not correct, but it lets us get the rest of the numbers reasonable. Whether it is set to 0.9 or to 0.0, it is anyway an input constraint, not a result. This is a central point that Trenberth (et al) makes in all these papers, but not as clearly as he should. In particular the 2009 paper where he is the only author is far from clear enough.
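A back-of-envelope version of the “cannot be much more than 1 W/m^2” argument can be sketched as follows; the mixed-layer depth and the assumption that all the heat goes into the upper ocean are illustrative, not values taken from the papers being discussed:

```python
# Sketch only: how fast would the upper ocean warm under a given TOA imbalance?
A_earth = 5.1e14          # m^2, Earth's surface area
A_ocean = 3.6e14          # m^2, ocean area
depth   = 700.0           # m, assumed depth of ocean absorbing the heat
rho, cp = 1025.0, 3990.0  # seawater density (kg/m^3) and specific heat (J/(kg K))
seconds_per_year = 3.15e7

heat_capacity = A_ocean * depth * rho * cp       # J/K
for imbalance in (6.4, 0.9):                     # W/m^2: raw CERES value vs. imposed value
    dT_per_year = imbalance * A_earth * seconds_per_year / heat_capacity
    print(f"{imbalance} W/m^2 -> ~{dT_per_year:.3f} K/yr in the top {depth:.0f} m")
# ~0.10 K/yr for 6.4 W/m^2 (implausibly fast) versus ~0.014 K/yr for 0.9 W/m^2
```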
Sorry for the bad sentences (starting with one formulation and ending with something different) and other linguistic errors.
Whether it is set to 0.9 or to 0.0, it is anyway an input constraint, not a result. This is a central point that Trenberth (et al) makes in all these papers, but not as clearly as he should. In particular the 2009 paper where he is the only author is far from clear enough.
Actually Trenberth, K. E., and J. T. Fasullo, 2010: Tracking Earth’s energy. Science, 328, 316-317 tells quite a different story.
The human influence on climate, mostly by changing the composition of the atmosphere, must influence energy flows in the climate system (4). Increasing concentrations of carbon dioxide (CO2) (see the figure) and other greenhouse gases have led to a post-2000 imbalance at the top of the atmosphere of 0.9 +/-0.5 W m–2 (5); it is this imbalance that produces “global warming.” p. 316, my emphasis.
Reference (5) is to K. E. Trenberth, J. T. Fasullo, J. Kiehl, Bull. Am. Meteorol. Soc. 90, 311 (2009), which as we know tells quite a different story:
Thus, the net TOA imbalance is reduced to an acceptable but imposed 0.9 W m−2 (about 0.5 PW)
I think you are being far too generous to both authors.
Jim Owen | January 22, 2011 at 12:24 pm |
Yep -cognitive dissonance, JCH. Rewriting history is one of the symptoms.
The irony is that the Westside highway did flood and has since been rebuilt!
“This was probably one of the few elevated roads that could be blocked due to flooding during a rainstorm! ”
http://www.nycroads.com/roads/west-side/
Hunter said:
“Yet how many posts here assume that AGW is basic physics and that the climate is deterministic.”
___
Well, certainly if AGW is occurring it is basic physics and quite deterministic. Some might incorrectly assume that the climate, as a chaotic system, is not deterministic, but it is quite so; being chaotic, however, it is not fully predictable, and it will have deterministic but unpredictable “tipping points” where it suddenly shifts to a new point of equilibrium.
R. Gates,
Thank you for that, but perhaps I did not make my point as clearly as I wished.
When I say the climate is not deterministic, I am using that term in reference to the atmosphere/climate system in the sense that it is not linear, all of the variables are not accounted for or even understood, the sample size of relatively high quality data is very small, the margins of error are large, and the functions of the atmosphere/climate system are not well understood.
Australia is a great example:
Climate scientists proved disastrously unhelpful in actually preparing for the end of the cyclical drought.
Pielke Jr. has highlighted a study that relates earthquake losses to corruption in societies. Another way of looking at corruption that leaves buildings poorly built in earthquake zones is that the advice policy makers choose to follow or ignore is based on a faulty view of risk.
Climate science pushes global climate disruption caused by CO2 as the main issue of climate today. This is at the expense of studying natural history, of listening to civil engineers, of considering that perhaps they do not have the picture in full or very accurately.
Instead there is the push of CO2.
Not adaptation. Not reflection on the scale of the impacts of CO2. Not an admission that not one natural disaster has been linked to AGW. Just reduce CO2, no matter the price.
That is not really better than letting building inspectors pass concrete buildings with little re-bar and weak foundations in Haiti.
Rebar? In Haiti? Wow, that would be an innovation. It certainly wasn’t used when I was there.
Jim,
I did not want to say ‘no rebar’, and have someone post a photo of a broken Haitian building with some rebar and dismiss my comment on those grounds.
Other earthquakes worth studying are the pre-WWII earthquakes in Japan, where building codes permitted poor quality construction.
If I recall, one thing that made Frank Lloyd Wright famous was that the hotel he designed for Tokyo survived the earthquake due to his insistence on high quality concrete construction.
I put New Orleans and Katrina in as an example of how corruption leads to bad policies which leads to disaster, by the way.
hunter –
I’ll apologize for being late to the party here.
Just wanted to say that consistent with many Third world countries, most construction in Haiti is either concrete block or stonemasonry. Except for the large percentage of people who live in stick and wattle huts. Rebar is certainly used, but not as a standard construction technique – only for “special” buildings. Which, in Haiti, didn’t include even government buildings.
I certainly agree about New Orleans/Katrina.
The biggest issue regarding CO2 as a greenhouse gas, and I’m sure no one is denying that CO2 is a greenhouse gas, seems to be figuring out just how much effect CO2, and all greenhouse gases together, are having. Whether we consider thin opaque layers or some other strategy, the real question is “what is the total carbon dioxide contribution, in temperature terms, and therefore the human contribution?”
We know the Earth’s climate is “just right” for life, and we know much about the other planets’ conditions and temperatures. I hope I’m not being a bit blunt in this highly technical conversation, but it seems we have a perfect example of ‘what if’ very close to us. It’s our Moon.
If we answer the simple question about why the Earth is warmer than the Moon, it should help put a lot in perspective. I’ve not heard any discussions about the actual temperature benefit for Earth’s Climate from Earth’s internal heat. Our crust is only 0.075% of Earth’s radius, with molten rock below. In the 2 hottest places on earth, the crust is only half this value. So, internal heat conducted through the crust is warming us some. There is a thermal flywheel of Nitrogen and Oxygen, which if on the Moon, would hold down ‘daytime’ temperature, greatly reducing radiation losses (temp^4), and increase the average temperature.
The remainder is the insulating value of greenhouse gases. It’s my belief that once we (some talented scientist) puts a number on internal heating and flywheel warming, the remainder will be greenhouse gases, of which there are many, but only a few with significance. At this time, we can put real numbers on the actual effect, be it 10 deg or 0.001 degree.
Harold –
Some time ago I asked Andy Lacis about the internal heat engine that we live so close to (the Earth’s core) as well as about the heat that accompanies all that CO2 production that’s the central point of contention here. I got no answers. Nor do I expect any. Recently I suggested that the new Indian paper re: cosmic ray effects might be worthy of consideration (especially considering the recent success of the Cloud experiment) – and was dismissed out of hand. Expecting answers to “inconvenient questions” is not something I do much – at least not wrt GW/CC. It’s why I long ago coined the phrase ” The Church of AGW”. :-)
I wonder if anyone really believes those things are factors considered by either the models or the climatologists? I wonder if anyone else believes they should be?
Jim,
Thanks for the polite response. I know it’s very easy for a group of enthusiasts to lead themselves off on a tangent, and lose perspective of “the big picture”.
A burning question of mine, is that since the identification of the “Ozone Hole” was coincidental with development of measurement techniques, that perhaps the Ozone Hole has come and gone over the eons. NASA animations of the Ozone Hole show a ring of much higher than normal concentration (450-500 Dobson units) around the hole, which makes me think about ‘seasonal displacement’, rather than ‘seasonal destruction’. But now that this is ‘settled science’, perhaps “Peer Review” no longer applies? In years past, people blamed themselves for various natural disasters, “the gods are angry”, and the rational thought at the time was sacrifice, as it is today.
Harold –
A burning question of mine, is that since the identification of the “Ozone Hole” was coincidental with development of measurement techniques, that perhaps the Ozone Hole has come and gone over the eons.
I have had the same question cross my mind more than once. But the truth is that nobody knows. Actual ozone data measurement started in the mid-50’s and showed no thinning of the ozone over the Antarctic. I won’t pursue the question of observation accuracy or technique – nor of instrument accuracy or reliability back then. The Hole was “discovered” in the early 80’s , first by ground based measurement of the ozone column, then later confirmed by examination of the Nimbus 4 and Nimbus 7 satellite data. Since then it has waned and waxed with little regard for theory or the intent or results of the Montreal Protocol.
And recently there was something that crossed my path that indicated some question about the actual role of CFC’s wrt the Ozone Hole. But I didn’t save it and don’t know how reliable it was.
But don’t despair. At various times an Earth-centric Universe, phlogiston and Newtonian mechanics were also “settled science”. None of them survived the “cut”. And today’s “settled science” will become tomorrow’s historical oddity. We just may not be here to see it happen. After all, it took 2500 years for Democritus’ atomos to be recognized and transformed into the Bohr atom. Which, in turn, has been replaced several times in less than 100 years. :-)
Jim Owen,
Keeping track of the energy is key to understanding what is happening with the climate system. There are indeed a half-dozen or so different sources of energy that contribute to the energy balance of Earth but are not being included in climate model simulations of terrestrial energy balance. These include the internal geothermal energy (volcanoes, geysers, earthquakes, etc), tidal friction, meteoric accretion, nuclear energy production, and the heat released by the burning of fossil fuel.
All of these are significant amounts of energy (in the human perspective). For example, there is an estimated 1200 tons/day of meteoric dust impinging on the Earth at velocities of roughly 40 km/sec. This amounts to 10^15 J/day (or the equivalent of 250 kilotons of TNT/day). But since the surface area of the Earth is 5×10^14 m2, the total meteoric energy input amounts only to 25×10^-6 W/m2 compared to the global mean 240 W/m2 solar energy. Similarly, the heat energy that is generated by tidal friction is about 0.0075 W/m2. The internal geothermal energy is the largest of these minor contributors, with the global mean energy amounting to about 0.05 W/m2 (Allen, Astrophysical Quantities).
The world production of coal is about 7 billion tons/year. With an energy equivalent of 10^7 J/kg, this amounts to about 4×10^-3 W/m2. Roughly similar amounts of energy are contributed by the burning of oil and gas. Nuclear energy amounts to something of order 10^-3 W/m2. Thus the direct heat input to the climate system due to the burning of fossil fuels is quite negligible. But not the greenhouse effect due to the CO2 that is being added to the atmosphere.
While we are on the topic of 7 billion tons of coal/year, it is instructive to note that a cubic meter of water weighs a ton, and that the specific gravity of coal is about 1.2. This means that each year humans are digging nearly 6 cubic km of coal out of the ground to be burned. (Oil and gas production results in a similar amount of carbon, but because of the hydrogen content, oil and gas produce roughly twice the energy per kg of carbon.)
For additional perspective, the atmosphere contains about 390 ppm of CO2 by volume. The weight of the atmosphere is about 1 kg/cm2. Thus the total mass of the atmosphere is about 5×10^15 tons, so that 1 ppm of CO2 (44/29×10^-6) comes to 7.6 billion tons. With a specific gravity of 1.5, the effort to extract 1 ppm of CO2 from the atmosphere (as part of some geoengineering effort) would require the accumulation and sequestering of about 5 cubic km worth of dry ice equivalent of CO2 – a not particularly attractive prospect.
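The order-of-magnitude arithmetic in the last few paragraphs is easy to reproduce; here is a quick sketch using the round numbers quoted above (a sketch, not a careful accounting):

```python
# Quick reproduction of the order-of-magnitude arithmetic quoted above.
A_earth = 5.0e14                       # m^2, Earth's surface area
secs_per_day, secs_per_year = 8.64e4, 3.15e7

# Meteoric dust: 1200 tons/day arriving at ~40 km/s
dust_power = 0.5 * 1.2e6 * (4.0e4)**2 / secs_per_day
print(dust_power / A_earth)            # ~2.3e-5 W/m^2

# Coal: 7 billion tons/year at ~1e7 J/kg
coal_power = 7e12 * 1e7 / secs_per_year
print(coal_power / A_earth)            # ~4e-3 W/m^2

# 1 ppm of CO2 by volume, as a mass and as a volume of dry ice
atm_mass_tons = 5e15
ppm_CO2_tons  = atm_mass_tons * 1e-6 * 44.0 / 29.0
print(ppm_CO2_tons)                    # ~7.6e9 tons
print(ppm_CO2_tons / 1.5 / 1e9)        # ~5 km^3 of dry-ice-equivalent CO2 (1 m^3 = 1.5 tons)
```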
The above puts it all in perspective. Globally, compared to solar energy, all of the other energy sources are negligibly small – i.e., much smaller than the uncertainty in the solar energy absorbed by Earth. Locally, there are places where tidal and geothermal energy is sufficiently concentrated to be a viable replacement for fossil fuel energy sources. Likewise, solar, wind, and nuclear energy sources are available, but are not being fully utilized.
All this points to coal as being the most problematic greenhouse contributor of atmospheric CO2, and the least efficient in energy production per kg of carbon burned – and thus the obvious candidate to be phased out as quickly as practicable, if we are to be serious in averting the looming dangers of global warming.
This picture, scanned from the 1979 book “Renewable Energy” by Sorensen, gives a nice overall view of the energy flows of the earth. Numbers are given in TW. In comparison, the present energy use of human societies is about 15 TW.
The data is old and some of it is outdated, but in general they should give the correct picture.
I will say “Thank you” for the answer. It’s the best answer I’ve gotten in 10 years and I appreciate that. I see some problems with it, but I won’t argue those points at the moment. I’m sure that will come later. :-)
A Lacis | January 23, 2011 at 3:34 am | Reply
…
if we are to be serious in averting the looming dangers of global warming.
Which begs the question, of course. Are there dangers? Are they looming? Is global warming occurring as a result of CO2 changes?
Your answer to all the above is clearly “yes!”. But none are proven, and there is much evidence against each and all of them.
A Lacis, Pekka Pirilä,
I appreciate the knowledgeable response and figures. Without having calculated the conductive transmission of internal heat, I thought it was much bigger. I went back and calculated the value using the thermal conductivity of limestone, and the simple math gives exactly 0.05 W/m2. At Death Valley, this rises to only 0.1 W/m2.
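For reference, that conduction estimate is just Fourier’s law, q = k dT/dz. A minimal sketch with illustrative values (not necessarily the conductivity and gradient Harold used):

```python
# Fourier's law sketch for the conducted geothermal flux, q = k * dT/dz.
# Conductivity and gradient are illustrative values, not Harold's exact inputs.
k_limestone = 2.0              # W/(m K), within the range usually quoted for limestone
gradient    = 25.0 / 1000.0    # K/m, a typical continental geothermal gradient

print(k_limestone * gradient)  # ~0.05 W/m^2
```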
How about the flywheel effect of our atmosphere? If on the Moon, etc. (my previous post). The only way I can imagine to consider this, is to consider only non-greenhouse gas, 21% O2, 79% N2 at our pressure.
Surely the surface of the Moon is very highly insulating, and heats up very quickly. An atmosphere with convective cooling of the surface would drastically reduce daytime re-radiation, and the warm atmosphere would greatly warm the night-time surface. Thoughts?
I was aware of the work on measuring ozone in the very early days, and heard about the CO2 concerns (which my Dad was skeptical about) in the 70’s and 80’s. My father (RIP) was Walter R. Seelig. He was responsible, as Project Manager, for mapping the Antarctic; he was on the first flight (2 military transports) which flew over the South Pole; and he has 2 mountains named for him, Mt. Seelig and Seelig Peak, both in the Antarctic. So, while I’m proud of him, the point here is that he was inside the NSF, went to the Antarctic many times, and was Scientific Liaison with the captains of the Eltanin and other scientific cruises; so I lived through countless slide shows, heard all about life at McMurdo, saw stuff from Scott’s Hut and photos of penguins, etc., etc., and I heard a lot about the programs going on, particularly in the Antarctic.
I studied the “Operations Handbook – Ozone Observations with a Dobson Spectrophotometer” by W. D. Komhyr (prepared for the World Meteorological Organization Global Ozone Research and Monitoring Project, June 1980), and at that time making a measurement and expecting it to be accurate to even a few parts per 10 million was not usual. So while most believe that CFCs, and not solar blasts (the N&A Lights), caused the Ozone Hole, the results of the Montreal Protocol were not overly damaging to the economy, and whether the science is right or not, the downside is not too bad. Sulfur and acid rain? Absolutely. Tetra-ethyl lead? Asbestos? PCBs? VCMs? Thalidomide? Nicotine? Did whales sunburn in the past? Did people? We’ve come a long way. Recently, the jury is back in regarding DDT and eagles.
The cost of a mistake in Energy is extreme.
Harold –
If you can find a copy of Aaron Wildavsky’s book “But Is It True?” you might find it interesting. It’s a research report wrt the truth or falsehood of many environmental issues.
Jim,
I have not read Aaron Wildavsky’s book and have just checked what Wikipedia says about him. Judging from that description he does not like the precautionary principle, or at least the way it is used in practice. To state my own prejudice: I do like the principle, but I do not like the way it is used. It is possible that Wildavsky would agree, but it is also possible that he does not like even the principle itself.
Why do I write this comment? It is to say that I consider these issues the most central and essential problem in the whole question of what we should do about climate change. They are in my mind so difficult that wise decisions cannot be reached in an informed way without a deep discussion of these issues, one that accepts opposing points of view and spends a lot of effort in trying to reach wise conclusions.
Presently few people are willing to go deeper into these issues. Rather, they make up their minds based on their general political attitudes, usually following the decisions other people with similar general attitudes have made before. They are for rapid action if they believe more generally that free, uncontrolled development based on market forces is creating more and more problems. They are against such actions if they have in other connections learned to trust that free markets with minimal regulation are best for us. When they have made up their minds, they try to use the uncertainties in their favor. Either they may say that the precautionary principle is easy to apply and we should act promptly, or they say that we do not know whether this is serious at all and we should postpone all action.
The issue is not that simple and it may indeed be very crucial. While I am not at all certain of the outcome, I am convinced that anthropogenic influences are presently so strong that dangerous consequences cannot simply be ruled out as being at a level that human influence cannot reach.
This means that I do not want to rule out the precautionary principle. If this is accepted, we must proceed to think in more detail. Indiscriminate application of the precautionary principle often leads to stupid decisions, which are unlikely to help the issue considered and are likely to cause damage elsewhere. This appears to be a point made by Aaron Wildavsky. The short references to his thoughts presented in the Wikipedia article include some very good points. But my own conclusion is still that this is not a sufficient basis for ruling out the precautionary principle. It only tells us that we must be very discriminating in applying it and that we should spend much more thought on finding out what it leads to.
Here’s the most relevant application of the PP:
In the current context, “fallibility” applies to making errors about the impact of CO2, in particular human output thereof, and about the consequences of major arbitrary changes thereto. ‘Man’s fallibility’ when given access to unrestrained government power is massively documented by history, both recent and ancient.
And the consequences of choking off CO2 production in any meaningful way are not in doubt; they are brutally negative for the vast majority of the world’s population. It is notable that this is a “feature, not a bug”, for many of the strongest proponents of CO2 controls and cutbacks.
So the PP, in any sane view, militates against empowering those who would willingly, even eagerly, cull the planet with regulatory and economic suppression of emissions.
Pekka –
I’m personally antagonistic to the PP because I see it as fear based. And also because I’ve seen it used indiscriminately to “prevent” actions or occurrences that are either extremely unlikely or, conversely, are normal hazards of living as a human being.
As Brian H pointed out, the application of the PP to “climate policy” would have disastrous effects on much of the world’s population. China and India seem to understand this. Witness the attempts by both to upgrade their technology and infrastructure to provide better survivability for their people. China, for example, is building massive power generation capability, including wind, solar, nuclear and coal. And India is following suit. There’s a reason why Copenhagen and Cancun failed – the probability of success was somewhat less than that for the survival of a snowball in Hell. This, of course, is 20/20 hindsight on my part.
Anyway, back to the lack of necessity for the PP in this context: if you can find a copy of Matthew Kahn’s 2010 book “Climatopolis” you might find it interesting. He’s an economics professor, a warmist and a believer that the human race will survive “climate change” very well.
Jim,
One of the common problems with PP is exactly that its proponents typically select one risk, claim that PP should be applied specifically to that, and neglect its application to the actions proposed. This is one thing that I had in mind when I wrote that I do not like the ways PP has been used and that it should be used much more discriminatingly.
On the other hand PP is essentially synonymous with risk aversion, which is accepted generally as a correct guideline in many fields, most concretely in investing. Few people argue that risk aversion is not the correct way of considering risks and risk management. The problem is that PP is often taken up as an argument by people who cannot, or are not willing to, consider risks quantitatively. It is used to justify almost anything to mitigate the declared risk, even taking larger risks of other kinds.
A.Lacis wrote
Thus, each layer of the atmosphere will be characterized by its transmission, exp(-TAUv,n); its absorptivity, [ 1 – exp(-TAUv,n)]; and its emission, B(v,Tn) [ 1 – exp(-TAUv,n)]. Radiative transfer starts with Planck radiation B(v,Tg) being emitted by the ground. The outgoing flux at the top of the first layer will then be the sum: F1top = B(v,Tg) exp(-TAUv,1) + B(v,T1) [ 1 – exp(-TAUv,1)]. The second layer is then added on to obtain F2top = F1top exp(-TAUv,2) + B(v,T2) [ 1 – exp(-TAUv,2)], and so on to the top of the atmosphere.
This model also answers the question discussed above of how absorption compares to emission in any given layer n.
They are exactly equal.
The equation above says that exiting power = transmitted power + emitted power
But as absorbed power = entering power – transmitted power and the equilibrium condition says exiting power = entering power it follows that
emitted power = absorbed power for every n as long as the layer is considered in equilibrium (constant T).
Tomas,
A. Lacis describes what happens to each wavelength separately. The energy need not be conserved for each wavelength but only when all wavelengths and also convection, latent heat and conduction are taken into account.
For these reasons the radiative fluxes do indeed not conserve energy precisely.
Pekka
The energy need not be conserved for each wavelength but only when all wavelengths and also convection, latent heat and conduction are taken into account.
This is correct, and I didn’t say anything different.
I didn’t even use the conservation of energy, just the fact that there was equilibrium (LTE and constant T), which implies that the distribution of the quantum states of CO2 is constant. From that it follows that entering power (for v) = exiting power (for v).
As the time scales for collisional processes are 7 orders of magnitude smaller than those of convection and conduction processes, from the point of view of radiative equilibrium, convection and conduction can be neglected for all but the most violent processes.
This is, btw, one of the reasons why the radiative processes are not really interesting for me: the true dynamics of the system happen at much, much longer time scales, where convection, conduction and latent heat indeed play the fundamental role.
Tomas,
I do not really understand what the point of your messages is. One possible problem is in the assumption of constant T. You may use it in a way that is not correct. If the temperatures of successive layers differ, one must be very careful in using the assumption of constant T even for a single thin layer.
It’s my experience that people with conservative tendencies tend to be cautious regarding real threats, and do indeed take a broader view tempered by rational thought. Caution is a valuable strategy for survival. My perspective regarding controlling CO2 to mitigate a considered threat is that the social commentary is at present a highly emotionally promoted strategy… the ‘forget rational thought, it’s an emergency’ tactic… What about polar bears, etc.?
Due to a ban on hunting Polar Bears, the population has skyrocketed over the past ~50 years. There are no reports I’ve seen showing any change in this trend. Of course we can always find dead examples of any species which died of starvation. But I drift from my point.
I admire the rational, data based discussions here. Wildavsky’s book is on Amazon, but it’s a bit steep.
I’ve heard numerous claims that Venus’ unusually high temperature is from runaway greenhouse, but absolute silence regarding Mars. Isn’t Mars’ temperature ‘about right’ for its distance from the Sun? In a general sense, Mars’ level of CO2 is equivalent (molecules/m2) to 55,000 ppm of CO2 at Earth’s temperature and pressure. Yet with a low overall atmospheric pressure (~5% of Earth’s), the “Thermal Flywheel” on Mars is lower than Earth’s, and the albedo of Mars is 25% lower than Earth’s. Isn’t it a fair comparison to use Mars’ relatively greater level of CO2 to rationally lead one to believe that perhaps CO2 is a minor player here on Earth? Doesn’t this show Earth’s warming is something else?
James Barrante, in “Global Warming for Dim Wits” (an unfortunate title, in my opinion), fairly well shows that patterns of global temperature and CO2 put cause and effect in the proper perspective…
The geologic record does show higher temperatures cause higher levels of CO2. Temperature goes up, delay, CO2 goes up, and Temperature goes down, then, CO2 goes down. Has Dr. Barrante missed some big point?
http://www.tech-know.eu/uploads/Greenhouse_Effect_on_the_Moon.pdf
Harold,
I don’t spend much time worrying about the health and well being of polar bears. (The best of luck to polar bears.) I have read where changes in sea ice have made life more difficult for some species of penguins, better for others. A noteworthy observation and point well taken, but not something to cause me to write my Congressman about (yet) , as this is not really part of my direct responsibility or research to worry about.
On the other hand, comparing the greenhouse effects on Mars and Venus is quite relevant to the study of climate of Earth and the mechanics of the terrestrial greenhouse effect. Since the greenhouse effect is primarily a radiative effect, the same radiative modeling must be applicable to Mars, Earth, and Venus, and be able to explain their different temperatures.
The relevant Mars parameters (from Wikipedia) are: Bond albedo = 0.25, and mean Sun–Mars distance = 1.523 AU. This means that the solar radiation absorbed by Mars is (1 – 0.25) x 1367/4 W/m2 /(1.523)^2 = 110.5 W/m2, which corresponds to 210.1 K as the effective equilibrium black body radiating temperature, i.e., 110.5 W/m2 = 5.67×10^-8 x (210.1 K)^4.
The other relevant data are: Mars atmosphere is 95% CO2, mean surface pressure = 0.63 kPa (0.62% Earth’s), and 0.376 Earth’s gravity. This gives Mars’ CO2 per unit area as 0.0062 x 0.95 /0.376 = 0.0157, or about 40 times greater than the 0.000390 value of Earth.
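The two numbers above (the effective temperature and the relative CO2 column) are easy to check; here is a short sketch using the parameter values as quoted:

```python
# Check of the two numbers above, using the parameter values as quoted.
sigma = 5.67e-8                       # W m^-2 K^-4
S0    = 1367.0                        # W/m^2, solar constant at 1 AU
albedo, dist_AU = 0.25, 1.523         # Mars Bond albedo and mean distance

absorbed = (1 - albedo) * S0 / 4 / dist_AU**2
T_eff    = (absorbed / sigma) ** 0.25
print(absorbed, T_eff)                # ~110.5 W/m^2 and ~210 K

# CO2 per unit area relative to Earth: pressure fraction x CO2 fraction / gravity fraction
mars_column  = 0.0062 * 0.95 / 0.376
earth_column = 0.000390
print(mars_column, mars_column / earth_column)   # ~0.0157, about 40x Earth's value
```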
Yet, the greenhouse effect that we calculate for Mars is only about 5 K, compared to 33 K for Earth (and about 500 K for Venus). The reason for the big difference is the low pressure on Mars (equivalent to 35 km altitude on Earth). The pressure broadening line width of CO2 absorption lines is directly proportional to the atmospheric pressure. The CO2 absorption line strength (spectral area) is basically the same on Mars as it is on Earth. At the low Martian air pressure, the spectral absorption is piled up to be more than a 100 times stronger at the absorption line centers on Mars compared to Earth (and proportionately weaker in the line wing regions) making the spectrally integrated absorbing ability of CO2 much less efficient on Mars compared to Earth.
On Venus, the atmospheric pressure is about 100 times greater than on Earth. This has the effect of spreading the spectral absorption much more evenly across the spectrum, making CO2 absorption (and the greenhouse effect) that much more effective on Venus than on Earth.
There is one additional fact about the greenhouse effect on Earth that is different from Mars and Venus. Earth has a strong water vapor (and cloud) feedback effect that acts to magnify the CO2 greenhouse effect. Thus, of the total 33 K terrestrial greenhouse effect, CO2 accounts for only 20% of the effect, with water vapor accounting for 50%, and clouds accounting for 25% of the terrestrial greenhouse effect (the other 5% comes from methane, nitrous oxide, ozone, and chlorofluorocarbon gases).
If it weren’t for CO2, CH4, N2O, O3, and CFCs (the non-condensing greenhouse gases), which provide the necessary support temperature for water vapor and clouds to remain in the atmosphere, the terrestrial greenhouse effect would collapse, and the Earth would plunge into an icebound state. As feedback effects, water vapor and clouds provide a strong magnification of the CO2 greenhouse warming. That is why it is important to control the amount of CO2 in the atmosphere because CO2 is the principal controlling factor of the terrestrial greenhouse effect.
There is an additional factor to note about the Martian surface temperature. It exhibits large diurnal and seasonal amplitude, ranging from as low as -90 C to as high as 30C. This is because the Martian atmosphere is so thin, and because the Martian soil has little heat capacity. The ocean on Earth has a very large heat capacity, moderating the diurnal and seasonal temperature range, and requiring decades to centuries for the full effects of global warming from increasing CO2 to materialize.
I mostly agree with what you’re saying on Mars. With 40 times the CO2, one sees the difference when the total pressure doesn’t broaden the linewidths; that leads to less blocking. Even on Earth, most of those strong lines have extremely short optical paths, so tall narrow lines don’t do much. Another factor is that there is some altitude difference between lowlands and highlands that appears to be enough to affect surface pressure. Also, there are variations in albedo that are significant. I recall seeing a NASA albedo map that shows this in detail, and using a single number for albedo may not yield that great an answer. Also, I seem to recall some differences in reported mean T that could attribute as much as almost 10 deg C to the gray body.
I think that CO2 on Earth is just under 30 W/m^2 while the total is near 110 W/m^2 for GHGs (my model), but the whole blocking including cloud cover is around 155 W/m^2.
I do not agree that there is a strong H2O/cloud feedback effect. In fact, the cloud feedback, I’m pretty sure, is quite negative. The H2O vapor positive feedback is very limited.
cba:
As you say, of the roughly 150 W/m2 of the terrestrial greenhouse effect, CO2 accounts for about 30 W/m2. Water vapor accounts for about 75 W/m2, clouds for 37.5 W/m2, and the minor GHGs for the remaining 7.5 W/m2. These numbers are for the current total atmosphere (the impact of cloud albedo on SW radiation produces the negative part of cloud feedback).
For small climate perturbations relative to current climate (such as doubled CO2), the different feedbacks are not in linear proportion to the fractional attribution listed above. In particular, the cloud response appears to be much more strongly saturated than the water vapor response. I think the net cloud feedback is still positive, but only marginally so.
A. Lacis,
On average, each m^2 of total cloud cover will block somewhere around 40 W/m^2. My own tinkering yielded slightly lower numbers than you present, but I’m limited to 65 um on the long end and there’s just a bit more below that. Total GHG for mine was around 110 W/m^2, with CO2 at almost 29 and H2O at almost 70. I seem to recall a Hansen paper claiming 120 total for GHGs. I’m not sure if it included the 7.5 non-GHG contribution; mine doesn’t.
For overall blocking, I like the simple-averages approach. Assuming 288.2 K (1976 US Standard Atmosphere surface mean) and 1.0 emissivity for deep IR, one calculates 391.16 W/m^2 via Stefan’s law. Taking surface-averaged TSI at TOA corrected for albedo reflection, we get 238 W/m^2 being absorbed for an albedo of 0.3 and TSI (new TIM value) being 1360.8/4 W/m^2. For practical purposes, we can say 391 is emitted and all but 238 W/m^2 is being blocked for the real Earth average, and that amounts to 153 W/m^2 (or, in essence, 154 W/m^2 for the earlier TSI value). This is for an earlier point in time when an average balance was assumed.
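That global-average “blocking” arithmetic, as a few lines of Python (inputs as quoted in the comment above):

```python
# The same global-average arithmetic, with the inputs quoted above.
sigma  = 5.67e-8
T_surf = 288.2                        # K, 1976 US Standard Atmosphere surface mean
TSI    = 1360.8                       # W/m^2, TIM value
albedo = 0.30

surface_emission = sigma * T_surf**4              # ~391 W/m^2
absorbed_solar   = (1 - albedo) * TSI / 4.0       # ~238 W/m^2
print(surface_emission, absorbed_solar, surface_emission - absorbed_solar)   # ~153 W/m^2 "blocked"
```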
A breakdown similar to Trenberth’s KT97 reveals a serious error in their breakdown of albedo fractions. Using reasonable values for clouds, oceans and land provides their original breakdown of 0.08 for land and 0.22 for cloud/atmospheric contributions. They used a 62% total cloud cover for their model there. They didn’t take that into account, as I recall, when figuring the final contributions of either. 0.08 indicates 27 W/m^2 of surface albedo where there isn’t cloud cover. However, with clear skies being 0.38 of the total, we get a surface contribution of only 10 W/m^2 out of 102 W/m^2 using 0.30 albedo and TIM’s 340.2 W/m^2 TSI. That leaves 92 W/m^2 for clouds/atmosphere, and I believe the valid value for the atmosphere is around 10 W/m^2 or a bit less. That leaves 82 W/m^2 of albedo for clouds.
Using 37.5 W/m^2 for blocking and an albedo contribution of 82 W/m^2, we have a net effect of 82 – 38 = 44 W/m^2. How do you get a positive feedback contribution out of that? With that much net effect, if there were any positive feedback present whatsoever in the cloud cover fraction, this would be unstable and it would be driven as close to the clear-sky rail as it could possibly go and stay there. We’d have a new balance point to achieve, one with an albedo of 0.08 and a required mean T increase of around 7 deg C.
Nice, even though Mars has much more CO2 it has little “greenhouse” effect. Silly scientists thinking CO2 was an issue.
On Venus it is different again. The window is open. CO2 is the only thing substantially blocking, and even with pressure broadening there is darned little of that window blocked!! So, what does all the RTE really do for us on Venus when there is so little else to capture the IR from the ground, which, by the way, due to its much higher temp, is NOT near the 15um primary CO2 band?? Oh, and there is very little in the way of other GHGs to give that feedback effect. It is what it is.
The fun thing about Venus is that at 480 C the blackbody peak is at about 4 um instead of 15, giving a spectral radiance of about 61 W/m2/sr/um at 15 um and 990 W/m2/sr/um at 4 um. Since CO2 isn’t as efficient at 4 um, how does that actually affect the calculations?
http://www.spectralcalc.com/blackbody_calculator/blackbody.php
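Those two figures appear to be Planck spectral radiances (per steradian, per micron) for a ~480 C blackbody; a quick check, which of course says nothing about what CO2 then does with that radiation:

```python
# Planck spectral radiance of a ~480 C blackbody at 15 um and 4 um, plus the Wien peak.
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def B_lambda(wavelength_um, T):
    """Spectral radiance in W m^-2 sr^-1 um^-1."""
    lam = wavelength_um * 1e-6
    B = 2 * h * c**2 / lam**5 / math.expm1(h * c / (lam * k * T))
    return B * 1e-6                      # per metre of wavelength -> per micron

T_venus = 480.0 + 273.15
print(2898.0 / T_venus)                  # Wien peak, ~3.85 um
print(B_lambda(15.0, T_venus))           # ~61  W m^-2 sr^-1 um^-1
print(B_lambda(4.0,  T_venus))           # ~990 W m^-2 sr^-1 um^-1
```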
There are no other particles to speak of for the CO2 to thermalize. When there is a collision it is with another CO2 particle. Wonder what happens to the chances of emission??
Here is an interesting calculation of the probable temp of Venus based simply on the dry adiabatic lapse rate and the atmospheric mass.
http://motls.blogspot.com/2010/05/hyperventilating-on-venus.html
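A rough sketch of the lapse-rate piece of that argument, the dry adiabatic lapse rate g/cp for a CO2 atmosphere; the cp value is an assumed mid-range figure (it varies considerably with temperature):

```python
# Dry adiabatic lapse rate g/cp for a CO2 atmosphere (order-of-magnitude sketch).
g_venus = 8.87          # m/s^2, Venus surface gravity
cp_CO2  = 850.0         # J/(kg K), assumed rough value for CO2

print(g_venus / cp_CO2 * 1000.0)   # ~10 K per km
```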
The other fun stuff is that there isn’t even the beginnings of enough SW making it to the ground to heat it. So, exactly how does CO2 in 3 tiny bands heat the surface of Venus to the point it is outputting close to 6000w/m2???
kuhnkat,
you forgot the clouds. 100% coverage, optically thick.
An interesting calculation might be in order. One must remember that gases like co2 will radiate their spectrum at a characteristic temperature (their temperature). It’s just that if their temperature is less than that of a background – the surface – then it will turn out to be a net absorption rather than a net emission. the calculation is to find just how much radiated power is coming out of the high pressure co2 “ocean” versus how much solar power is being absorbed by the high pressure co2 ocean and ground surface.
Well, no, I didn’t FORGET the clouds. The clouds are what block or reflect most of the SW AND LW, PREVENTING it from heating the surface or even the lower atmosphere. Since the reflection is about 75%, what penetrates the lower atmosphere is less than what reaches the surface of the Earth, and the energy available to heat the atmosphere is less.
You aren’t going to suggest that there is large-scale downward convection of heat from areas colder than the surface??? So about 40 kilometers above the surface little heat is going down and little is going up. Now this is sounding like the fabled blanket that we are told about for CO2 on Earth, but it isn’t caused primarily by CO2.
Here is some more from Lubos Motl:
http://motls.blogspot.com/2010/05/venus-chris-colose-vs-steve-goddard.html
You may be interested in the emissivity of CO2 at the pressures and temperatures seen in Venus’ lower atmosphere.
” the calculation is to find just how much radiated power is coming out of the high pressure co2 “ocean” versus how much solar power is being absorbed by the high pressure co2 ocean and ground surface.”
There is very little solar power making it past the clouds you so kindly REMINDED me of.
What do you think of that Venusian adiabatic lapse rate under those clouds?? Looks to be about 10 °C per kilometer to me. How does all that CO2 and other stuff manage to lose heat so fast if it is such a great insulator/blanket/slower of cooling???
Let me make one final suggestion. Try to ignore how Venus managed to get into the condition it is in. Anything based on hypotheses about what created the conditions is probably not germane to the current conditions. Try to concentrate on the actual observations of what is currently happening. Once that is understood we MAY be able to say something about how it got to this situation.
kuhnkat,
I don’t like Venus as an example. It’s got an “ocean” of CO2 with pressures out the gazoo. It’s at about 3/4 of an AU, and I doubt that the albedo is really only 0.75. It also has a day that lasts closer to a year than to an Earth day, which would be disastrous were it not for the cloud cover.
I doubt there’s any convection going on. I also suspect that the amount the CO2 radiates is in excess of the incoming solar. That sorta makes the whole mess a cloud function.
I agree that Venus is a poor example of anything but Venus and basic physics.
So clouds are now positive feedback?
And you have shown that where?
Harold: IMHO, the simple 3-page article I linked debunks just about everything Dr. Lacis said.
Unfortunately precious little of that document is correct!
Here’s a comparison of a portion of the CO2 spectrum on both Mars and Earth at surface conditions, which bears out Dr. Lacis’s post:
http://i302.photobucket.com/albums/nn107/Sprintstar400/Mars-Earth.gif
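One way to see why those two surface-level spectra look so different is the pressure scaling of the Lorentz line width. A minimal sketch, assuming a typical air-broadened CO2 half-width of ~0.07 cm^-1/atm and round surface conditions (Mars ~6 mbar and 210 K, Earth ~1013 mbar and 288 K):

```python
# Pressure-broadened (Lorentz) half-width scaling for a typical 15-um-band CO2 line.
# gamma = gamma_ref * (P / P_ref) * (T_ref / T)**n  -- reference values are assumed typical numbers.
GAMMA_REF = 0.07    # cm^-1 per atm, typical air-broadened CO2 half-width at 296 K
T_REF = 296.0       # K
N_EXP = 0.7         # temperature-dependence exponent, typical value

def lorentz_halfwidth(p_atm, T):
    return GAMMA_REF * p_atm * (T_REF / T) ** N_EXP

earth = lorentz_halfwidth(1.0, 288.0)        # ~0.07 cm^-1
mars = lorentz_halfwidth(6e-3, 210.0)        # ~5e-4 cm^-1

print(f"Earth surface : {earth:.4f} cm^-1")
print(f"Mars surface  : {mars:.5f} cm^-1  (~{earth / mars:.0f}x narrower)")
```

At Martian pressures the Doppler width (roughly 5e-4 cm^-1 for the 15 µm band at 210 K) is comparable, so the lines are essentially Doppler-limited and the gaps between them stay relatively transparent, which is the point the spectra illustrate.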
Er, Phil: you gotta provide more than a couple of spectra to prove your point. WTF?
The PROBLEM with the “alarmist theory du jour,” à la Dr. Lacis here, is that, although the little radiation cartoons and equations look “scientific,” there is ABSOLUTELY NO EMPIRICAL EVIDENCE THAT THEY ARE CORRECT (think about that if you are a scientist, please, since even Einstein didn’t get a pass until a certain eclipse occurred! No evidence, no science; science is that simple). The paper that I linked to actually has some empirical evidence (from NASA, no less; see the references). If YOU can show that the article I linked is flawed, then I might bow down to you and submit to your arm-waving and admit that I’m wrong. Otherwise, you wasted some bandwidth, as far as I can see. And this goes for the good Dr. Lacis: it is up to him and his friends to produce some ACTUAL DATA to show that all the little radiation cartoons that he and his ilk keep producing as “facts” have any real basis in FACT. Post haste! The world of science is waiting for some real science in “climate science!” And since temperatures are cooling while CO2 levels are increasing exponentially, I think the alarmists have a real problem with the public. LOL. If the Republicans in the House have any sense, they will severely cut the budget for climate science research.
Here’s the first para. of your link.
Climate science’s method of deriving a surface temperature from incoming radiant energy (whose intensity is measured in watts per square meter) is based on the Stefan-Boltzmann formula [1], which in turn refers to a theoretical surface known as a blackbody – something that absorbs and emits all of the radiance it’s exposed to. Since by definition a blackbody cannot emit less than 100% of what it absorbs, this fictional entity has no option of drawing heat into itself, for that would