Pierrehumbert on infrared radiation and planetary temperatures

Raymond Pierrehumbert has written an excellent overview of infrared radiation and planetary temperature. The article was published in Physics Today and is unfortunately behind a paywall. Fortunately, Climate Clash has posted the article in full. I suspect that this article is a digest of the corresponding chapter in his new book, Principles of Planetary Climate, which is hot off the press (published December 2010). On a previous thread, Chris Colose highly recommended Pierrehumbert’s treatment of infrared radiation and planetary temperature.

I think Pierrehumbert’s article is very good, and it summarizes many of the topics that we have discussed at Climate Etc. on previous greenhouse threads:

So, if you have followed the Climate Etc. threads, the numerous threads on this topic at Scienceofdoom, and read Pierrehumbert’s article, is anyone still unconvinced about the Tyndall gas effect and its role in maintaining planetary temperatures? I’ve read Slaying the Sky Dragon and originally intended a rebuttal, but it would be too overwhelming to attempt and probably pointless. Has anyone else read it?

I’m asking these questions because I am wondering whether any progress in understanding an issue like this can be made in the blogosphere. Yes, I realize there is a whole host of issues about feedbacks, sensitivity, etc. But the basics of the greenhouse effect don’t seem to me to provide much intellectual fodder for a serious debate. I’ve lost track of the previous threads (one of them has over 1000 comments); can anyone provide a summary of where we are on the various unresolved discussions?

I’m really wondering whether we can get past exchanging hot air with each other on blogs, learn something, and collectively accomplish something. If you have learned something or changed your mind as a result of the discussion here, please let us know.

Moderation note: this is a technical thread and comments will be moderated for relevance.

693 responses to “Pierrehumbert on infrared radiation and planetary temperatures”

  1. Well, if you are wondering whether any progress in understanding of an issue like this can be made on the blogosphere, you can’t leave this thread as purely technical, can you?

    raypierre is the perfect example of what has gone wrong between mainstream climate scientists and the rest of the world, starting from the “blogosphere”. One can be the smartest kid this side of the Virgo Supercluster of galaxies, but if in one’s mindset questions are considered as instances of lèse majesté and examples for the general public are routinely simplified in the extreme, so as to make them pointless, well, one will only be able to contribute to the hot air: because the natural reaction of most listeners will be to consider whatever one says (even the good stuff) as just a lot of vaporous grandstanding.

    From this point of view, SoD’s work should be more than highly commended.

  2. The first response doesn’t bode well for Judith’s question.

    Judith, the sad reality of ‘discussion’ forums on the internet is that there are many who participate simply because they enjoy arguing. The topic is of secondary importance.

    • Michael – Are you in denial over getting past exchanging hot air with each other on blogs, learning something, and collectively accomplishing something?

  3. I am not sure I understand what this is all about. My understanding of what transpired on the threads of Radiative Forcing, No-Feedback Sensitivity and Climate Feedbacks is that no one understands the detailed physics of HOW CO2 causes global temperatures to rise. The physics presented by the IPCC is wrong. Until we know what is right, there is little to discuss. I don’t see that Raymond Pierrehumbert has added anything to our knowledge.

    • Jim, I don’t think the physics of the IPCC report are wrong, just the math. The 1.5 to 4.5 sensitivity range is wrong because you cannot just average guesses and get a correct answer. That is scientific, good ol’ boy, cypherin’

      • Dallas writes “Jim, I don’t think the physics of the IPCC report are wrong,”

        The object of my remark is that this is precisely what I meant. The physics presented by the IPCC of HOW CO2 causes global temperatures to rise is just plain wrong.

        We have

        1. Gerlich & Tscheuschner
        2. Tomas Milancovic
        3. A complete lack of the scientific method. There is no observed, measured data. Radiative forcing has not been, and probably cannot be, measured. Feedbacks have not been, and probably cannot be, measured. No-feedback sensitivity has not been, and almost certainly can never be, measured.

        So, IMHO, the IPCC has got the physics wrong, and any numbers quoted about how much global temperatures might rise as a result of doubling CO2 are purely hypothetical and completely meaningless.

      • I realise that my first sentence does not make sense. Instead of “The object of my remark is that this is precisely what I meant.”, I ought to have said “What I wrote originally is precisely what I meant”. Sorry about that.

  4. I have come to understand that the simple model of a doubling of CO2, radiative equilibrium at TOA, and lapse rate is pretty much meaningless as a gauge of climate sensitivity and the effect a doubling would have on surface temperature.

  5. Dr. Curry,

    I’ve read his article (he referred me to it over at Dot Earth) but was not impressed. First, the paper is mistitled. It should be called “Infrared radiation and inferences about planetary temperature” because it does not compute based on any dataset of planetary temperature and so there is no way to check computations against observation.

    Ray referred me to the article because I mentioned Richard Feynman and a video available at http://www.youtube.com/watch?v=b240PGCMwV0

    I love Feynman because he is such a clear communicator. He uses “guess” where many scientists would say “hypothesis” – it’s the same thing. Ray’s paper demonstrates that AGW is a reasonable hypothesis, no more. In fact, his paper is very similar to the work of many of the scientists mentioned in Spencer Weart’s “The Discovery of Global Warming,” a history of scientists making estimates, mistakes, corrections and more mistakes.

    In order to establish CAGW as a viable theory, one would have to use temperature measurements – preferably satellite temps or ocean heat content (as those two are the least subject to mischief) – not spectral inferences. Spectral inferences (which is what Ray uses) lead us to conclude that CO2 should likely lead to warming, but the climate system is very complex and there is a great distance between what might be and what is.

    Watch the Feynman video again. If the hypothesis is that CO2 causes a change in surface temperature, tropospheric temperature or ocean heat content, the next step is to compute the consequences of the hypothesis using one of these datasets; then you compare the computation to nature. Ray’s paper did not even attempt to take these steps.

    The key statement of the Feynman video is: “If it (the guess) disagrees with experiment (or observation), it is wrong. In that simple statement is the key to science. It doesn’t make a difference how beautiful your guess is. It doesn’t matter how smart you are or what his name is who made the guess – if it disagrees with experiment, it’s wrong. That’s all there is to it.”

    According to Accuweather.com, currently global temps in January are below the running mean. Contemplate that for a minute. After decades of rising atmospheric CO2 and decades of rising global temperatures, we are now below the running mean. If CO2 is the primary driver of dangerous global warming, why is it that after all these decades we don’t have any global warming?

    • ” the paper is mistitled. It should be called “Infrared radiation and inferences about planetary temperature” because it does not compute based on any dataset of planetary temperature and so there is no way to check computations against observation.”

      Ron – the article itself does not cite temperature data, but temperatures (from satellite, radiosonde, and ground-based measurements) are an important input to the transfer equations, as given by the statement in the article – “the change in the flux distribution across a slab is ΔIν = eν[−Iν + B(ν,T)]”. One can’t use these equations in the absence of data for T.

      The correspondence between computed and observed radiances and also temperatures is an important element in the confirmation of the basic theories. Indeed, this aspect is not truly controversial in a theoretical sense. Rather, some controversy remains as to the parametrizations utilized to avoid the computationally impractical use of line-by-line calculations in global models, and their substitution by band-averaging procedures instead. The band-averaging appears to yield results of high accuracy, but less than that achievable by LBL methods. Despite this compromise, however, the overall validity of the approach is now well confirmed.
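      To make the quoted slab relation concrete, here is a minimal numerical sketch that marches ΔIν = eν[−Iν + B(ν,T)] upward through a stack of layers. The frequency, temperature profile, and per-slab emissivity below are illustrative assumptions, not values from the article; the point is only that the radiance reaching the top ends up characteristic of the colder, higher layers rather than of the surface.

      ```python
      import numpy as np

      h = 6.626e-34   # Planck constant, J s
      c = 2.998e8     # speed of light, m/s
      k = 1.381e-23   # Boltzmann constant, J/K

      def planck(nu, T):
          """Planck radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
          return 2.0 * h * nu**3 / c**2 / (np.exp(h * nu / (k * T)) - 1.0)

      nu = 2.0e13                               # ~667 cm^-1, near the CO2 band (assumed)
      T_layers = np.linspace(288.0, 220.0, 50)  # assumed profile, surface to upper troposphere
      e_slab = 0.05                             # assumed emissivity of each slab

      I = planck(nu, T_layers[0])               # upwelling radiance leaving the surface
      for T in T_layers:
          I += e_slab * (-I + planck(nu, T))    # the quoted slab relation, applied per layer

      print(f"surface B: {planck(nu, T_layers[0]):.3e}, TOA I: {I:.3e} (W m^-2 sr^-1 Hz^-1)")
      ```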

      • Fred,
        It is entirely possible I am missing some real scientific advance here, but your comment has not yet convinced me. The fact Ray’s conclusions are considered good theory is clear. But that is exactly my point. Ray has not provided, so far as I can see, any way to confirm or falsify the hypothesis through observation of actual temperatures. If you think he has, please provide me with computations of the consequences of the theory (projections) measured in either satellite data or ocean heat content.

        As a skeptic, it is my position that no one knows how much of 20th century warming was natural and how much was anthropogenic. If Ray has determined a way to make that call, it would be a real advance. I don’t see it.

      • Ron – I probably wasn’t completely clear in my comment. What has been confirmed by observational data are the amounts of IR heat energy descending from the atmosphere to the ground, or ascending into space from the top of the atmosphere – in each case in the wavelengths relevant to CO2 and water vapor absorption and emission. This validates the radiative transfer equations, their incorporation into computationally practical modalities, and the quantities derived from them. It does not, of course, answer questions as to how other mechanisms affect heat flux (solar changes, volcanic eruptions, aerosols, etc.), nor does it address the quantitation of feedbacks. My point is that atmospheric CO2 and water behave observationally as predicted, and so it would be incorrect to assert that this particular element of climatology lacks confirmation. The other factors have been, and will be, topics of other threads here and elsewhere. In echoing Judy’s point, I believe it is these other elements of climate where the uncertainties are more in need of resolution.

      • Fred,
        Thank you. Then you are confirming the situation is as I thought. The paper is mistitled. It should be “Infrared radiation and inferences about (or forcing on) planetary temperature.” While it is nice to know certain components of climate physics have been confirmed by observation, it is completely wrong to say “We know the basic physics.” There are far too many complicating factors to make such a bold and unwarranted claim, because then we end up discussing Trenberth’s travesty of missing heat.

    • The thing about a running mean of a noisy quantity is that the quantity stays on each side of it about half the time.
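      A quick synthetic check of that claim – a sketch using made-up stationary noise, not actual temperature data:

      ```python
      import numpy as np

      rng = np.random.default_rng(42)
      x = rng.normal(0.0, 0.1, 1200)                     # invented monthly anomalies (stationary noise)
      w = 120                                            # 10-year running-mean window (assumed)
      rm = np.convolve(x, np.ones(w) / w, mode="valid")  # trailing running mean
      frac_below = np.mean(x[w - 1:] < rm)               # align each month with its own mean
      print(f"fraction of months below the running mean: {frac_below:.2f}")  # comes out near 0.5
      ```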

  6. Let me state this another way. If your hypothesis (guess) does not allow you to compute the consequences of the guess in a manner that can be compared to observation, it is not science. At least it is not a developed science. Ray’s paper does more to point out the shortcomings of climate science than to provide any new insight.

  7. Judy – I believe this summary by Pierrehumbert only partially overlaps his coverage of radiative transfer in his book (due out this week), and radiative transfer is itself only a fractional component of the entire book. I base this on a near-final draft of the book that I already have. The summary captures the essence of radiative transfer in abbreviated form, but also adds empirical data (e.g., from the AIRS and ground-based spectroscopy measurements), and addresses common fallacies surrounding radiative transfer, such as the saturation argument. These last items are not in the draft, and may not be in the book unless recently added. The book itself addresses the quantitative aspects of radiative transfer, the Schwarzschild equations, and their adaptation to a computationally practical methodology in some detail.

  8. The question still is how much, not how. Referencing Arrhenius (1.6 C) and Manabe (2.0 C), sensitivity should be in a range of roughly 1.2 to 2.6 C.
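    For readers who want the arithmetic behind figures like these, here is the standard back-of-envelope doubling calculation. The logarithmic forcing fit (Myhre et al. 1998) and the ~3.2 W m^-2 K^-1 no-feedback response are common textbook values supplied here as assumptions, not numbers taken from the comment above.

    ```python
    import math

    # CO2 forcing from the standard logarithmic fit: dF = 5.35 * ln(C/C0)
    dF = 5.35 * math.log(2.0)   # forcing for doubled CO2, ~3.7 W/m^2

    lam0 = 3.2                  # no-feedback (Planck) response, W/m^2 per K (assumed)
    print(f"2xCO2 forcing: {dF:.2f} W/m^2")
    print(f"no-feedback warming: {dF / lam0:.2f} K")  # ~1.2 K, near the low end quoted
    ```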

  9. The article is a good framework in which to pose the AGW question.

    IMHO ICBST (it can be shown that) AGW is true without the need for another theory or hypothesis; just continue to collect data.

  10. I’ve read Slaying the Sky Dragon and originally intended a rebuttal, but it would be too overwhelming to attempt this and probably pointless.

    Just to be certain, can you give a bigger hint as to which way this would have gone?

  11. cagw_skeptic99

    Between SoD and here, I think I understand the basic physics much better than before. If what you want to derive as a basic agreement is this: “More CO2 causes long wave radiation to be deflected as it leaves the earth’s surface and thereby causes the air and underlying surfaces to be warmer than would be the case with less CO2”, then you have accomplished something with me.

    If you want to go on and say that increasing CO2 this year and next means that planetary temperatures are warmed by some calculated number of degrees, then I haven’t seen that evidence. Speculation based on computer models and speculation about correlation between recently increasing temperatures and increasing CO2 do not constitute evidence.

    As a software developer with 40+ years of experience, some of that with complex computer modeling programs, and a lot of experience debugging and validating complex logic structures, I find the idea that you can reliably trust much of anything output from programs that are constantly being changed, have no apparent validation or configuration control, and have nothing resembling a quality assurance plan to be just daffy. I personally was involved in ‘tuning’ complex models and know very well that even the developer (maybe especially the developer) cannot be trusted to know which changes to the logic make what difference in the validity of the result. The statement by a PhD scientist that the program does A and B, plus a few dollars, will buy you a cup of coffee at Starbucks. Absent a formal methodology for testing an unchanging set of logic against static input and validating the predictions against measurements over substantial years, these models will never move out of the class of research tools and competing forecast techniques. By the time any one of them can be validated, none of the people involved will be young enough to care.

    Not to say that the experiment isn’t worth continuing with the expectation that models which come somewhere close to predicting actual measured climate behavior across complete cycles of the various known naturally varying cyclic influences (Ocean currents, changes in the sun, changes in the orbit and orientation, etc.) may someday produce valuable results.

    The constant drumbeat from CAGW true believers that we must act immediately to end CO2 emissions should be regarded as just so much urban noise, but since the perpetrators have high positions in Government and Educational Institutions, presumably the noise will continue until it dies out naturally or a replacement Government shows them the door.

  12. cagw_skeptic99

    For what it is worth, the concept that “Planetary Temperatures” can be predicted decades in advance by models and theories that cannot predict them month to month and year to year is also daffy. The complete lack of ability to forecast next summer and next winter means to me that the variables are simply not known or understood well enough to trust predictions for 2020 or beyond. The unknown unknowns clearly rule the theoretical environment of today, and it doesn’t appear that significant resources are being directed at discovering them. Most of the money goes into stronger and stronger drumbeats for the choir that sings at CO2 true believers’ religious events.

    • “For what it is worth, the concept that “Planetary Temperatures” can be predicted decades in advance by models and theories that cannot predict them month to month and year to year is also daffy.”

      That may be an exaggeration, but you’re correct that climate models have been shown empirically to perform better over multiple decades than over short intervals. It’s therefore not “daffy”, but a consequence of the observation that chaotic elements in climate tend to operate on shorter timescales than the more stable forces underlying long-term trends, and appear to even out over the timescales of particular interest (e.g., many decades to a few centuries), although how complete this averaging is remains controversial. This has been discussed in detail in several previous threads on models, but if you believe important facets were previously overlooked, this thread would be a good venue to discuss them, and I would be interested to read what you have to say.
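      A toy illustration of the windowing point (the trend and noise magnitudes here are invented for illustration, not climate data): short windows on a noisy trending series return trend estimates all over the map, while long windows recover the underlying slope.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(100.0)
      series = 0.02 * years + rng.normal(0.0, 0.15, 100)  # 0.02 C/yr trend plus invented noise

      for w in (10, 50):
          slopes = [np.polyfit(years[i:i + w], series[i:i + w], 1)[0]
                    for i in range(0, 100 - w, 5)]        # sliding windows of width w
          print(f"{w}-yr windows: trend estimates from {min(slopes):+.3f} to {max(slopes):+.3f} C/yr")
      ```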

      • Fred,

        I think the daffy part is assuming that averaging a chaotic system over a century is sufficient to claim you understand any underlying trend. As those of us who have developed modeling software for much simpler systems know, the period you are modeling must be many times longer than the period you are projecting, and then you call the results a guess, and make plans in case it is wrong. I’m not seeing that same respect for the difficulty in modeling real time systems from climate modelers. Dr. Curry’s statements about uncertainty highlight that difficulty.

        The uncertainties lie in the quality of the data used for input for calculations, the magnitude of perturbing factors, the correct application of those perturbations, and interpretation of the model run outputs. Everything else looks OK. I think.

      • cagw_skeptic99

        So except for all the important input factors, they have done input factors well.

        My guess is that another 20-40 years of experimental modeling might allow the modelers to predict out ten years or so with better than a 50/50 chance of making useful predictions.

      • As those of us who have developed modeling software for much simpler systems know, the period you are modeling must be many times longer than the period you are projecting, and then you call the results a guess, and make plans in case it is wrong.

        Wow, someone who thinks like me on that point. This is disappointingly rare on Judy’s blog.

      • cagw_skeptic99

        Fred Moolten “climate models have been shown empirically to perform better over multiple decades than over short intervals”.

        Clearly I didn’t speak clearly. I said exactly the opposite. Where is the evidence that climate models have been shown empirically to perform well at all over any interval of any length? What is your definition of “perform well”? Hansen’s model predicted the streets of New York awash in salt water. No model predicted the flattening of temperatures since 1998. No models predict much of anything of substance in any publication I have seen, except that maybe some of them predict the past with only minor errors.

      • “Hansen’s model predicted the streets of New York awash in salt water. …”

        Link?

      • According to the link below, the comment was made to Bob Reiss when he was doing research on a book called “The Coming Storm.” It was the West Side Highway (that runs along the Hudson River) that would be under water.

        The link is: http://dir.salon.com/books/int/2001/10/23/weather/index.html

        More discussion about it can also be found at other sites (climateobserver, WUWT) by doing a search for the key words (Hansen interview west highway under water).

      • How is that the result of a climate model?

        When did he say this would happen?

        “Within 20 or 30 years. And remember we had this conversation in 1988 or 1989.”

        Link to Hansen’s own writing where he states he thinks parts of Manhattan would be underwater by 2011.

      • Two years ago, Hansen told Barack Obama he had four years to save the world. See http://www.guardian.co.uk/environment/2009/jan/18/jim-hansen-obama

        It seems Hansen has given up on Obama. He recently said China was the best hope to save the world. See http://www.examiner.com/climate-change-in-national/top-nasa-scientist-says-china-is-best-hope-advocates-trade-war-with-u-s

        Hansen’s book “Storms of My Grandchildren: The Truth About the Coming Climate Catastrophe and Our Last Chance to Save Humanity” makes it clear he is talking about human extinction.

        Do you really doubt the comments made by Bob Reiss?

      • Do you really doubt the comments made by Bob Reiss?

        I sincerely doubt that Hansen would say something to Reiss, who is not a scientist and who could easily have misconstrued the conversation, that totally contradicts Hansen’s scientific writings. In them he has said that, historically, large ice sheets typically take thousands of years to melt, but that because of anthropogenic global warming they could melt on timescales measured in centuries, and that in history there have been episodes of rapid ice melt. There is simply nothing in the history of James Hansen’s scientific career to support the silly notion that he could be so stupid as to believe melting could be so aggressive it would inundate parts of Manhattan in 20 to 30 years.

        I think what is most likely is that he was trying to tell Reiss that if mitigation efforts failed to be implemented in 20 to 30 years, Manhattan would eventually flood, as that is perfectly consistent with what he always says, and I think Reiss just blew it.

        Do I think James Hansen believes ice sheet disintegration will go nonlinear? Yes, he clearly believes that. There is, as far as I know, no computer modeling of nonlinear melting in the literature. What Hansen has speculated is that nonlinear melting could lead to SLR of 5 meters by the end of this century.

      • That is obtuse, JCH.

      • No it’s not. A detective novelist interviewed Hansen. He thinks he heard Hansen say something that appears totally absent in Hansen’s vast peer-reviewed and non-peer-reviewed writings. What he thinks he heard sounds oddly similar to something Hansen has said many many many times.

        What most likely happened? Answer: the detective novelist misunderstood the conversation.

        His theoretical call to confirm he heard Hansen correctly:

        “Jim, do you still think if we don’t do something in the next 20 to 30 years, parts of Manhattan will go under saltwater?”

        Unaware of the man’s confusion, Hansen could easily confirm that question.

        Hansen’s writings consistently say that we have a period of time to act to avoid future negative consequences. The period of time is a range that depends upon what humans choose to do now. The negative consequences are mostly in the future. He has written some very aggressive things on SLR. He clearly believes melting will go nonlinear in this century, and his estimate is 5 meters by 2100, a significant majority of which will not be seen until after 2050.

        This reinforces my belief that he could not possibly have said Manhattan would be under water by 2011.

      • Yep -cognitive dissonance, JCH. Rewriting history is one of the symptoms.

      • JCH,

        you are attempting to integrate the science James Hansen has done with his activist pronouncements. It won’t work. Here is the quote in question:

        While doing research 12 or 13 years ago, I met Jim Hansen, the scientist who in 1988 predicted the greenhouse effect before Congress. I went over to the window with him and looked out on Broadway in New York City and said, “If what you’re saying about the greenhouse effect is true, is anything going to look different down there in 20 years?” He looked for a while and was quiet and didn’t say anything for a couple seconds. Then he said, “Well, there will be more traffic.” I, of course, didn’t think he heard the question right. Then he explained,
        “The West Side Highway [which runs along the Hudson River] will be under water. And there will be tape across the windows across the street because of high winds. And the same birds won’t be there. The trees in the median strip will change.” Then he said, “There will be more police cars.” Why? “Well, you know what happens to crime when the heat goes up.”

        He obviously was scaremongering, as is his habit with the media, a la Coal Trains of Death…

      • (someone claiming to be quoting Hansen) “The West Side Highway [which runs along the Hudson River] will be under water. And there will be tape across the windows across the street because of high winds. And the same birds won’t be there. The trees in the median strip will change.” Then he said, “There will be more police cars.” Why? “Well, you know what happens to crime when the heat goes up.”

        Nicely illustrating the point that people can put any words they like in other people’s mouths when there’s no evidence either way to support or refute them. Amazingly enough there are people who take every such alleged quote as gospel truth.

        When a tree falls and no one hears it, does it make a sound? And when you quote people and they’re not there to refute you, have you added anything to what we know?

        Garbage.

      • Here is the abstract of a Hansen paper from 1981…

        Hansen et al. 1981
        Hansen, J., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, 1981: Climate impact of increasing atmospheric carbon dioxide. Science, 213, 957-966, doi:10.1126/science.213.4511.957.

        The global temperature rose 0.2°C between the middle 1960s and 1980, yielding a warming of 0.4°C in the past century. This temperature increase is consistent with the calculated effect due to measured increases of atmospheric carbon dioxide. Variations of volcanic aerosols and possibly solar luminosity appear to be primary causes of observed fluctuations about the mean trend of increasing temperature. It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980s. Potential effects on climate in the 21st century include the creation of drought-prone regions in North America and central Asia as part of a shifting of climatic zones, erosion of the West Antarctic ice sheet with a consequent worldwide rise in sea level, and opening of the fabled Northwest Passage.

        I would say some of that has come true.

      • This exchange is recapitulating earlier exchanges regarding climate models, and so I recommend that readers review those several relevant threads to avoid repetition here. Many models have performed “hindcasts” showing that only GHG forcing added to other variables could reproduce observed trends. In addition, however, Hansen’s published 1980s models were predictive in nature, and the predictions for the CO2 emissions scenarios that actually transpired were reasonably good, but too high. The main reason was that his climate sensitivity estimates were higher than those now known to be more probable, and if the current inputs responsible for the lower estimates had been used, his results would have been quite accurate. (Note that since models aren’t “retuned” to make them reproduce observed trends – that’s not legitimate – this is a hypothetical case, but still one that is informative about the potential predictive value of models for long term trends.)

        The limitations as well as the utility of GCMs for long term prediction have been discussed extensively in the previous threads, and if the topic of this thread is radiative transfer specifically and/or its relationship to the value of this blog in gaining new understanding, I wonder whether it wouldn’t be more useful to discuss model performance in the earlier threads devoted to that topic unless important new information about models can be offered here.

      • Fred, I’m not sure I agree. I was of the distinct impression that his model’s predictions were off by some degree.

        Also, if he modifies the climate sensitivity to be more accurate in the predictive areas, surely that will result in him being ‘off’ in the hindcasting.

      • The hindcasting models were not the ones Hansen used, but rather improved versions with more accurate input values. His trend estimates were, as you say, too high, but not extraordinarily off the mark. In fact, his predictions for the case of a CO2 emission rate scenario that was lower than the actual rates yielded a trend that closely matched the observed temperature trend.

        For clarity, GCMs don’t use climate sensitivity as an input. Rather, in the case of Hansen’s earlier models, the input data he used yielded a climate sensitivity to CO2 as a model output that current data inputs show to be too high. If more current inputs had been used, the early Hansen model trend estimates would have been fairly accurate.

      • Can you define ‘not extraordinarily off the mark’?

        “GCMs don’t use climate sensitivity as an input”
        Understood – I was unclear – I meant that if he inputted modified data/criteria to ‘tighten’ the predictions, it would ‘loosen’ the hindcasting.

        “Rather, in the case of Hansen’s earlier models, the input data he used yielded a climate sensitivity to CO2 as a model output that current data inputs show to be too high. If more current inputs had been used, the early Hansen model trend estimates would have been fairly accurate.”

        This is quite confusing. Are you trying to say that the data he used to program the models were ‘off’, so that if he had used ‘better’ data he would have been right? Or are you saying that the parameters he used were wrong/inaccurate, in which case you’ve argued my own point for me.

        Can you be clearer?

      • I’m still doing a poor job explaining. Hansen’s early model utilized model-based estimates of CO2 forcing and feedback that are higher than inputs used today. The earlier inputs lead to climate sensitivity values for doubled CO2 of about 4.2 C, whereas mid-range estimates today are about 3 C. With current inputs, his model would have very closely matched observed trends. With the higher inputs, his trend estimate was too high, but not by an exceedingly large amount. Indeed, his scenario B based on CO2 emissions that are not very different from those that occurred yields a reasonably good match. Only CO2 emissions much higher than those observed create a large disparity – Hansen Models

        Current models are still far short of optimal, but perform better than is sometimes claimed.
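        The rescaling argument can be made explicit with a rough sketch. It assumes the projected trend scales roughly linearly with equilibrium sensitivity, and the scenario B trend value is an illustrative placeholder rather than a figure from Hansen’s paper:

        ```python
        # Rough linear-scaling sketch (assumed proportionality; illustrative trend value)
        hansen_sensitivity = 4.2   # C per CO2 doubling, late-1980s model inputs
        current_estimate = 3.0     # C per doubling, mid-range estimate cited above
        scenario_b_trend = 0.26    # C/decade, illustrative placeholder

        rescaled = scenario_b_trend * current_estimate / hansen_sensitivity
        print(f"rescaled trend: {rescaled:.2f} C/decade")
        ```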

      • Steven Mosher

        Monkey,

        Let me see if I can help. In the version of the model Hansen used, there were at least two key parameters/processes that have since been improved: the ocean and the sulfur cycle.

        There really is no point whatsoever in pointing back to Hansen’s old projections, except to point out that GIVEN the data at the time and the MODEL at the time, it did a fair job of predicting the future. As any engineer who builds physics models will tell you, it’s a never-ending process. Perfection ain’t in the cards.

        Plus Hansen’s model is so OT, I don’t know why you even bring it up

      • Fred –
        Many models have peformed “hindcasts” showing that only GHG forcing added to other variables could reproduce observed trends.

    If you remove the word “only” from that sentence then it would make sense to me. But a model that will reproduce observed trends “only” when GHG forcing is added to other variables is a model that was designed to produce those specific results.

      • That’s not correct, Jim. The models were “tuned” to the existing climate, but what they projected when the modelers then added increasing CO2 to the input was outside the control of the modelers. It came out the way it came out, and if it had come out differently, there would be nothing the modelers could have done about it. The only recourse would be to develop completely new models – this process is an ongoing one, but it takes much time and money, and is not something that can be done simply to make a result come out better.

        (I would add that climate models do a much poorer job with certain other climate variables, such as ENSO. If that could be fixed simply by retuning, it would have happened long ago.)

      • Fred –
        Can’t agree – As you say, they were tuned to the existing climate – with specific assumptions built into the “tuning”. How many other possible factors were not built into the models because they were “assumed” to be non-contributing (cosmic rays) or insignificant (solar variation) – or non-existent? Or simply because the mechanism was not sufficiently defined (clouds perhaps)? ONE way to tune the models is to assume that GHGs are THE main factor (not that I’m contesting that assumption at this time). There are others that I believe have not sufficiently been tested.

        I was a spacecraft science instrument engineer in the mid-60’s. The Asian Brown-cloud was barely known back then, but it showed clearly in satellite photos. And clouds were a known unknown. They still are. Do you believe that either of those are accurately represented by today’s models? Or that the effects are understood?

      • Jim – model parametrization is a challenge, but the tuning of the models to the existing climate did not dictate how they would respond to CO2. I don’t see any fair way to negate the conclusion that the models performed well in demonstrating a critical role for CO2 in the rising temperature trend.

        Regarding brown clouds, the combined warming effects (from the black carbon components) and cooling effects (from organic carbon and other aerosols) are reasonably well understood and represented in current models, thanks to the pioneering studies of Ramanathan et al involving atmospheric measurements at a multiplicity of different altitudes. You are right that they were poorly represented in earlier models.

      • Fred,
        Statements about what is predicted by the models and what is a result of conscious or unconscious tuning are notoriously difficult.

        I can easily believe that it would have been much more difficult for the modelers to explain the recent warming without CO2, and I have no difficulty in believing that CO2 has contributed significantly. Still, I do not give very much weight to claims of impossibility. It is quite typical for a modeling process to add new features until agreement is reached with empirical observations. At that point the model cannot be brought into agreement without the last addition – or without some other addition or modification that the modeler would have been ready to make, if needed for reaching an acceptable agreement.

        The more extensively the modelers try all possible alternatives that they can invent, the more weight one can give to statements about the necessity of a particular factor, but complicated models can never provide straightforward proofs about necessity. In the case of climate models I have little trust that everything possible has been tried. Furthermore, the models are badly constrained by the requirement that they must be stable enough to give any results at all. This limits quite severely the choices available to the modelers. In particular, it may make it presently impossible for a model to describe some dynamics that are present in the real earth system.

        Models are very important tools, but complicated models of a dynamic system may fail badly in describing some features of the real world, and the same limitation may apply to all models, making their agreement with each other a less useful proof of their validity.

        In the case of the earth system (atmosphere + oceans + others) there is considerable evidence of oscillations or mode switching which cannot be modeled successfully. This is evidence that the concerns I presented above are not empty theorizing but probably a real, serious limitation of the modeling approach.

      • FM, see what happens when an actual modeler gets his hands on the DIY kluges favored by The Team?

      • Fred –
        I don’t see any fair way to negate the conclusion that the models performed well in demonstrating a critical role for CO2 in the rising temperature trend.

        Actually, I haven’t seen much evidence that the models have performed all that well. For example –
        http://rankexploits.com/musings/2010/multi-model-mean-projection-rejects-gisstemp-start-dates-50–60-70–80/

        As for the model parameterization, Pekka Pirilä’s post covers most of that well, so I won’t pursue it right now. But things like this keep showing up –

        http://www.thehindu.com/todays-paper/tp-national/article1107174.ece

        If true, it could be a game-changer. Is it true? Who knows. But if it is and it’s ignored, we could have another 20 years of wandering in the wilderness. :-)

      • The performance of the models has been discussed previously, including Hansen’s early model that performed well in predicting temperature trends (see links elsewhere in thread) and would have performed perfectly if the input data in use today (which yield a climate sensitivity estimate of about 3 C) had been used instead of earlier data that lead to a climate sensitivity estimate of 4.2 C.

        As to a news report of a paper in an Indian journal not devoted to climate science (the paper itself was not presented), I have little to say except that the news item claimed that the conclusions about cosmic rays were based on “calculations”. If that is true, then the paper has little to offer. If the authors, however, went out and acquired significant new data, the results might be worth examining, although I suspect they would appear in something other than the journal they are in.

      • Actually, just subtract out the warming trends since the Ice Age and Little Ice Age, and the anomalous La Nina surge, and Hansen is left with bupkis to support his prediction.

        And the reason people continue, Fred, to bring up this outdated “old stuff” is called “validation”. The only models eligible to be tested on the last 20 yrs.’ data are the ones that were extant at the time. The hindcasting performance of subsequently tuned efforts is irrelevant.

      • Excerpt:

        According to the latest report by the IPCC, all human activity, including carbon dioxide emissions, contribute 1.6 watts/sq.m to global warming, while other factors such as solar irradiance contribute just 0.12 watts/sq.m.

        However, Dr. Rao’s paper calculates that the effect of cosmic rays contributes 1.1 watts/sq.m, taking the total contribution of non-human activity factors to 1.22 watts/sq.m.

        This means that increased carbon dioxide emissions in the atmosphere are not as significant as the IPCC claims. Of the total observed global warming of 0.75 degrees Celsius, only 0.42 degrees would be caused by increased carbon dioxide. The rest would be caused by the long term decrease in primary cosmic ray intensity and its effect on low level cloud cover.

        Oops! Of course, as Fred says below, it’s merely India(n) research, so can be safely disregarded until The Team approves it.

      • Perhaps I should add that I do think modeling is a useful and important tool in learning about the atmosphere and wider earth systems. I do believe that a lot has been learned from the models.

        The problem is that it is extremely difficult to estimate how reliable the models are. There may be reasons to think that certain model results are likely to be true, but there is no proof of that without empirical confirmation. When using models, we may know how to estimate the uncertainties from certain known sources, but we do not know how much other uncertainty remains. When comparisons between models are used to estimate the accuracy, we get a feeling for the errors from features where the models differ, but no knowledge of errors that are common to all models.

        If the properties of the real earth system are stable enough, in a sense important for climate issues and in a suitable way, then we are likely to be able to model it well; but if its dynamical properties go outside certain limits, then modeling becomes much more difficult, and it may also be that all models will fail in this important respect. My intuitive feeling is that there are important open issues of this type in judging the capabilities of climate models.

        The climate models have shown great skill in describing a large number of details of the climate, but they are at the same time incapable of explaining some other features. What their skill is when they are used outside the domain of empirically confirmed validity remains largely unknown.

        I should add that I have not used climate models myself, and have read only one introductory textbook (Washington & Parkinson) and several review articles about them. Thus my claims are based on knowledge of other, somewhat comparable modeling problems. I have noticed that many active climate modelers have expressed somewhat similar views, but I cannot speak to the mainstream views as expressed in the internal scientific communications of the climate modeling community.

      • “(I would add that climate models do a much poorer job with certain other climate variables, such as ENSO. If that could be fixed simply by retuning, it would have happened long ago.)”

        Now what are you exactly saying in your last sentence :).

        Seriously though, it’s a lot more than just ENSO. Even bias in temperatures is a problem – but that’s not for this thread.

        What is for this thread is the extent to which the understanding of infrared radiation helps materially in modeling temperatures at the surface, at the TOA and in between. And perhaps more important: how significant is the variation arising from infrared radiation in comparison with other sources when undertaking these tasks?

        At the TOA it is very significant – in fact this makes this boundary attractive to model for just this reason.

        Modeling at the surface is much more complex, so modeling at the top of the ocean might be a better bet even if it relies much less on this body of knowledge.

        But trying to model the whole atmosphere (apart from trying to predict weather) – would you really want to start there?

      • A paper on WUWT posits that a flat line is within statistical error in the instrumental temperature record. What does this do to ‘hindcasting’ and model tuning? What does it imply for the remains of the hockey stick chart? Really, it appears that with just a little more time, the global warming threat will melt like ice in Alabama in the Spring.

        http://wattsupwiththat.com/2011/01/20/surface-temperature-uncertainty-quantified/

      • There is a site that sums up aspects of bad science:
        http://www.catchpenny.org/patho.html
        It strikes me that much of what the climate consensus supports is well described at this site.

      • “shown to perform better over multiple decades than over shorter timescales” — by what? Hindcasting? Curve-fitting?

        Gimme a break.

      • Fred,
        As long as climate scientists are still reduced to stating only that something was not unexpected after it has happened, your assertion that AGW is better at predicting decades rather than the short term is a little bit embarrassing.

  13. Judy – You ask whether we who participate here have learned from the experience and whether we have changed our views. The two are not synonymous, of course, and it’s doubtful that many active participants have radically switched positions.

    As to learning, however, I can only speak for myself. I have learned much – most of it from linked scientific sources: not only the current article by Raypierre, but even more the sources on radiative transfer you have cited and those cited by others on this topic, as well as sources with data on water vapor and other feedbacks, and sources on climate modeling.

    I have learned less from individual comments, although it would be wrong to say I’ve learned nothing. Certainly, I’ve learned from errors I’ve made that were corrected by others.

    I’ve also learned a great deal from trying to explain climate science principles to other participants who expressed a different point of view. There is nothing like trying to teach someone something as a way of discovering that you don’t understand it completely yourself. In the process of doing that here, I’ve been able to refine and reinforce my understanding of important points, often by having to look up data as a means of doing that. Fortunately, most of the “refinement” has occurred before I committed myself to a posted comment, but there have been a few examples where the process was reversed. Still, it’s a learning experience that I have found valuable.

    • Fred,

      You have only strengthened your own views towards temperatures and modelling.
      Is all other data irrelevant?
      Climate science has ignored a great deal of physical change in favor of a mathematical formula or model.

      In doing so, they have missed the movement of ocean currents, salinity changes and many atmospheric events that are not recorded by temperatures.

  14. “…is anyone still unconvinced about the Tyndall gas effect and its role in maintaining planetary temperatures?”

    Radiative transfer accounts for only one aspect of the climate system, not the dynamics. Water vapor as a transport mechanism overwhelms other atmospheric gases and also alters transfer in a variety of ways.

    If the topic is climate change with the idea of forecasting, accurately accounting for atmospheric circulation is a must. Has anyone run across a refined circulation model that accurately accounts for the system?

    Water Vapor and the Dynamics of Climate Changes
    http://arxiv.org/pdf/0908.4410

    What controls the static stability of the subtropical and extratropical atmosphere?

    The lack of a theory for the subtropical and extratropical static stability runs through several of the open questions. Devising a theory that is general enough to be applicable to relatively dry and moist atmospheres remains as one of the central challenges in understanding the global circulation of the atmosphere and climate changes.

  15. The global mean temperature data for 2010 is out.

    It is 0.475 deg C.

    The previous maximum of 0.548 deg C for 1998, 13 years ago, has not been exceeded.

    http://bit.ly/f2Ujfn

    Global warming has stopped for 13 years, and we continue to count the number of years that the previous maximum has not been exceeded.

    The number now is 13!

    How many more years are required to declare that global warming has stopped?

    2? 5? 10?

  16. Monthly global mean temperature for December of last year has dropped by about 0.281 deg C from its maximum of 0.533 deg C for 1997. That is nearly half of 20th century warming.

    http://bit.ly/f2Ujfn

    There is no global warming.

    Is global cooling on its way?

  17. Judith,
    Some EXTRAORDINARY events are occurring as we speak.

    If you put the satellite map of cloud cover over the map of ocean surface temperatures, you get areas of vast evaporation in the Arctic regions where there is warm water in the oceans.
    Dense cold air going over warm water is generating an extraordinary amount of precipitation.
    This event has lowered the ocean levels that were previously rising, due to the water transfer (WUWT: ocean levels falling).

  18. I was on board with the Tyndall gas effect for quite some time – since I read about it in the science section of my elementary school library.
    It was called a greenhouse effect then, but Tyndall gas seems more relevant today.
    What none of the excellent writers – from SoD to Raymond Pierrehumbert – make is the connection between
    1- the workings described and the predictions of doom by so many climate scientists, and
    2- how this works in large places like an atmosphere.

    • Hunter,
      Climate science has no real clue as to how a round, rotating planet operates, with all the interactions in an enclosed biosphere.
      General physics does not simply apply to a system that is this vastly complex.
      Theories are easier to manipulate into order than physical evidence is to follow, so physical evidence is ignored.
      Hence the problem of not knowing how to interpret physical evidence. Physical evidence is not a temperature number nor an equation.

  19. Judith, no scientist that I know disputes the Tyndall gas effect. I guess I am somewhat surprised by your question. Is that what we have been disputing all this time? I don’t think so. What we are really trying to ascertain is whether anything that humankind is belching into the atmosphere justifies the hysterical claims by some that it is Man, and Man alone, that is directly responsible for climate Armageddon via man’s unconscionable oxidation of carbon.
    We all know that climate was different in the past than it is now, and it will be different in the future. It is only a question of attribution. We’ve had ice ages and warmer periods in our historical record, and does anyone doubt we will have them in the future? Is it true that ice ages occur in 100,000-year cycles, and is it also true that there have been previous periods that were warmer than the present despite CO2 levels being lower than they are presently? Despite what Mann et al. say, we’ve had the Medieval Warm Period as well as the Little Ice Age. We are now warming from the LIA, but very slowly and in small increments. Other questions:
    • Has there been any significant warming during the past 14 years despite CO2 rising?
    • Is CO2 the only polyatomic molecule responsible? We know other polyatomic molecules such as CH4 are ten times more potent than CO2. What is the potency of N2O, NO2, O3, Freon, and H2O? Is there synergy amongst these polyatomic molecules?
    • What is the role of clouds?
    • If positive feedback exists, would it not be evident by now? Is there a conflict between data and models?
    • Does anyone deny that the level of heat in the debate would be much lower if the cAGW group did not adopt the Hockey Stick with such vigor (a stick that has been completely discredited by many others, especially McIntyre)?
    • If it is true that we might warm 1 degree in the next century, because of natural cycles or even AGW, has any reputable NGO, scientific team or government agency discussed the potential societal benefits of slight warming? Has any money been directed towards these studies?
    • A great many people feel that the quality of the land temps, despite some agreement with other datasets, is less than reliable. The models depend on scrupulous data quality. Are we comfortable with Hansen’s control of at least one major dataset? What are we to make of the Jones/Wang UHI debacle? Quality data – I don’t think so.
    • Statistical quality: I am still amazed that climate scientists don’t make adequate use of professional statisticians. I am not sure most people understand the methods employed in the development of a new drug. Every pivotal Phase 3 trial conducted requires the establishment of an independent Statistical Advisory Board (SAB) when a trial is first contemplated. Before patient one is enrolled, the protocol has to be written and approved by the sponsors in concert with the FDA and the SAB. Obviously the trials are double blind and the only group that can break the code is the SAB. It is amazing that even with the rigor that goes into these trials, 50% of them cannot be repeated. It is shocking that we are contemplating reordering the world’s economy based on academic publications whose statistical rigor is so low that if they were a drug development project, the FDA would not allow them to begin even pre-clinical studies.
    The Tyndall effect is agreed. What’s next?

    • The evaporation doors are wide open, yet no one whispers the name Ice Age for fear of being classed a nutbar.
      The sun heats the planet, and it is the atmosphere that figures out what to do with this heat, with the help of changing oceans.

  20. I would like to take this quote from Raymond Pierrehumbert’s article as a point of departure. He says:

    “The atmosphere, if CO2 were removed from it, would cool enough that much of the water vapor would rain out. That precipitation, in turn, would cause further cooling and ultimately spiral Earth into a globally glaciated snowball state.10 It is only the presence of CO2 that keeps Earth’s atmosphere warm enough to contain much water vapor. Conversely, increasing CO2 would warm the atmosphere and ultimately result in greater water-vapor content—a now well-understood situation known as water-vapor feedback.”

    ___
    Now personally, I think this is an excellent summary of the technical details of the role of CO2 as a GH gas, expressed in a very non-technical way. And so, as you can guess, I’m a “warmist”, and happen to think that AGW is real. To what degree it is happening is another issue entirely, but it is happening. Now then, so very many of the so-called AGW skeptics immediately launch into very long diatribes about how CO2 is only a minor GH “trace gas” that is completely logarithmic in its GH behavior and so could be at 1800 ppm and not be much of a problem, etc. They essentially gut the central role of CO2 in climate regulation, even doubting the entire rock-weathering carbon cycle and its role in moderating CO2 levels through negative feedbacks. How could there be a chance for an honest discussion if one side simply ignores or refuses to accept what the other side sees as one of the greatest physics accomplishments of the 20th century (i.e. the integral role of CO2 in the regulation of the climate)? Where is there a chance for common ground when the two sides see the science so differently?

    • cagw_skeptic99

      R. Gates: “Now then, so very many of the so-called AGW skeptics immediately launch into very long diatribes about how CO2 is only a minor GH “trace gas” that is completely logarithmic in its GH behavior and so could be at 1800 ppm and not be much of a problem, etc”

      I personally have not seen serious skeptics make this claim, but it is a nice straw man for you to destroy. You could have stopped after “To what degree it is happening” and included most of the skeptics I know in your position. It would then be possible to deal with common ground, if that is what you want to do.

      • Steven Mosher

        Well, I saw a congressman ask this stupid question of Lindzen.

        The trace gas argument is usually made by commenters who heard it somewhere. It’s annoying to see it used. Really annoying.

      • Steven,

        You would think physical evidence would trump a theory.
        But it doesn’t, especially when that evidence has been ignored for years.
        The movement of the ocean currents has now opened the evaporation doors to full tilt in the Arctic: dense cold air over top of nice warm currents.

      • Steven Mosher

        Joe.

      • Mosher:

        I saw a congressman ask this stupid question of Lindzen. … It’s annoying to see it used. really annoying.

        I saw that too, Steve, and I’m in another jurisdiction. It is annoying and disappointing. That’s why places like Climate Etc. matter. Not just for the basics of Tyndall gases but for the (attempt at) integration with rational policymaking.

        The fact that some only want to argue doesn’t negate the help provided for those that are more open-minded.

      • Steven Mosher

        One approach might be to have an extended discussion solely on the Tyndall gases, where OT comments are snipped away and those who think there is an issue there can get the information they need.
        One issue is that you start to make progress explaining something and a side fight springs up about Hansen’s models or the iron sun or G&T.
        The other issue is that people are not committed to a process where saying “I’m wrong” is allowed. I see this on both sides. Nobody will allow anyone on either side to make a mistake.

      • The Hansen model deviation was caused by me, and I apologize. I saw this as false, and should have just let it go:

        Hansen’s model predicted the streets of New York awash in salt water.

        Please don’t u-tube me.

      • Is what he predicted false (obviously), or is it false to claim that he said it (he did), or is it false to point out that he in fact said it?
        Just wondering.

    • “…—a now well-understood situation known as water-vapor feedback.”

      Yes, well understood.

      By whom again?

    • R. Gates,
      The only problem with your claims about the idea of CO2 being the main regulator of the climate is the lack of evidence to support it.
      The pesky lag of CO2 as a response to increases in temperatures does not support your case at all. That we still do not know what causes glaciation does not lend itself to your underlying assumption that climate science fully understands climate.
      Calling the theory of how CO2 works in the atmosphere a discovery of physics is a claim that does not hold up at all.
      It is a claim about how the physics of CO2 applies itself- an engineering claim. It would be as if claiming that the invention of refinery oil cracking were a discovery of basic physics. It is applied physics, like the theory of AGW.
      Evolution suffered through this in the first decades of the last century when eugenicists claimed that their plans were a direct result of evolutionary science. Eugenics was actually a mix of science with a great deal of Malthusian thinking, then-current prejudice, and elitist groupthink.
      That combination will sound familiar to an observer of today’s great social/science mix.

    • He is mostly right in a world without clouds and large convective storms, which could act as positive or negative feedbacks. No paleodata exist to tell which, to my knowledge. This is what annoys me–taking the simple case of a moist atmosphere without clouds or storms and saying “see? just physics”.

    • How could there be a chance for an honest discussion if one side simply ignores or refuses to accept what the other side sees as one of the greatest physics accomplishments of the 20th century (i.e. the integral role of CO2 in the regulation of the climate)?

      It’s stretching credulity to imagine that it’s any sort of accomplishment whatsoever – but one has to be seriously deluded to believe that it’s one of the greatest of the 20th century.

  21. I think it is important to at least understand the stepping stones, if not how to get from one to the other in the kind of detail that Pierrehumbert does. I think of the path as something like this.
    (a) Greenhouse gases alone can absorb and emit IR due to their molecular properties; the other major atmospheric gases are essentially transparent to IR.
    (b) This property allows the surface to be 33 K warmer than it would be without such gases, or equivalently without an atmosphere, which is easy to calculate from radiative balance.
    (c) Since science can quantify the effects of greenhouse gases such as this, it can also predict the effect of changes of composition on temperature. Measurements such as spectra confirm that science quantifies greenhouse gas interactions with IR very well.
    (d) AGW is built on such predictions that can quantify how much warming is obtained by, for example, doubling CO2.
    (e) There are feedbacks, and this is where the research is. Water vapor and surface ice albedo certainly lead to positive feedbacks, while the cloud feedback is still uncertain in sign. This is not because clouds are radiatively more complicated than gases (in some ways they are simpler), but because cloud distributions may change with climate. So far nothing indicates a particularly strong cloud feedback in either direction.

    There is a lot of science between each step, but it helps to know where you are going before understanding the details. I don’t know if my steps are the best route from physics to AGW theory, but maybe it helps to have some route like this.
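
    A quick numerical check of step (b), as a minimal sketch with standard textbook values (solar constant S = 1366 W/m² and Bond albedo 0.3 are assumptions of the illustration):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1366.0        # solar constant at Earth's orbit, W m^-2
ALBEDO = 0.3      # planetary (Bond) albedo

absorbed = S * (1.0 - ALBEDO) / 4.0   # absorbed flux averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25    # effective emission temperature
print(f"T_eff = {T_eff:.0f} K")                   # ~255 K
print(f"greenhouse gap = {288.0 - T_eff:.0f} K")  # ~33 K against the observed 288 K
```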

    • albedo is now a positive feedback?
      Without looking it up, how many forms does water in the atmosphere exist in?
      “except for clouds”?
      For starters.
      Whenever I read some earnest believer making these deterministic outlines of how simple the climate system really is, I am amazed.

      • The answer is 2.
        H₂¹⁸O and H₂¹⁶O.
        :-)

      • Actually, if you count isotopes, I’d say 54.

        H2O, DHO, D2O, THO, T2O and TDO

        times O16, O17, and O18

        times

        ice, water and vapor

        That’s assuming the Grateful Dead’s publishing companies version doesn’t exist in the atmosphere.

      • If you don’t think ice albedo has a positive feedback, you need to look at the Ice Ages again. (Note positive feedback amplifies warming and cooling effects).

    • (b) This property allows the surface to be 33 K warmer than it would be without such gases, or equivalently without an atmosphere, which is easy to calculate from radiative balance.

      A spherical black body without an atmosphere suspended in space at the same distance from the Sun as the Earth, with unity surface emissivity has a radiative-equilibrium temperature of about 278 K. Not 255 K.
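
      For reference, both figures come out of the same Stefan–Boltzmann balance; only the assumed albedo differs (a worked form, with S = 1366 W/m² and σ = 5.67×10⁻⁸ W m⁻² K⁻⁴ assumed):

$$
T = \left(\frac{S(1-\alpha)}{4\sigma}\right)^{1/4}, \qquad
\alpha = 0:\; T \approx 278.6\ \mathrm{K}, \qquad
\alpha = 0.3:\; T \approx 255\ \mathrm{K}.
$$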

      • Dan – I’m not sure what point you are making, but in case it’s relevant, the Earth has close to unit emissivity in the infrared, but it is far from a black body in relation to solar radiation, about 30 percent of which is scattered or reflected by clouds, the surface, and Rayleigh scattering from atmospheric gases.

      • . . . or equivalently without an atmosphere . . .

        A spherical black body without an atmosphere . . .

      • Thanks – You’re right that without significant surface albedo, the temperature would be about 278 K. For Earth, surface albedo is determined particularly by snow and ice, but also by sand and other light colored, non-volatile materials that would exist without an atmosphere, and so the temperature would be higher than 255 K but lower than 278 K.

      • . . . about 30 percent of which is scattered or reflected from clouds, . . .

        And, if you allow the atmosphere to contain water vapor, aka clouds, you must account also for that contribution to the radiative-equilibrium-temperature energy budget. I’m referring to that other effect of water vapor beyond cloud albedo, if you get my drift. And, imo, a more nearly complete accounting would assign an emissivity less than unity to the Earth’s surface. The calculated temperature is then greater than 278 K but maybe lower than 288 K.

      • Dan – You’ve lost me completely. If, as Jim D was stating, the atmosphere had no water vapor or CO2, but all else, including albedo, was held constant (although he didn’t state that), the temperature would drop to about 255 K.

        Realistically, if there were no water, there would be no cloud or snow-ice albedo, but atmospheric and some surface albedo would remain, and so taking that into account would raise the temperature back up, although I don’t know to what level. I think the main point about the 33 deg C difference is that the greenhouse gases contribute that much warming and not that one could realistically envision a world in which they were absent but nothing else changed. It’s a way of artificially isolating one variable from a system in which the variables all interact, and is not meant to be something that could actually happen.

      • The greenhouse effect of cloud water is also included.

      • Why would you assign an emissivity of significantly less than unity to the Earth’s surface in infrared wavelengths?

      • Fred, Not to continue to be off-topic, but you have stated the problem that I have with the usual presentation of the situation; my em-ing in your quote:

        It’s a way of artificially isolating one variable from a system in which the variables all interact, and is not meant to be something that could actually happen.

        That’s my problem with this incomplete specification of the spherical-cow version. A material that is radiatively interacting with the incoming short-wave energy is introduced into the physical picture. But it is seldom, if ever, noted that that same material is also radiatively interactive with the out-going long-wave energy. The effect of the interaction with the short-wave is introduced into the calculation of some kind of grand global average temperature. The completely analogous effect, interaction with radiative energy transport, on the long-wave energy is incorrectly omitted from the calculations. Then it is simply stated that GHGs, if I may use that nomenclature, are responsible for the 33 K difference between the calculated number and the observed number. When all the time the material is right there in the model.

        If I made such a presentation to an engineering audience, I would get called on it in short order. I’ve made up my own version of an energy balance approach to the physical picture, in which I’ve conveniently omitted an important aspect; a very important aspect. Why not present the effects of the already-introduced material? Then, to focus on the issue of importance, humankind’s effects by way of introduction of CO2 into the atmosphere, show estimates of the effects of this material alone. Don’t lump it in with the effects of the material that was introduced into the model but incorrectly omitted from consistent treatment of phenomena and processes.

        Rules of thumb, spherical-cow versions, and other rough estimates can be effective. But only if they involve some relationships that can actually happen.

        I don’t know how the Earth’s Moon got into this discussion.

        I think I did not say, significantly less than unity.

      • oops, em-ing doesn’t show up in blockquote. I was aiming for artificially and is not meant to be something that could actually happen.

      • Dan – We may be talking past each other. The 33 C figure is correct within the context in which it’s presented. I infer that you are not challenging that, but rather what you see as the implications. However, the intended implications are not that such a scenario can happen but merely that greenhouse gases are what permit our climate to operate at tolerable levels even far from the equator. I’m not sure whether the unrealistic nature of the hypothetical scenario is your point, but if it is, could you illustrate with some specific examples why this might be a problem? Do you think the 33 C figure is being misrepresented as something it isn’t? This, to my mind, is not a spherical cow example, because I don’t see the 33 C as being used to derive erroneous conclusions. In fact, the greenhouse strength implicit in that figure is exactly what can be used for accurate evaluation of current changes in CO2 concentrations.

        Also, I don’t know whether this is relevant, but Pierrehumbert is correct that if CO2 were removed from the atmosphere, temperatures would drop precipitously to the point where most water vapor condensed out, and so the total temperature reduction would be severe because of the combined loss of the greenhouse effect of both substances. Something this total has never happened on Earth, but previous times with Earth in an “icehouse” state have come close.

      • OReily? During the Paleozoic Snowball Earth, CO2 was about 4,500 ppm. And it was low during the Permian, but temperatures surged BEFORE CO2 rose.

        Paleoclimate has no comfort for the GHG Believers.

      • I don’t know how the Earth’s Moon got into this discussion.

        You brought it up, it’s the “spherical black body without an atmosphere suspended in space at the same distance from the Sun as the Earth, with unity surface emissivity”.

      • The point was, it is easy to see that greenhouse gases contribute 33 degrees, all else being equal (including albedo). We know that despite the albedo being 0.3, the average surface temperature is not 255 K but nearer 288 K. Why is that? GHGs are the answer, precisely because all else is held equal to isolate them. Sorry if this is confusing.

      • You can have an O2/N2 atmosphere and the surface would still be 255 K. The discussion about no atmosphere does not really help with understanding the greenhouse effect.
        Alternative explanations for why we are 288 K rather than 255 K are welcome, but none have been put forward, which speaks volumes.

      • The one we know about has a daytime average of 380 K and a nighttime average of 120 K, which comes to a 250 K overall average.

      • Hmm, nighttime is a little high and daytime a little low. Must be some kinda Greenhouse effect!!

      • Those are the values I’ve seen, can’t be a GHE since there’s no atmosphere!

      • That’s a dumb joke Phil.

        A Black Body would go to zero with no absorption and peak higher. What mediates the temperature is the conduction of energy away from the surface layer, which is then radiated at night when there is no incoming radiation. Just like the Greenhouse effect!! Yet some would have us believe that all the slowing of the cooling at night is due to back radiation, and that all of the alleged 33 C difference between no atmosphere and atmosphere is due to CO2!! When you leave out enough small contributors the last small contributor can be made to LOOK larger than it is.

        This is what happens when unphysical thought experiments become the primary tool for communicating complexities.

      • Who’s joking? Not me.
        Dan Hughes said the following:
        A spherical black body without an atmosphere suspended in space at the same distance from the Sun as the Earth, with unity surface emissivity has a radiative-equilibrium temperature of about 278 K. Not 255 K.

        We have one, it’s not a thought experiment, it’s called the Moon and the temperatures there are as I stated.

      • Not quite, Phil. First, the surface is not uniform, nor is the surface emissivity unity. Second, there actually is an atmosphere (although very tenuous).

        Nothing’s perfect.

      • I don’t see the requirement for the uniformity in Dan’s statement? Surface emissivity in the IR is sufficiently close to unity to make no difference and invoking the lunar atmosphere is quite a stretch! Nothing is perfect indeed but the real world (Moon) shows that Dan is off the mark.

  22. Steven Mosher

    I liked this:

    “Apart from its role in the energy balance of planets and stars, it lies at the heart of all forms of remote sensing and astronomy used to observe planets, stars, and the universe as a whole. It is woven through a vast range of devices that are part of modern life, from microwave ovens to heat-seeking missiles.”

    I think we still missed the boat Judith on the application of RTE in everyday engineering.

    • Steven,
      There is far more missing than that.
      Science and science fiction are very much interwoven.

    • randomengineer

      I think we still missed the boat Judith on the application of RTE in everyday engineering.

      I’ve been under the impression that this was given up due to the perception that seemingly everyone agreed that RTE was reality.

      The article from Pierrehumbert reads as “assume a spherical cow.”

      The point of departure vis-à-vis climate seems to be the notion of equilibrium. Energy out = energy in, sure, but the atmosphere is dynamic; the Fermi satellite was just in the news re detection of gamma ray and antimatter bursts from thunderstorms. I don’t think anyone has enough of a handle on the mechanisms of energy release to space. This speaks to our understanding (and lack thereof) re energy equilibrium.

      Moreover, until such time as we have a working explanation of the MWP and the LIA (something that isn’t just ‘scientific’ guesswork aka SWAG), it seems presumptuous at the least to start running with RTE and making grand astrological-quality predictions about the weather 50 years hence. The problem with discussing RTE is that the dubious explanations of the past say that we don’t *really* know how the climate (and RTE) worked in 1276 to create those conditions; therefore we don’t understand it well enough to make useful predictions. And again, the climate of the MWP was a point of equilibrium, as was that of the LIA.

      The “out of balance” meme re climate alarmism suggests that there’s a natural equilibrium that man is upsetting. The notion of “tipping points” is also invoked metaphorically to make the same suggestion. The pushback from the skeptical circles is therefore likewise tied to equilibrium: “out of balance? Sez who?” For example, much of the claim of increased storms etc seems to come from the idea that man is pumping GHGs too fast therefore the equilibrium is upset and energy MUST be released, so increased energy in the atmosphere ought to translate to more violent forms of energy release hence more storms. Disruptive storms, too: global climate disruption.

      (And I probably have this wrong. This of course is remarkable given that I tend to follow this stuff, and if I’m that far wrong then the idea that most people have any sort of useful clue is preposterous.)

      Anyway, it’s not RTE that’s the problem. It’s that cows aren’t spherical.

      • randomengineer:

        the Fermi satellite was just in the news re detection of gamma ray and antimatter bursts from thunderstorms …

        Yep, the GRBs were already widely known; the antimatter only to folks like Lubos Motl.

        But I enjoyed hearing about that. One of the problems with the role of ‘climate science’ as harbinger of doom is that, surely, some of the fun of discovery goes out of the field.

        Feynman’s already been mentioned, for good reason. Roger Penrose is another mathematical physicist with an irrepressible sense of fun. I like that. Something about how little we know about thunderstorms tickles me, given how many of them are active over the earth every second I type this. Of course I want to know more. But the journey is the reward and all that. I think we lose a lot of that because of the pressure of the activism.

      • Steven Mosher

        “I’ve been under the impression that this was given up due to the perception that seemingly everyone agreed that RTE was reality.”

        It would be nice to collect a list of skeptics who say so.

        Further, one need not understand the LIA and MWP to make useful projections. You can, and people have, made useful projections on the back of an envelope. For example, just by looking at the Chinese stealth fighter I can tell you that it has severe problems with broadband all-aspect stealth. To a first order that can be determined on inspection, if you understand the physics. Same with climate: you don’t actually need a GCM or an understanding of the LIA or MWP to make a good first order approximation.

        http://www.realclimate.org/index.php/archives/2010/07/happy-35th-birthday-global-warming/

        In fact, unless your goal is a regional adaptation plan, I would argue (maybe just to piss people off) that the back-of-the-envelope calculation is enough information to get people to take the need for nuclear seriously. It’s enough information to get people to change the way they plan for floods and droughts and storms.

      • “Further, one need not understand the LIA and MWP to make useful projections. You can, and people have, made useful projections on the back of an envelope.”

        Oh, brother!

      • randomengineer

        Further, one need not understand the LIA and MWP to make useful projections.

        Of course this depends on *which* projections, doesn’t it? Seems to me Trenberth is right from a certain POV. That is, imagine climate as being the same curve as sunspot cycles; the current projection from Hathaway is max = 59. Look at the deviation from the fitted curve; the deviations in sunspot counts are essentially superimposed on it. Climate similarly should look the same — deviations up and down from the projected curve. The deviations (i.e. hot/cold temp records) are relative to the era average. All Trenberth really says is that we’re at spot X on the curve, so everything will be relative to this. Which of course is sorta obvious: in an Ice Age hot and cold records are relative to the period as well.

        It’s enough information to get people to change the way they plan for floods and droughts and storms.

        Certainly what I just outlined is sufficient. But this isn’t even a back-of-the-envelope calc; this is just a simple and generalised guess based on a curve shape. I’m not convinced that GCMs and 2.5 billion USD are telling me any more than this.

        For my 2.5 billion USD I don’t think it’s a lot to ask to nail the conditions causing the MWP and LIA. It might make me warmer and fuzzier re predictive ability beyond what I outlined.

  23. Reading most responses here, I think the answer to Judith’s question tips heavily to a “no”.

    • As usual, Bart, you choose (perhaps cherrypick) way too small a sample.

    • Not just “No” but “Hell no”.

      The “let’s discuss all points of view” approach, coupled with Dr Curry’s lack of knowledge about the specifics of greenhouse calculations and radiative transfer (I don’t know anything about it either, but I’m not claiming my lack of knowledge is in any way notable or unusual), has created the impression among many that the physics of the greenhouse effect is actually uncertain.

      Even topics like the direct radiative effect of a doubling of CO2, which were previously a point of agreement between knowledgeable skeptics and non-skeptics alike, are now up for “debate” as to their certainty and even existence.

      Examples:

      Half of the energy is flung out to space… (along with the model projections)

      “Like Judith Curry (see her blog, Part I and Part II), we think the calculation of a 1.2C warming for CO2 doubling is opaque and uncertain, and open to challenge. On the face of it, it may well be half that, around 0.6C. (And it’s not like those who aim to alarm us, ever exaggerate or hide behind obscure and unexplained data or calculations, is it?)”

      Climate sensitivity

      “As a starting point, the authors assume the magnitude of CO2 radiative forcing as it is used by the IPCC, however, there is currently no way to accurately verify what they assume to be the sensitivity of surface temperatures to radiative forcing due to CO2 in the absence of any feedbacks.

      Judith Curry has an interesting article on CO2 sensitivity that is well worth reading.”

      In Search of a Lost Greenhouse Effect

      “In the recent post CO2 No Feedback Sensitivity Curry questions even the very starting point of CO2 alarmism, namely a climate sensitivity of 1 C from a direct application of Stefan-Boltzmann’s Law Q = sigma T^4, which in differentiated form with Q ~ 280 W/m2 and T ~ 280 K reads dQ ~ 4 dT and thus gives dT ~ 1 C upon input of “radiative forcing” of dQ ~ 4 W/m^2.

      This is along the criticism I have expressed: To take 1 C as a starting point for various feedbacks is not science, because the formula Q = sigma T^4 as a model of global climate is so utterly simplistic: One can as well argue that one should take 0 C as starting point, and then enormous feedbacks would be required.

      Curry admits that she does not know the physics (“the actual physical mechanism”) of any atmospheric greenhouse effect, and she asks if there is anyone somewhere out there in cyberspace who does. Isn’t this strange? Is the greenhouse effect dead? Was it never alive?

      Compare with Slaying the Sky Dragon: Death of the Greenhouse Gas Theory (now #1 on Amazon ebook lists).”

      To me the issue appears to be going backwards rather than forwards. Perhaps some now grudgingly accept some energy can be transferred back to the surface (though naturally not enough to be a problem or measurable etc etc) but many others now have reduced confidence in the physics of the greenhouse effect.
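
      For reference, the no-feedback figure being disputed in those quotes is short enough to check directly. Differentiating the Stefan–Boltzmann law,

$$
Q = \sigma T^{4} \;\Rightarrow\; dQ = 4\sigma T^{3}\, dT = \frac{4Q}{T}\, dT ,
$$

      so with Q ≈ 280 W/m² and T ≈ 280 K this gives dQ ≈ 4 dT, and a forcing of about 4 W/m² yields dT ≈ 1 K before feedbacks; that is all the quoted 1–1.2 C starting point amounts to.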

      • shaper00,
        So Dr. Curry is ignorant, whereas you, our anonymous internet self-declared expert, get it?
        Hmmmmm… who should be credible in this? Someone who writes textbooks on the topic and is willing to at least admit there are problems, or someone who sources those who deliberately misquote and mischaracterize the issues and our hostess?
        shaper00 or Judith Curry… a tough call. Not.

      • Hunter,

        I’m going to have to give you a failing grade for your understanding of what you just read. I’d restate it in an attempt to help you understand it better, but having read your many many (many!) comments I already know it will simply trigger another reflexive and poorly thought out response.

      • shaper00,
        Fortunately you are not my teacher.

      • Let’s not get into a slanging match again chaps, huh?

  24. As some others above have intimated, I think you’re asking the wrong question, Dr Curry. The GHG theory, or the science behind it, seems reasonably well defined to my mind (though I get lost in the more complex calculations embedded in the physics); it’s just the APPLICATION that’s the issue.

    I don’t think many would argue that in a simple system increasing GHGs will raise temperature. It’s just our understanding of the feedbacks, sensitivity and specifically clouds that are the issue.

    I’m still reading through the entirety of the article (it takes me a while when equations are involved lol) but that’s my understanding of the situation at present.

    • Labmunkey, I agree that the APPLICATION should be the issue, I’m hoping I can get people to agree on this so we can move on to the more challenging issues.

      • Well, you’ve got one to agree for what it’s worth :-)

        Also, just like to say that the climate clash article is very informative for a novice like myself- thanks for posting it.

      • How many have to agree before we can move to the juicy bits :-)

      • 42 :-)

      • That’s 101010 in binary. It has to be the right answer, therefore. But no pressure :)

      • Dr. Curry,
        Yet how many posts here assume that AGW is basic physics and that the climate is deterministic?

      • Steven Mosher

        err, I have yet to see that

      • Steven,
        I will look for examples of this, but it seems clear that the climate science consensus effort at declaring a worldwide climate disruption off of a few degrees change in the global temp average is fairly deterministic.
        The tragic mistakes in Australia regarding not building flood control, due to the predicted droughts of AGW, seem rather deterministic as well.

  25. The Curate’s egg springs to mind: good in parts!

    On page 8 we are told that “CO2 is just planetary insulation”, like “adding fibreglass” insulation.
    I will buy that, so no more of the miraculous properties of backradiation, where a colder surface (atmosphere) increases the temperature of a warmer surface (Earth surface).
    All CO2 does, then, is reduce the loss of heat from the Earth’s surface.
    There is always more radiation of every frequency leaving the Earth’s surface than entering it (excluding temperature inversion).
    N2 and O2 also play their part in reducing the 3 methods of heat transfer and the central role of H2O phase change is particularly important.

    However elsewhere in the article we find that CO2 becomes a sophisticated thermostat.
    It regulates the temperature in a way that loft insulation cannot.
    It’s this other role for CO2 that lacks any convincing proof.

    Steven Mosher says in post above.
    ……”The trace gas argument is usually made by commenters who heard it somewhere. It’s annoying to see it used. really annoying.”….

    Well sorry Steven, CO2 is still a trace gas and its effects are determined by that fact.

    • Steven Mosher

      And Bryan, can you tell me: have you ever had to calculate the effects of CO2 on the propagation of IR through the atmospheric column? Has anyone’s life ever depended upon your ability to do this? So tell us, what physics did you use to do this calculation?

      • Steven Mosher

        It’s not too hard to calculate the thermalisation of the atmosphere by absorption of, say, 15 µm radiation, if that’s what you mean.
        However, the bulk result of experiments carried out by, for instance, R. W. Wood shows the radiative effects of CO2 at atmospheric temperatures to be very small.

      • Steven Mosher
        At the TOA the importance of CO2 becomes more evident because of its ability to radiate to space.

      • Steven,
        Has anybody’s life ever depended on you, Dr. Curry, Hansen, SoD or anyone else properly calculating how IR propagates through the atmosphere due to CO2?

      • Steven Mosher

        I can only speak for myself. Yes. Anyone who has ever worked on a weapon system since, say, 1980 will understand the importance of getting those calculations correct. Anyone who has ever built a satellite system that senses the state of clouds and temperatures throughout the atmospheric column (and warnings provided by these systems do save lives) understands the importance of getting these calculations correct.

        Just for starters, look at the charts Ray provides that show the effective height where the planets radiate to space. If you can understand that, then we are on the way. And, what’s more, you can believe everything Ray says and still be a skeptic. So you can learn something and still be a skeptic. Bonus time.

      • Except that isn’t an altitude, it is an average temperature which is being INTERPRETED to be an altitude partially based on observational data. With the inversions in temperature in our atmosphere, could you demonstrate to us how that affects the actual radiation altitude(s)?

      • Steven,
        Excellent answer. I stand corrected.
        However, do you see equal significance in the world of climate?

    • Well sorry Steven, CO2 is still a trace gas and its effects are determined by that fact.

      As far as radiative transfer in the troposphere is concerned N2, O2 and Ar are trace gases and their effect is determined by that fact!

      • Phil. Felton
        CO2 and the other radiative gases play a vital role at the TOA, radiating away the long-wavelength EM radiation.
        Near the Earth’s surface, CO2’s radiative effects don’t seem to have any major practical effects.
        Certainly not the claimed 33 C increase in Earth temperature.
        This is because it is a trace gas.
        R W Wood did a famous experiment to prove that point.

      • Yes he did a quick experiment and unfortunately his analysis was wrong. As he said: “I do not pretend to have gone very deeply into the matter, and publish this note merely to draw attention to the fact that trapped radiation appears to play but a very small part in the actual cases with which we are familiar.”
        Emphasis mine.

      • Phil. Felton

        R W Wood was probably the best experimental physicist that America ever produced.
        The quality of genius is that they quickly get to the point.
        Wood nailed two points in this experiment.
        1. Greenhouses (glasshouses) work by stopping convection.
        2. The radiative effects of CO2 are very weak at atmospheric temperatures.

        G&T did an experiment to confirm the conclusions of Wood.

        This also is an interesting paper especially as it comes from a source with no “spin” on the AGW debate.

        The way I read the paper is that it gives massive support for the conclusions of the famous Wood experiment.

        Basically the project was to find if it made any sense to add Infra Red absorbers to polyethylene plastic for use in agricultural plastic greenhouses.

        Polyethylene is IR-transparent, like the rock salt used in Wood’s experiment.

        The addition of IR absorbers to the plastic made it equivalent to “glass”.

        The results of the study show that (page 2):

        …“IR blocking films may occasionally raise night temperatures” (by less than 1.5 C), but “the trend does not seem to be consistent over time”.

        http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf

      • Near the Earth’s surface, CO2’s radiative effects don’t seem to have any major practical effects.

        Apart from heating up the atmosphere by absorbing IR from the surface which N2, O2, & Ar can’t do!

      • Phil. Felton

        ……”Apart from heating up the atmosphere by absorbing IR from the surface which N2, O2, & Ar can’t do!”……….

        They don’t need to!
        The non-IR-active gases such as N2 and O2 get their energy in the main from conductive transfer with the Earth’s surface.
        This is why the atmospheric temperature in the troposphere is at its highest at the surface.

      • No, the rate of heat loss from the surface to the atmosphere by conduction is much lower than that by radiation. CO2 is a very strong absorber of the IR emitted by the surface and transmits a lot of heat to the atmosphere (maximally near the surface). That profile exceeds what convection allows, so convection sets the actual lapse rate. The profile is the result of a radiative-convective equilibrium.

      • Phil. Felton

        The discussion on lapse rates continues between myself and Pekka below.

        The dry adiabatic (maximum) lapse rate for the troposphere is derived without radiative effects considered.

        The actual lapse rate at a particular time must in addition include the effects of convection, latent heat, and radiative effects, particularly water vapour.

      • To be clear, most of the 33 C is due to another trace gas called H2O. CO2 contributes about 20% to the greenhouse effect.

      • Jim D
        Do you realise that the dry adiabatic lapse rate for the temperature profile of the troposphere is derived without including any radiative effects whatsoever?
        The 33 C invention is based on a fictitious Earth with no oceans or atmosphere; what’s that supposed to prove?

      • Bryan,
        The adiabatic lapse rate would not apply without radiative effects. It is an upper limit for the stable temperature gradient, but this limit would not be reached without radiative effects.

      • Pekka Pirilä
        The alteration from the dry 9.81 K/km is mostly down to convection of “moist” air and its latent heat implications.

      • Bryan,
        This is a completely different thing. I was not commenting on the value of the limiting lapse rate, but on whether the limiting value is reached.

        Without radiative effects the real lapse rate would be much smaller than these limits. It would not be 6 or 9 K/km, but something much less, like 1 K/km. I do not know the value, but the radiative effects are the only reason for reaching the limit set by convection and thermodynamics, or even coming close to these limits.

        Without radiative effects there would not be any large scale convection, but only conduction and some mixing. The radiative effects create the temperature differences that drive the convection.

      • Pekka Pirilä
        I was careful to say I was talking about the troposphere.
        Above the tropopause the radiative gases largely set the rate at which energy leaves the Earth.
        So the way I see it the major limiting effects on the whole atmosphere are:
        1. TOA – radiative.
        2. Earth surface – the effects of solar radiation.
        3. Troposphere – gravitational compression, convection and phase change, with radiation playing a minor part.

      • Pekka
        Take a single molecule of air moving vertically upward.
        This gives the vertical profile of temperature against height.
        It leaves with a temperature characteristic of the surface, say 288 K.
        Its RMS velocity will be around 560 m/s.

        As it moves up vertically its KE changes to PE.
        At any point, loss in KE = mgh, i.e.

        3kT/2 = mgh,
        with g = 9.81 N/kg the gravitational field strength.

        This is where the magnitude of the lapse rate comes in.
        By a simple application of the Kinetic Theory of Gases in the gravitational field we derive the dry adiabatic lapse rate without reference to any radiative effects whatsoever.

      • Bryan,
        Your statement about the origins of the lapse rate is not true.

        Assuming that the temperature at the surface is 15 C, the average vertical speed of air molecules (N2 and O2) is about 290 m/s (total average velocity about 500 m/s). In free space a molecule sent upwards with that speed would rise for 30 s, reaching an altitude of 4300 m. Stated another way, the temperature would be close to absolute zero at that altitude, if your argument were correct. The atmosphere would also be so thin that no molecules would have enough energy to go much higher up.

        In order to get any understanding of the lapse rate one must consider the equation of state of air (near to ideal gas, when relative humidity is well below 100%) and some additional physics.
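
        A quick check of those ballistic numbers from kinetic theory (a minimal sketch; N2 at 288 K is assumed):

```python
import math

K_B = 1.380649e-23         # Boltzmann constant, J/K
M_N2 = 28.0 * 1.6605e-27   # mass of one N2 molecule, kg
T = 288.0                  # surface temperature, K
G = 9.81                   # gravitational acceleration, m/s^2

v_rms = math.sqrt(3.0 * K_B * T / M_N2)  # ~506 m/s total RMS speed
v_z = math.sqrt(K_B * T / M_N2)          # ~292 m/s, one (vertical) component
t_rise = v_z / G                         # ~30 s ballistic rise time
h_max = v_z**2 / (2.0 * G)               # ~4.3 km ballistic ceiling
print(f"{v_rms:.0f} m/s, {v_z:.0f} m/s, {t_rise:.0f} s, {h_max:.0f} m")
```

        Which is exactly the reductio: if single-molecule ballistics set the temperature profile, the atmosphere would effectively end a few kilometres up.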

      • Pekka
        The example I gave was of a thought experiment to simplify the problem and examine the energy changes involved.
        It’s quite correct.
        I carefully said:

        …“Take a single molecule of air moving vertically upward.”…
        You can then gross it up for x, y, z directions for N molecules, but the basic physics stays the same.

        As we move higher all objects gain PE, and this comes from the gas’s KE; therefore the temperature drops at 9.81 K/km.

      • Bryan,
        I cannot see any logic in your statement.

        How do you reach the conclusion that the temperature drops 9.81 K/km?

        Do you propose that there is some connection between the 9.81 in 9.81 K/km and in 9.81 N/kg?

        If that is the case, what is the connection between these two values given in completely different units?

      • Pekka
        Yes the magnitude is no coincidence.
        See page 40 of Rodrigo Caballero’s notes on Physical Meteorology.
        These are available freely online if you Google.

      • Bryan,
        The same numerical value (given as 9.8 in the notes) is a coincidence of no deeper meaning. The numerical value would be different if the SI units had been defined in a different way – or if the temperature were measured in Fahrenheit instead of centigrade. It is pure coincidence.

        The theory presented in these notes has no relationship with your earlier messages. It contains the correct theory that I was referring to.

      • The lecture notes of Rodrigo Caballero help in explaining some relevant points.

        The equation of state of ideal gas is hidden in the presentation of these notes. Going backwards from the equation (2.92) that defines the adiabatic lapse rate, you can notice that it is based on the requirement that the potential temperature theta does not depend on the altitude. The definition of theta and its connection to the entropy involve applying the equation of state.

        Then you can continue to the next chapter, 2.21, which tells us that the adiabatic lapse rate is indeed an upper limit, not the only possible value. Faster cooling with altitude would lead to instability and bring the profile back to the adiabatic lapse rate, but slower cooling is stable and can be maintained permanently. The adiabatic lapse rate is observed when something tries to create a stronger temperature gradient, as this leads to the instability which is forced back to the adiabatic lapse rate; but without radiative effects nothing will lead to this, and the real lapse rate will remain less than the adiabatic limit.

      • Pekka
        Your last two replies contradict one another.
        The first maintains your previous position.
        However the second seems to accept my account without being explicit.
        To clear the matter up for others who follow this particular discussion, do you agree with my two points:
        1. The DRY adiabatic lapse rate is calculated without reference to radiative effects in the troposphere.
        2. The value of 9.81 K/km has its origins in the 9.81 magnitude of the Gravitational Field Strength used as part of the calculation.
        A simple yes or no will suffice.

      • Bryan,
        Simple yes or no answers are seldom appropriate, as they will almost certainly be misinterpreted.

        1) Yes and no. The adiabatic lapse rate is a value whose calculation does not involve radiation (‘yes’ for that part of the answer to your first question). But the adiabatic lapse rate would not be realized without radiation (‘no’ as the second part of the answer to your first question).

        2) Strongly ‘no’. It is true that g appears in the formula (2.92), but it is pure coincidence that the divisor, the specific heat capacity of dry air, has a value very close to 1 in units of J/(g K). The value is more precisely 1.0035, and there is really nothing fundamental in its closeness to one.

      • Pekka.

        Thanks for setting the record straight.
        The maximum or dry adiabatic lapse rate is derived without reference to radiative effects.
        However you still seem to be persisting with your point that there is not a close relationship between the dry adiabatic lapse rate and the gravitational field strength (g).

        A second source makes it even more concise.
        http://www.tech-know.eu/NISubmission/pdf/Politics_and_the_Greenhouse_Effect.pdf
        Near the bottom of the page you will find

        dT/dH = -g/Cp

        A substitution of magnitudes gives the
        lapse rate as -9.8 K/km.

      • Bryan,
        Certainly that calculation is true, but you should try to understand how the Cp enters the formula. There is a lot of thermodynamics in that. Therefore your statement that it is only gravity acting on the molecules is totally false. It comes in when the equation of state of air is applied to the adiabatic expansion and combined with the dependence of pressure on the altitude.

        The other point is that the appearance of the same digits (9 and 8) in both numbers is pure coincidence, because it is a consequence of the decisions made when SI units were defined, and making the numerical value of Cp of dry air close to one was not among the factors influencing those choices. This proximity to one is really pure coincidence.
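
        To make the disputed arithmetic explicit (a sketch with standard values; note that the observed mean tropospheric lapse rate is nearer 6.5 K/km because of moisture):

```python
G = 9.81      # gravitational acceleration, m/s^2 (numerically equal to 9.81 N/kg)
C_P = 1004.0  # specific heat of dry air at constant pressure, J/(kg K)

gamma_dry = G / C_P * 1000.0  # dry adiabatic lapse rate, K/km
print(f"dry adiabatic lapse rate = {gamma_dry:.2f} K/km")  # ~9.77 K/km
```

        The 9.77 tracks g only because C_P happens to be close to 1000 J/(kg K) in SI units, which is Pekka’s point about the coincidence of digits.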

      • Pekka

        At the start of the discussion I pointed out that as molecules of air move higher away from the surface they gain potential energy.

        This work done against gravity has only one source of supply:
        the internal energy of the gas.

        The molecules MUST lose KE if they are to gain PE.
        Therefore their temperature must drop.

        Now PE = mgh.
        So g is retained until the last line of the calculation.
        To imply that its presence is a mere coincidence is highly misleading.
        Any careful reader will come to the conclusion that all my points are essentially true.

      • Bryan, you are quite correct. The answer to your question 1 is very simple: “yes”.

      • Bryan,
        Your theory is so badly incomplete that it is useless. Of course the g enters the formula as a factor describing the strength of gravitation. This is important, because the pressure gradient is determined by it. The change in pressure is transferred to a change in density by the equation of state, but this must be done in a way defined as adiabatic, as the temperature is changing at the same time. In this connection between pressure, density and temperature we get the specific heat capacity into the calculation.

        The gravity is not considered on the level of individual molecules but on the level of a parcel of gas that is large enough to move without significant transfer of heat or matter through its bounding surface. The speed of motion of this parcel is not important; indeed it is natural to think that it moves very slowly compared to the speed of individual molecules. The reduction in the speed of molecules is seen in the change of temperature, but this is calculated from the change in pressure and the equation of state as I said above, not from the change of the potential energy of individual molecules in the gravitational field.

        It is essential to take into account the properties of air as a nearly ideal gas. It is not possible to get right answers without such considerations. The lecture notes describe all this. If they do not explain everything well enough, you should start with an introduction to thermodynamics and proceed so far that you understand well the thermodynamics of adiabatic expansion of ideal gas.

      • Pekka
        ….“when SI units were defined, and making the numerical value of Cp of dry air close to one was not among the factors influencing those choices. This proximity to one is really pure coincidence.”….

        So to paraphrase you, the value of Cp being close to unity is a pure coincidence.
        It could have been any old haphazard constant.
        Well, let’s see.
        Potential energy gained in lifting one kilogram of air through 1000 m:
        PE = mgh = 1 × 9.81 × 1000 = 9810 J

        If this energy is gained by the air dropping in temperature we would expect
        heat energy lost = m × Cp × ΔT
        9810 J = 1 kg × Cp × 9.81 K

        This implies that
        Cp = 1000 J/(kg K)
        Cp = 1 J/(g K) in grams.
        So by simply using a potential energy gain from a measured temperature drop we find the heat capacity has to be close to unity.
        Or by using thermodynamics to find the heat capacity we can then predict the temperature drop.
        The physics is quite consistent, and if a correct, consistent use of units is applied the constants must give definite magnitudes.
        The gravitational field strength must be close to 9.81 N/kg near the Earth’s surface, and the heat capacity has to be near unity at the Earth’s surface when the lapse rate is quoted at 9.81 K/km.

      • Pekka

        You say
        ….”The gravity is not considered on the level of individual molecules”……
        This is utter nonsense!

        What is more, it must come as a shock to anyone looking for information here that a number of apologists for AGW (including Pekka) did NOT REALISE that radiative effects play no part whatsoever in the calculations of the dry adiabatic lapse rate.

        That gravity plays the dominant role in the temperature profile of the atmosphere is news to them!

        I will produce a more extensive reply to Pekka later.
        However to deny that air molecules are subject to the force of gravity is so preposterous that I had to nail this gross error instantly!

      • Don’t worry about more arguments. At least not for me. I am not going to read your messages anymore. Your insulting style has gone beyond what I care to read.

      • Pekka,

        Why would the adiabatic lapse rate not apply without radiative effects?
        As the effective radiating altitude would be at the surface, the surface would be much colder.
        But wouldn’t the TOA be even colder?

      • Peter,
        The adiabatic lapse rate is the maximum possible lapse rate. It is reached when other mechanisms try to create a higher lapse rate, and the radiative effects are the only effects that push significantly in this direction.

        Without convection the lapse rate would be much larger and the greenhouse effect much stronger. The earth surface would be some 30 C warmer than it is now. A temperature gradient that is larger than the adiabatic lapse rate induces strong convection, as the density difference between different layers exceeds stability limits, but a gradient that is less than the adiabatic lapse rate is stable and prevents vertical convection, as it does in the stratosphere.

        Thus the adiabatic lapse rate is obtained when some mechanism tries to create a larger gradient and the convection stops this attempt. Without radiative effects this will not occur.

      • But why should the lack of radiative effect prevent the gradient? Surely, even though the surface would be colder, the stratosphere would be colder still?

      • There will not be any large gradient unless something drives it. It is more natural for the temperature differences to disappear than to appear. Only a strong driver can invert this natural tendency and the radiative effects are the only strong driver that exists.

        They drive the temperature difference because they allow for a significant transfer of heat from the top of the atmosphere to space while the heating comes in at the surface. Without radiation from the atmosphere, both the incoming flux and the outgoing flux connect the surface directly to space (and the sun). The atmosphere settles gradually to a state of small vertical temperature differences.

      • Pekka,

        Thanks. I see your point.
        However, that raises yet more questions. As the atmosphere would not be able to lose heat (well, very little) it would have gotten steadily warmer until it stabilised at something approaching the maximum daytime surface temperature.
        Which would make it much warmer than it is now, wouldn’t it?
        Of course, that also assumes no water in the system.

      • Peter,
        Because the atmosphere would not prevent radiation from the surface from escaping, the surface would reach an effective average temperature determined by the solar irradiance and albedo (and to a lesser degree by the IR emissivity, but this is certainly closer to black body than the absorption of solar radiation).

        If the earth were totally black and the influence of the atmosphere were left out completely, the earth would radiate as strongly as a black body at 278 K. There would, however, be a very large difference between the equator and the poles.

        At the poles the temperature would be extremely low, as the sun would not heat them at all. The temperature could drop to the level of cosmic microwave background radiation, or 3 K.

        At the equator the daily average would correspond to 296 K and the difference between night and day might in theory reach 390 K. At the latitude of 60 degrees the effective average would be only 209 K.

        These numbers are theoretical, but they tell how much the temperatures would vary if radiation completely determined the temperatures. The large temperature differences between latitudes would induce convective circulation basically similar to the Hadley cells of the present earth, but certainly very different in details. That would also create some temperature gradients, but I do not know anything more about this. In any case it is obvious that a large part of the earth surface would be very cold, and the existence of sufficient water would lead to a snowball earth with a high albedo and very low temperatures everywhere, even on the equator, due to this high albedo.

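        Some of those figures can be reproduced from the same Stefan–Boltzmann balance applied to different averages of the insolation (a sketch for an airless, zero-albedo Earth; the 60-degree figure depends on how the orbital averaging is done, so it is not attempted here):

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1366.0       # solar constant, W m^-2

def t_eq(absorbed_flux):
    """Radiative-equilibrium temperature for a given absorbed flux, K."""
    return (absorbed_flux / SIGMA) ** 0.25

print(f"subsolar point:      {t_eq(S):.0f} K")            # ~394 K
print(f"equator, daily mean: {t_eq(S / math.pi):.0f} K")  # ~296 K
print(f"global mean:         {t_eq(S / 4.0):.0f} K")      # ~279 K
```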

      • That’s an amazing bit of physics you describe, Pekka…where our atmosphere traps radiation. The way a house of mirrors traps a beam of light, I suppose. Let’s take a short pulse from an IR laser and trap that radiant energy in a gas, okay? That would be really exciting to watch.
        However, be careful! Don’t let it get out of control, with that “back radiation” greater than incident radiation. You’ll disappear into a mushroom cloud and that would make us all sad.

      • Pekka,

        I didn’t say the surface would be warmer. The average surface temperature would be at the radiative equilibrium temperature.
        But the atmosphere would not be able to lose any heat through radiation, and would only be able to lose a small amount due to conduction to the surface at times and places that the surface was cooler. Convection pushes heat upwards, and dry air is a good insulator.
        What would stop the atmosphere from gradually heating up towards the maximum surface temperature?

      • As I posted somewhere else here, a pure O2/N2 atmosphere would also be 33 C cooler at the surface. I am still waiting for an alternative explanation to even be put forward of why the surface temperature is not 255 K.

      • ……“still waiting for an alternative explanation to even be put forward of why the surface temperature is not 255 K”….
        Well, Pierrehumbert gave a hint when he said the atmosphere acts like insulation.
        Radiation, however, is not the only means of insulating the Earth; in fact the N2 and O2 molecules find it very difficult to lose their thermal energy.

      • …and where would the N2 and O2 be getting their thermal energy from? The sun heats the surface which heats the atmosphere, but the average surface temperature is 255 K, so how would the atmosphere become warmer?

      • During the day the Sun heats the exposed Earth surface.
        In the absence of radiative gases the day heating effect would be even stronger than at present.
        The evaporation from the Oceans would be higher.
        Latent heat collected by day would be released at night as the atmosphere cooled.
        To imagine, as you do, that an Earth without an atmosphere and oceans would have exactly the same average surface temperature does not seem very likely.

      • If you imagine a kind of water vapor that has no radiative effect, but has latent heat, the atmospheric temperature is still not going to exceed the surface temperature. Even with latent heat the equilibrium lapse rate is negative.

      • Jim D

        The insulation effect of the atmosphere depends on
        1. Conduction
        2. Convection
        3. Radiation
        4. Phase change
        5. The Sun’s effect on the Earth’s surface would be stronger than at present.
        For your proposal to become true when 3 is withdrawn, 1, 2 and 4 would have to become zero and the Sun’s radiative output would have to be reduced.

      • You have not explained how the surface could exceed the radiative equilibrium temperature with any of these processes. An atmosphere transparent to IR has no insulating effect at all.

      • Actually the temperature might be rather high at the equator (average about 23 C if albedo is zero), but much colder at high latitudes. Thus there could be evaporation in the tropics, but an ice age would be unavoidable as the water would end up in growing glaciers.

        The atmosphere would be ineffective in moving heat and the lapse rate in the atmosphere would be small even in the tropics.

      • Jim D

        ……”An atmosphere transparent to IR has no insulating effect at all.”…….
        A simple experiment would prove this wrong.

        Ambient outside temperature 5 C (say); a cup of water with a lid, at say 90 C, is placed inside an enclosed box made of a double layer of polyethylene.
        The gas inside the box is pure N2.
        This makes the experiment transparent to IR.
        The temperature of the water is taken over a suitable time scale.
        The experiment is repeated with the cup outside the box.
        I hope you agree that the cup of water inside the box would lose heat less quickly than outside.

      • From the G&T paper:
        “Alfred Schack, who wrote a classical text-book on this subject [95]. [In] 1972 … showed that the radiative component of heat transfer of CO2, though relevant at the temperatures in combustion chambers, can be neglected at atmospheric temperatures. The influence of carbonic acid on the Earth’s climates is definitively unmeasurable [98].”

        Of course, the DIY Jackasses-Of-All-Sciences at The Team know better than actual professional specialists in every and any field.

  26. Correction
    ……“There is always more radiation of every frequency leaving the Earth’s surface than entering it (excluding temperature inversion).”…..

    Here I intended to add “at night”, with reference to “backradiation”.

    • Bryan,
      Does CO2 move ocean currents?
      Does CO2 inhibit evaporation or precipitation?
      Does CO2 change salinity?

  27. Dr Curry, Are there in your view any new facts or concepts in Dr Pierrehumbert’s article? Most of it looks broadly familiar. Is there anything, for instance, which has not been covered (perhaps distributed among several different posts) at Science of Doom? If so, would you be able to draw our attention to any significant departures?

  28. Well, you’re persistent if nothing else.

  29. Sorry about the typos: ‘Pierrehumbert’ and ‘…able to draw…’

  30. Judith,

    When you put the satellite map of the cloud cover:
    http://uk.weather.com/mapRoom/mapRoom?lat=10&lon=0&lev=1&from=global&type=sat&title=Satellite

    Over the map of the Ocean Surface Temperatures:
    http://wattsupwiththat.com/reference-pages/ensosea-levelsea-surface-temperature-page/

    There are two areas in the Arctic that are pumping great amounts of evaporation from the warm currents into the dense cold air.

    The colder ocean surface area over the equator also pretty much lines up with the ocean salt changes of the 1970s, as it was expanding too.

    • Steven Mosher

      Joe.

      Do you believe those pictures from space are accurate?

      If so, do you believe in the physics used to calculate those pictures?

      If so, then you accept what ray is describing.

      If you reject Ray’s physics, then those satellite images are wrong. In order to calculate them, in order to transform an EM field that hits the sensor INTO an image, you apply certain physics. Precisely, the physics that Ray describes.

      Thus, the evidence you cite REQUIRES a physics you deny, to actually be true.

      If you doubt this, I suggest you go look at the description pages of the data products produced by satellites.

      • Steven,

        The more I dig into science, the more mistakes I find.
        So, what is correct?
        The whole area of planetary rotation and the energies it creates was missed, with much garbage in its place. Angles of deflection of solar energy on a moving planet were missed. I found the same mistakes in power generation.
        In some cases, just simple measuring would show mistakes.
        But this brings up another problem…where is NASA measuring from?
        The Sun’s corona or the Sun’s core?
        Do the measurements from satellites include trajectory angles?
        As I answer questions, many more come into play that were not contemplated.

  31. Michael Larkin

    Being mathematically challenged, I find all this very difficult to grasp – but I have always accepted that the GHG effect helps keep temperatures on earth higher than they would otherwise be. So I’m not inherently looking for there to be any flaw in that principle, so much as for a way to understand the phenomenon without recourse to a level of mathematics that is beyond me.

    Just recently, I came across a post at Jo Nova’s blog by Joe White:

    http://joannenova.com.au/2011/01/half-of-the-energy-is-flung-out-to-space-along-with-the-model-projections/

    Now, the purpose of this post is to challenge the consensus view, but what Joe does is to develop, over a series of stages, a diagram explaining the earth’s energy budget. He displays a sequence of 6 diagrams, and I’m wondering if we can all agree that what he says is uncontroversial as far as diagram #4.

    I can certainly follow his logic that far, and I would be interested to know whether anyone here would challenge that. I’m not thereby trying to engage in polemic, but rather to establish what we can all agree on, and whether, indeed, this is represented at diagram #4. And if not, why not, hopefully without the need to launch into maths that will be beyond me and perhaps others here.

    I think it would be really useful to try to establish what we can all agree on before taking things further, and that’s all I’m trying to do here.

    Here’s hoping for some constructive responses…

  32. Michael Larkin

    Correction – the article is by George, not Joe, White.

  33. Pierrehumbert’s article includes the following blatant falsehood:
    ” The same considerations used in the interpretation of spectra also determine the IR cooling rate of a planet and hence its surface temperature. ”
    The title of his piece is also misleading, suggesting that temperature is determined by radiation.
    As has been discussed repeatedly here and elsewhere, you cannot determine the surface temperature unless you can quantify the heat flux from all sources, including convection and evaporation.
    I am surprised that Pierrehumbert would make such a claim that he knows perfectly well is not true.
    I’m afraid that Judith seems to have a blind spot on this issue. It was raised repeatedly on the threads linked at the top of the post, by Leonard Weinstein, David L. Hagen, Nullius in Verba, philc, Tomas Milanovic, Gordon Robertson, myself and others.

    • I am not sure if blatant falsehood is the best way to describe this.
      Many of the AGW community seem to think that radiative cooling is the only issue….even as they still admit they do not understand clouds, think water vapor is only a feedback, and only positive, and seem to gloss over the oceans. Especially since the oceans stopped cooperating so much on pesky things like OHC.
      I see it more as a general inability to do more than find things that support the pre-determined answer. Not deliberate cynical lying.

      • Everybody admits that very much is unknown. At one end of the spectrum are those who do not accept essentially anything. They may disagree on the human role in increasing CO2 in the atmosphere or the basic radiation physics of CO2. At the other end people may think that IPCC is far too dismissive and that only the activists with most threatening views are close to reality. Most of us are somewhere between these extremes.

        My interpretation is that Judith is trying, in this and some recent posts, to find out how far we can come from the extreme of no knowledge towards accepting certain results of science without losing too many behind. The progress has been very slow and it is not possible to get everybody in, but we should be able to proceed to the next issues, where the disagreement is much more widespread and where the scientists themselves start to have significantly differing views.

        Many comments of this chain have already gone into these more difficult issues (e.g. climate modeling, or cloud feedback), but I think their place is not here but will come soon in some other chain.

      • Pekka,
        There is a built in prejudice that science is correct and any outsider should stand in his place as we are 100% correct and should not be questioned.
        I found a great many answers but not by prejudiced science but by good hard detective researching.
        This in many places conflicted with current science.

      • Joe,
        Science is never 100% correct, but in many cases it is 99% correct or even 99.9% correct (as long as it is understood that these figures are only illustrative, without a well-defined meaning).

        It is really the same issue as in accepting Newtonian mechanics in spite of the fact that Einstein’s special relativity and quantum mechanics have shown that Newtonian mechanics is not strictly correct. In the same fashion the radiative calculations are basically correct, although we know that we do not know all the details well. They are based on the well accepted theories of quantum mechanics and electromagnetism, but to calculate anything we must know many features of the atmosphere, which we can pick from experimental observations without recourse to theories – or we can alternatively use thermodynamics and some fluid mechanics to calculate the adiabatic lapse rate.

        All this is known well enough for performing reliable calculations of useful accuracy, but not enough for determining the real climate sensitivity with feedbacks.

        (There are no effects related to rotation of the earth that would prevent this approach or make it suspect, but the rotation is certainly one important factor in more comprehensive calculations on the level of GCMs or even at a somewhat lesser level.)
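
        To make the lapse-rate remark concrete, here is a minimal sketch of the standard dry adiabatic result, Gamma = g / c_p (the constants are textbook values for dry air):

        ```python
        # Dry adiabatic lapse rate, depending only on gravity and the
        # specific heat of dry air at constant pressure.
        g = 9.81      # m/s^2
        c_p = 1004.0  # J/(kg K)

        gamma = g / c_p
        print(f"{gamma * 1000:.1f} K/km")   # ~9.8 K/km for dry air
        # The observed mean tropospheric value (~6.5 K/km) is lower because
        # latent heat released in moist convection offsets part of the cooling.
        ```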

      • Pekka,
        This is where your prejudice is showing.
        The Law of relativity fails when brought back into the planet’s past: the system was very much different, with no evaporation, and the planet was rotating faster. The atmosphere was completely different, with much less friction and increased motional speed.
        Quantum mechanics fails as it does not take motion and rotation into account. In a lab, two points are easy to connect, but in space there are NO fixed positions, as everything is in motion. This also does not take rotation into account.

        Pekka, unless you know every single area has been covered to prevent contamination or mistakes, the experiment is prejudiced by the outcome wanted and not the true outcome.
        Many areas in science are now coming under fire for not being able to recreate experiments published and turned into learned science.

      • Joe,
        I knew what you were going to say. I have already noticed that you give no value to the huge accumulated knowledge base of science, while I believe that this knowledge base is essential.

        The most important single characteristic of science is precisely that it is a process that continuously improves knowledge by the process of using earlier knowledge as far as it helps and adding to it where it fails. In doing that it is always critical of both earlier knowledge and the new ideas, but being critical does not mean denying its significance.

      • Pekka,
        I do not deny science significance.
        Just correct science.
        If you do not know that each point on this planet is different, due to the different energies at each point, how can you understand the complexity of it?

      • Joe,
        Nobody knows what is correct science, but capable professional scientists have a better basis for making the judgment.

        In physical sciences the order of magnitude of various factors is often clear to scientists working on issues close enough to the problem considered. They can also verify that their intuition has not failed. The effects that you have introduced in these discussions appear to be several orders of magnitude (factors of ten) too small to be of significance.

        When I mentioned 99% correctness, I did not want to imply that the atmospheric calculations would have an error of less than 1%. In this case I only have 99% trust that the basic effects are understood correctly and can be calculated with a useful accuracy (which allows for sizable errors). The list of these basic effects is short, much shorter than the list of all important features of the atmosphere.

        In general the micro-level physics of atmosphere, oceans and radiation is well known (of the 99.9% class), but only a few macroscopic effects can be calculated reliably from microscopic theories and general laws of physics. Even for these the accuracy is not perfect. The behavior of radiation in a fixed atmosphere is one of the things that can be calculated, but fixing the atmosphere is never really realistic. Still, the results obtained are useful, and Pierrehumbert’s article discusses for the most part issues that belong to the well-understood class.

      • Simple question.

        Why is the planet cooling then if ALL these scientists know best?

      • Joe,
        What happens for the average temperature in one decade or less is definitely not one of the things that can be calculated from well known laws of physics.

        All my comments are comments of a physicist who has never worked with atmospheric sciences. I have strong opinions only on issues that are simple enough to understand based on what I know about physics combined with only basic knowledge of the atmosphere.

      • Leonard Weinstein

        Pekka,
        Up to 2000, many CAGW supporters, many professional societies, the many governments of the world, the main news media, and yes, you, claimed the evidence is in, the models are conclusive, we know enough, the arguments are over, move on. As the temperature stopped rising (on average), as the hockey stick was shown to be poor evidence, as the low latitude hot spot in the troposphere failed to show up, as the water vapor content of the mid to upper troposphere was shown to be flat to declining, as the ocean heat storage was shown to be inadequate, and as models were shown to have no real predictive capability (and etc.), many admitted “Everybody admits that very much is unknown.”, and many, including yourself only did so after being shown to be wrong on many issues. It is astonishing to me how the extreme supporters of CAGW backtrack and claim they knew there was real uncertainty all along, and expect to not be admonished.

      • Leonard,
        We know that many people have been thinking that they must simplify the message. It is, however, not necessary to go any further than to the IPCC reports to find out that the incompleteness of the knowledge has been acknowledged by the scientists.

        The reports are far from perfect in describing the uncertainties, but most definitely they admit the existence of major uncertainties and have done it in all four reports over twenty years.

      • As ACYL sez, below, that “admission” of uncertainties seems to have few or radically minimized consequences. IMO, the proper conclusion from that admission is that the systemic response of the atmosphere is well outside any possibility of modeling given current understanding of its processes.

        Yet the IPCC and Warmists leap immediately to the opposite position, and claim the little they know is enough to go on. I call BS.

      • AnyColourYouLike

        ‘many admitted “Everybody admits that very much is unknown.”, and many, including yourself only did so after being shown to be wrong on many issues. It is astonishing to me how the extreme supporters of CAGW backtrack and claim they knew there was real uncertainty all along, and expect to not be admonished.’

        Yup

        That certainly fits in with my own observations of how the conversation has gone in the last few years (though, I don’t know of Pekka’s specific positions, so only speaking generally).

        For a layman like myself, trying to get to grips with the science but having a lot of it go over my head, there is a basic trust issue here. Certain cAGW scientists make strong assertions of confidence – “it’s all simple physics” etc, which may not mean exactly what Al Gore and others meant when they said “the science is settled”, but these kind of phrases resonate and reinforce one another.

        So sceptics point out that the “simple physics” is agreed, but that the chaotic, non-equilibrium effects of physics equations are far, far removed from “simple”, in modelling the real atmosphere.

        “Yes, of course we know that” chime back the cAGW proponents, “what, d’ya think we’re dumb or something?”

        I don’t think they’re dumb – I do think, however, that many subtly exaggerate certainty, until they’re challenged about it. Then they backtrack, but still argue on the basis of their own scientific authority, as if no backsliding had ever happened. As if the media wasn’t still full of the same catastrophic predictions they now admit are open to uncertainty. Why don’t these guys ever correct the record?

        On this particular issue, there seems to be an air of “Look, we’ve shown you the physics, it’s a simple step from there to global warming. Why won’t you take that step? You must be either dumb or a denier! I won’t waste time talking to the likes of you.”

        I will continue to try to be open-minded, but as the old saying goes – don’t p**s down my back and tell me it’s rainin’!

        So we have to be careful about where we draw the line. Saying one believes in the physics of the greenhouse shouldn’t be extrapolated to mean one believes in cAGW.

        You’re right, Hunter.
        The problem here is partially correct, with MINIMAL CO2 interference.
        Ocean surface salt, starting back in the 1970s, interfered with the penetration of solar radiation to the depths where the heat would normally be dispersed. If the solar radiation hit shallow water, the sand below absorbed some of the radiation before releasing it, depending on depth.
        So, in essence, the surface salt reflected back solar radiation by blocking partial penetration, similar to the CO2 hype.
        But now the ocean heat is gone and moved into the Arctic waters where the cold air is creating massive precipitation.

        It is REALLY impressive to see almost ALL of the land mass in the northern hemisphere covered in clouds (water vapour).

      • OK, sorry, let’s replace ‘blatant falsehood’ by ‘error’ :)

      • …or ‘blatant error’…. lol

        It’s been a while since I looked at the relevant charts in the IPCC report, but I think they do recognize that the water vapor feedback is both positive and negative. An increase in temperature leads to both an increase in evaporation as well as an increase in the ability of the atmosphere to hold water vapor. That’s a gross oversimplification, but it is an area of ongoing research.

      • Water vapor feedback (always positive) and lapse rate feedback (always negative) are typically combined because the variance of the combination is less than the variance of the individual components. The net is a positive feedback under most circumstances, but may vary in some tropical locations depending on circumstances.

  34. Dr. Curry,
    You mentioned Slaying the Sky Dragon. I have not read it, but would be interested in reading your review. A well-written review, describing points of agreement and disagreement, can truly advance understanding of the science. I hope to read a lengthy review of the book here. It’s one vote. I hope you will consider it.

  35. An atmosphere is a mixed gas of matter and photons.
    — Raymond Pierrehumbert

    I can only imagine the awful mess Dr. Pierrehumbert would create if he was given a tough-but do-able engineering problem—for example: passively storing heat energy for a long time. Imagine if the task was to store a hot fluid so a working man’s coffee was still hot for his afternoon break. An engineer would be thankful the fluid is dense and has a lot of thermal mass, then would attack the problem by reducing the most-effective method of moving heat energy around…conduction. Hmmm, how will we reduce conduction to a practical minimum? How about a bit of a vacuum…that would work (similar to a planet in space). Let’s make sure the supporting structure has little direct contact because that would be a thermal path to the outside world. With a vacuum, we’ve also addressed the next most-effective way of moving heat energy…convection. Outstanding! The coffee will convect itself and integrate its temperature, but that’s irrelevant. So, we start with coffee at say…90C and we want it to still be 60C eight hours later. The fluid will radiate, and we can cheaply get a little benefit by silvering the surfaces, but the delta T is low and the radiation is small—we can ignore it. Problem solved!

    How would Dr. Pierrehumbert solve this problem? Every good climate scientist knows about “back radiation” so he’ll suspend a glass bottle in a “greenhouse” gas with a lot of CO2. It would make sense to maximize the CO2 for a given area, so he’d pressurize the gas…not a lot, perhaps a couple of atmospheres. He’d need to be very careful to make sure this chain reaction does not get out of control and meltdown…what with all that “back radiation” amplification and all. A careful design would be required. The resulting flask would be expensive, but it would work great (we have models to prove it)…so a government grant would be appropriate to help the worker with the purchase price. Oh, it would be wonderful to demonize the vacuum flask (don’t you care about the children?) and eventually make them illegal…we’ll make workers use the Pierrehumbert flask for their own good.

    I know the clamor which will follow. Radiative balance. Yeah, yeah. We’re fortunate to live in a thermal system where there’s a lot of incoming radiation (big delta-T) quickly storing heat energy in large, watery thermal masses…with a slower release back to space (smaller delta-T). If there is any heat left to start our morning in any kind of comfort (often not nearly enough!)…that’s why.

    If you prefer the Pierrehumbert Flask for your coffee…don’t let me stand in your way. Have at it.

    From my perspective, I can readily accept the principles of radiative transfer and the fact that the earth is warmer than it otherwise would be as a consequence of greenhouse gases including CO2. I have no problem with this at all. However, after reading some of the earlier threads again, I still have trouble accepting that we can quantify the effect at the surface of a doubling of CO2 with no feedbacks, i.e. supposedly 1.2 deg C. I freely admit that I don’t understand the complexities of the models etc, but from my simplistic point of view, if we can’t accurately quantify the ‘no feedbacks’ effect then we don’t seem to have a valid starting point for further work that examines the effect of feedbacks and makes valid long term projections. Finally, I am also still confused about what parameters are actually measured and which are inferred or calculated by modelling.

    • You’re asking the right questions, Rob. I don’t deny a temperature effect from CO2…if you want to describe atmospheric CO2 as a radiative dissipator or diffuser, that’s way-cool by me. I simply don’t think it’s a significant contributor to our surface temperature…the effect is far too small to measure or quantify.
      I believe something very strange…that 1,000,000PPM of N2, O2 and Argon atoms and molecules have temperatures. 390PPM of CO2 bathed in IR radiation contributes to the N2/O2/Argon temperature, but the reverse is also true…and is a much, much (much!) larger contributor. I do not believe in Little Carbon Dioxide Suns…though some apparently do.

      There are many bitter clingers around here who will argue, but the human-caused global warming theme is dead. Sorry, Warmies, you had your day in the sun. It’s over. Don’t fight the inevitable…it’s time to think about getting real jobs.

    • Thanks Rob, I think that’s a very good summary of where many of us are. Yes, we agree that greenhouse gases have some warming effect. No, we don’t agree that this effect can be accurately quantified and it is probably lower than the 1.2C figure sometimes quoted.

  37. Judith asks

    ……..”I’m asking these questions because I am wondering whether any progress in understanding of an issue like this can be made on the blogosophere”……

    The nearest we came to a consensus on this site was around the atmospheric structure proposed by Leonard Weinstein and Nullius in Verba.
    Previously on SoDs site, Leonard and William Gilbert said much the same thing.
    Leonard thought that even SoD was in broad agreement, but I’m not so sure that this is the case.
    So dialog is useful.

  38. Tomas Milanovic

    Judith you wrote

    But the basis of greenhouse don’t seem to me to provide much intellectual fodder for a serious debate. I’ve lost track of the previous threads (with over 1000 comments on one of them); can anyone provide a summary of where we are at on the various unresolved discussions?

    If by “basis of the greenhouse” you mean the absorption and emission of IR by gaseous molecules (what is what Pierrehumbert talks about) then yes, this point is utterly uninteresting.
    The Einstein coefficients are known since … umm … Einstein and he didn’t even need QM to compute them correctly. That is 100 years ago.
    Statistical mechanics has been known since Boltzmann, well over a century ago.
    All this is VERY old established physics.
    Gas lasers and thermometers work as expected.
    No, I don’t think that there is any debate worth mentioning in there and this was the case for centuries already.

    However if by “basis of the greenhouse” you mean the dynamics of the Earth system in all its components (solid, liquid and gas) and the role of the elementary radiative properties of gaseous molecules in these dynamics, then you open a problem whose formidable complexity and difficulty dwarfs the problems of quantum gravity.
    In quantum gravity there is at least a serious, quantitative, mathematically well-developed theory, string theory, while for the problem of the Earth’s dynamics such a theory doesn’t even exist.
    We are still in prehistory, with crude numerical computer simulation and even cruder naive statistics, where people arbitrarily average everything and anything because they don’t know what to do with the data.

    I think that I have written in my very first post here why I became interested by the climate science and where I see the problem.
    I certainly don’t see any problem in boring, trivial details about radiative physics or statistical mechanics.

    I see the main problem in the paradigm.
    95% of the climate scientists (I don’t say 100 because there is at least Tsonis) believe that the system can just be treated by 19th century physics based on static equilibria.

    In papers you read all the time phrases like :
    “… the perturbed system returns to equilibrium”
    “… non linearities are just noise that cancels out”
    “… after interaction the system stabilises in a new equilibrium state”
    “… climate is not weather and time averages can be deterministically predicted”
    Etc

    All these statements are just variations on the same theme – the system is a simple (linear) system in equilibrium where everything that is not linear is stochastic noise irrelevant for the equilibrium.
    And what I learned here with surprise through the exchanges was that indeed almost nobody realizes how hopelessly unrealistic and unfounded this paradigm is.

    To illustrate what the REAL problem of climate dynamics is, I have posted in the Tsonis thread a link to this paper : http://amath.colorado.edu/faculty/juanga/Papers/PhysicaD.pdf

    Despite the fact that this paper finds a MAJOR result and is the right paradigm for a study of spatio-temporal chaotic systems at all time scales, and so also for climate, I suspect that nobody has read it.
    And probably only few would understand the importance of both the result and of the paradigm.
    Of course the climate is more difficult than even a network of chaotic oscillators because, among other things, the coupling constants vary with time and the uncoupled dynamics of the individual oscillators are not known.
    Also the quasi ergodic assumption taken in the paper is not granted for the climate.

    Yet even in the general case it appears quite clearly that the system doesn’t follow any dynamics of the kind “trend + noise” but on the contrary presents sharp breaks, pseudoperiodic oscillations and shifts at all time scales. Of course the behaviours in the case when the coupling constants vary will be much more complicated, and these are not studied in the paper.

    Unfortunately people working on these problems are not interested in climate science, and those working in climate science are not even aware that such questions exist, let alone have adequate training and tools to deal with them.
    Concerning these paradigm issues, this belongs obviously to the unresolved questions and as far as I am aware, it is only on blogs and among others on your blog that they are discussed.
    There are other people apparently knowledgeable in dynamical systems and numerical solutions (Jstults, Dan Hughes and some others …) that I have seen contributing here too.
    Actually you yourself, having had a “Navier Stokes past”, are obviously more sensitive to these questions than the crushing majority of climate scientists.

    Last but not least.
    In the above I have dealt with the theoretical paradigms for the climate science only.
    You will have noticed that I didn’t say a word about numerical models.
    This has 2 reasons.

    First is that numerical models are a special case – they are neither theory nor data.
    They are simulators in the same sense as you have flight simulators.
    I have been flying both with real planes and with simulators.
    A simulator gives you something that looks like flying with no obvious aberrations and is suitable for rudimentary training.
    Yet when you pilot a real plane you understand immediately that the behaviour of the plane and of the environment is quite different from the simulator.

    Second is that it would need a long post to explain why the climate simulators must always fail over long periods, because they don’t solve and will never solve the PDEs governing the real dynamics of the system.
    But as this post has already been long enough, I will keep it for another occasion.

    • simon abingdon

      Actually Tomas, as a retired airline captain I should like you to know that the airliner simulator is uncannily similar to the real thing in every practical respect. The first time I ever landed a Boeing 767 was at Orlando Sanford airport after a 10 hour flight from London Gatwick. (I didn’t tell the passengers.) However, I’m not suggesting that climate simulators are anywhere near being in the same league.

    • Tomas,
      I agree fully that there are no valid theoretical reasons to trust that the earth system has simple statistical properties.

      I do not believe that the climate models have such properties unless they are forced to such stability. Thus we cannot use the models to prove simple ergodic properties of the earth system. Here the similarities of different models may be totally misleading, as one of the first things a modeler has to learn is how to make the model stable enough. Soon he starts to think that this stability is inherent in the system being modeled, even when there is no evidence for that.

      The only arguments that we may have to support stable statistical properties of the earth system come from empirical observations, and it is questionable whether they support this conclusion rather than the opposite. There are all kinds of irregular cycles, from short term fluctuations to glacial cycles, and as far as I understand none of them is really well understood. Of course many details are known about ENSO type cycles of duration suitable for collecting empirical data, but even their understanding is badly incomplete.

    • Tomas, I agree that the real issues are feedbacks, sensitivity, and nonlinear dynamics. On this thread, i’m hoping to declare the basic physics of ir absorption/emission pretty much a closed subject, so that we can move on to the more complex topics, without getting the threads cluttered up with “the greenhouse effect doesn’t exist.” That is what I am trying to do, anyways

      • so that we can move on to the more complex topics, without getting the threads cluttered up with “the greenhouse effect doesn’t exist.”

        Are we there yet? Or close?

      • Tomas. I read what you have written, and I do not pretend to understand it in detail. The impression I get is that you are saying something like this :-

        “The earth’s atmosphere is complex, not to say chaotic. Anyone, like the IPCC and climate modellers, who claim to have captured the physics of how the atmosphere works, are simply wrong. The atmosphere is much too complex, so that simple approximations will always give the wrong answer”

        Am I anywhere near correct?

      • ….After reading through a couple of thousand old comments on various old posts in the last two days, I would say we are there actually. In fact I recall seeing hardly any comments along the lines of ‘the greenhouse effect doesn’t exist’, but there are plenty questioning what that means in a practical sense. :)

    • In papers you read all the time phrases like :
      “… the perturbed system returns to equilibrium“
      “… non linearities are just noise that cancels out”
      “… after interaction the system stabilises in a new equilibrium state”
      “… climate is not weather and time averages can be deterministically predicted”
      Etc

      Tomas, thanks for this comment; one of your older comments inspired this post of mine about predictability and averaging.

      Up thread a-ways, Fred Moolten says:

      It’s therefore not “daffy”, but an inherent property of the observation that chaotic elements in climate tend to operate on shorter timescales than the more stable forces underlying long term trends, and appear to even out over the timescales of particular interest (e.g., many decades to a few centuries), although how complete this averaging is remains controversial.

      Nope; the climate is a huge multi-scale problem; there is no meaningful “separation of scales” that would support what you are claiming (as is the case in many turbulent, reacting fluid-dynamic systems). This one is not controversial either. Averaging does not make the chaotic predictable (or unchaotic). Lorenz was clear on this fact. Modern climate scientists like James Annan haven’t forgotten it: see page 3, discussion of Figure 2 (though it might be useful to pretend that the chaotic is stochastic for certain tasks like parameter estimation). It’s just the uncritical, activist cheerleaders who seem to have forgotten (or never learned it in the first place).

      • your new post is very very good. I would be most interested in hosting a thread here on your post, let me know if you would like me to do this. Alternatively, i can write a more practical post on ensemble interpretation (and the pointlessness of the ensemble mean) and refer to your article.

      • Sure, I’d appreciate a link when it makes sense as you move the discussion more towards dynamics (I don’t think my notes are worth a whole discussion thread though). For the folks that are interested my little Lorenz63 series of posts starts from setting up a numerical solution method, and proceeds to using the toy to demonstrate various things and gain insight into certain questions. As Pekka points out below, it is just a very simple toy, so the insights we can gain are limited (but I still think useful!).
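
        For anyone who wants to play along at home, here is a minimal, self-contained sketch of the kind of Lorenz63 toy described above (not jstults’ code; standard Lorenz 1963 parameters with RK4 time stepping). It shows tiny initial-condition differences growing to the full width of the attractor – the sense in which the ensemble mean tells you about the attractor’s statistics, not about any individual trajectory:

        ```python
        import numpy as np

        # Lorenz (1963) system with the classic parameter values.
        def rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        def integrate(s, dt=0.01, steps=5000):
            out = np.empty((steps, 3))
            for i in range(steps):
                # classic fourth-order Runge-Kutta step
                k1 = rhs(s)
                k2 = rhs(s + 0.5 * dt * k1)
                k3 = rhs(s + 0.5 * dt * k2)
                k4 = rhs(s + dt * k3)
                s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
                out[i] = s
            return out

        # Tiny ensemble: five trajectories started 1e-8 apart in x.
        runs = np.array([integrate(np.array([1.0 + 1e-8 * i, 1.0, 1.0]))
                         for i in range(5)])
        spread = runs[:, :, 0].std(axis=0)   # ensemble spread in x over time
        print(spread[99], spread[-1])        # tiny early, attractor-wide late
        ```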

      • good. peter webster and i are preparing a post on hazards of averaging, hope to have it up on monday, will link to your post

      • jstults – I can agree with you that in a hypothetical climate on a hypothetical planet in a hypothetical millennium, chaotic elements might overwhelm an ability to discern trends because of their magnitude and/or overlapping timescales. Currently, in this climate and millennium, that is not the case. The most conspicuous internal, unpredictable variations, such as ENSO, AMO, PDO, do average out – probably not totally, but sufficiently for the centennial-long warming trend to be discernible.

        This is not to say that elements outside the long term trend are easy to separate out on shorter timescales, or for that matter that much longer variations (e.g., multimillennial) might not also be hard to disentangle, but on timescales of greatest import to us, the non-chaotic elements can be well characterized, and their trends quantified. An informative method for extracting this trend data from contaminating variations, known or unknown, is described by Wu et al – trends and detrending .

        It is important to recognize chaotic elements in our climate, but equally important, given the empirical evidence, not to mischaracterize the climate system as a whole as chaotic.

      • Thanks for the chuckle Fred; I was wondering why that paper read like a marketing pitch until I noticed that one of the main references for the method was a patent by one of the co-authors.

        It is important to recognize chaotic elements in our climate, but equally important, given the empirical evidence, not to mischaracterize the climate system as a whole as chaotic.

        As Pekka points out below, the more interesting thing is to learn about what we can say about the trajectory of a mixed periodic / stochastic / dynamic system. The periodically forced model begins to explore this by changing parameter values so that the response is chaotic / non-chaotic (notice they don’t suppress chaos by averaging!). As far as “letting the data speak for themselves”: Don’t Bleach Chaotic Data.

      • Jstults – This is a circumstance in which interested readers should review the data to make their own judgments, so as to remain independent of assertions made during these exchanges. I expect that most will agree with you (and me) that a climate system might be in theory hopelessly chaotic, but will also conclude, as I have, that our climate is not. The unpredictable elements (ENSO, AMO, PDO, etc) do in fact average out fairly completely, and data from past centuries suggest that this property is not unique to current times. Regarding the link I provided to trends and detrending, I don’t see the article as invalidated by the holding of a patent by one of the co-authors, but again, readers can judge for themselves. I found the capacity to extract a climate trend from the data persuasive.

        Finally, without revisiting the entire climate change literature for evidence, I would also make a point that perhaps is not adequately appreciated by individuals more familiar with chaos than with climate. The known properties of CO2, solar irradiance, aerosols, water, etc., and their behavior as reflected in the Schwarzschild and Clausius-Clapeyron equations, and as confirmed by satellite, atmospheric and ground-based measurements, tell us that the quantified role of these entities must account for a substantial portion of the observed trends. This does leave some wiggle room for chaotic elements, but the wiggle is limited. The climate harbors chaotic elements, but it is not chaotic.

      • Fred,
        I do not think that one can learn much new using this methodology to find trends. Its objectivity is not real. It makes similar model assumptions to other methods that have been used. The method is based on the assumption that the reality is a trend plus shorter term variations. However these assumptions are made, the results are approximately the same, but it is still based on the assumption that what is given as trend is really a trend and not part of some other fluctuation. No objective model can get around this problem, and the problem is just there.

        The alternative is that there are longer term fluctuations, either regular oscillation or less regular mode changes or whatever. Without better knowledge it is likely that there really are such fluctuations, but we do not know how strong they may be and how much they have influenced the climate of last couple of decennia.

        The scientific approach applies thinking similar to Bayesian statistics. It is unavoidable that we must make subjective judgments on what is possible and what is likely before taking the relevant empirical observations into account. Then we can use every observation to strengthen or weaken our trust in any specific hypothesis. The more empirical data there is, the more weight the observations have in relation to the subjective preconditions, but never can the influence of the preconditions be completely eliminated. At present the preconditions may still dominate the outcome for the climate attribution, and the empirical data only modify it to some extent.
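
        The updating process can be made concrete with a toy two-hypothesis sketch; all numbers below are invented for illustration and are not drawn from any climate dataset. It only shows how the influence of the prior fades as observations accumulate:

        ```python
        # Toy Bayesian update over two hypotheses with invented likelihoods.
        posterior = {"H1": 0.5, "H2": 0.5}   # subjective prior (the preconditions)

        likelihoods = [                      # hypothetical P(obs | H) values
            {"H1": 0.7, "H2": 0.4},
            {"H1": 0.6, "H2": 0.5},
            {"H1": 0.8, "H2": 0.3},
        ]

        for like in likelihoods:
            unnorm = {h: posterior[h] * like[h] for h in posterior}
            total = sum(unnorm.values())
            posterior = {h: p / total for h, p in unnorm.items()}
            print(posterior)
        # With few observations the prior dominates; only as data accumulate
        # does its influence fade.
        ```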

      • Regarding the link I provided to trends and detrending, I don’t see the article as invalidated by the holding of a patent by one of the co-authors…

        Of course you’re right in that, sorry that what I wrote may have suggested that fallacy in the reader’s mind.

        I found the capacity to extract a climate trend from the data persuasive.

        I did not; here’s why (also another thing I got a little chuckle from, so thanks again ; – ), from the intro:

        Determining trend and implementing detrending operations are important steps in data analysis. Yet there is no precise definition of “trend” nor any logical algorithm for extracting it. As a result, various ad hoc extrinsic methods have been used to determine trend and to facilitate a detrending operation.

        Off to a good start; criticism of various ad hockeries is a theme running through much of Jaynes work. “This is promising,” I thought.

        From the methods:

        1. Identify all of the local extrema (the combination of both maxima and minima) and connect all these local maxima (minima) with a cubic spline as the upper (lower) envelope;
        2. Obtain the first component h by taking the difference between the data and the local mean of the two envelopes (the average of the upper and lower envelopes at any temporal location); and
        3. Treat h as the data and repeat steps 1 and 2 as many times as is required until the envelopes are symmetric with respect to zero mean under certain criteria. The final h is designated as cj, the jth IMF.

        Um, yeah, that’s not ad hoc at all…
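
        For concreteness, here is a rough sketch of a single sifting pass (steps 1 and 2 of the quote) on a toy signal, using scipy’s extrema and spline utilities. This is not the authors’ code, and it skips the end-point handling and stopping criterion a real EMD implementation needs:

        ```python
        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        # Toy two-tone signal: slow "trend-like" component plus a fast one.
        t = np.linspace(0.0, 10.0, 2000)
        x = np.sin(2 * np.pi * t) + 0.5 * np.sin(9 * np.pi * t)

        imax = argrelextrema(x, np.greater)[0]    # indices of local maxima
        imin = argrelextrema(x, np.less)[0]       # indices of local minima
        upper = CubicSpline(t[imax], x[imax])(t)  # upper envelope
        lower = CubicSpline(t[imin], x[imin])(t)  # lower envelope

        h = x - 0.5 * (upper + lower)             # candidate IMF after one pass
        print(np.abs(0.5 * (upper + lower)).mean())   # local mean removed
        ```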

        As Pekka points out below (“never can the influence of the preconditions be completely eliminated”, well stated btw), there’s no magic here.

        that a climate system might be in theory hopelessly chaotic, but will also conclude, as I have, that our climate is not

        I’d encourage you to read up more on predictability (I even link some of that lit in the post I mentioned before); it’s an interesting topic. I’d be glad if you added links to parts of the lit you think are relevant to predictability for our climate in the comment section of my little open notebook.

      • None of the classic simple systems of non-linear autonomous ODEs with constant coefficients which exhibit chaotic response can exhibit a trend in the dependent variables as a function of the independent variable.

      • All results from deterministic chaotic models leave unanswered the crucial question: How much of the observed behavior is dependent on the deterministic nature of the model. The real atmosphere is not deterministic, it is stochastic. Models of the atmosphere may be deterministic, but the atmosphere is not. The real atmosphere follows the laws of physics, but the final outcome does not follow from these laws and the initial conditions only, but it is influenced continuously by additional stochastic perturbations. The way these stochastic perturbations enter the process depends on where we draw the system boundaries, but wherever the boundaries are drawn, stochastic perturbations will cross these boundaries.

        This observation has made me always doubt many results of chaos theory. My intuition tells that many of them are really consequences of the deterministic dynamics, not of the properties of the real process described approximately by the deterministic model. In some stochastic models one may end up closer to the behavior of the ensemble mean of a deterministic model than to the behavior of individual trajectories.

        Concerning the predictability of the climate, we do not know. It is possible that the climatic statistics are indeed dominated by boundary conditions on the level we are interested in, but it is also possible that they are not. Fred made the statement that the oscillations are really oscillations that can be averaged out at the time scale of interest, but to make this claim justified we should have a reason to believe that there are no important variations of a longer time scale. We should understand the earth system much better than we do to make such claims.

        Actually we know that the earth system has variations on much larger time scales. The dynamics on the scale of tens of thousands of years is not only external forcing and rapid convergence to new boundary conditions. On the contrary there is definitely a lot of internal dynamics involved. I do not see justification for saying that there would be any time scale between the molecular processes and the lifetime of the earth that would not be strongly influenced by some internal processes that are not understood at the level that would allow for making good predictions.

        It is quite possible that the internal processes are less important at some particular time scale, but I do not see evidence for stating that this would be true for any of the time scales important for understanding the present issue of climate change. The uncertainty affects our decision making in both directions. It makes it more difficult to know whether we really have any serious problem, but it makes it at the same time more difficult to make any definitive statements concerning the risk of reaching a dangerous tipping point by continuing to add CO2 to the atmosphere.

      • Pekka – To some extent, my response is the same as to jstults above. There are theoretical reasons for expecting climate unpredictability, but we know from observation that the unpredictability operates mainly on timescales shorter than the multi-decadal or centennial intervals of particular interest to us. On those timescales, predictions have been shown to perform reasonably well for some variables (global temperature), although less well for regional forecasts, hurricanes, or other climate variables. Indeed, as one proceeds from annual to decadal to multidecadal scales, predictability increases rather than decays with time.

        You raise the important point as to whether even longer timescales might reintroduce substantial unpredictability. The answer is that we don’t know, but the record again indicates that there may not be reason to expect major surprises on a global scale – regional or hemispheric phenomena are probably different in that regard. We do not yet, for example, fully understand orbital forcing, or why the 40,000 year oscillations that dominated in the past have been superseded by 100,000 year domination, but we still have a good idea of what paleoclimatology gives us to expect over the coming thousands of years. We also have an even better idea that if current anthropogenic activities continue, they will modify the future in the same ways they have modified our recent and current climate.

  39. Raymond Pierrehumbert (RP) and his article were discussed on Climate Clash and a topic came up that I am trying to get my head around.
    Tom Vonk (TV) disagreed with RP about the consequences after CO2 absorbs a photon from the Earth surface.
    Both agreed that at atmospheric temperatures nearly all CO2 molecules would be in their ground state (translational) and hence ready to absorb a 15um photon from the surface.
    Agreement also that only 5% of CO2 molecules would have the necessary energy to re-emit a 15um photon at the temperature of the colder atmosphere.
    RP’s position was that thermalisation took place and the photon’s energy was shared out with N2 and O2 molecules almost instantly by collision.
    TV disagreed about the net effect of thermalisation and cited Kirchhoff’s Law of Radiation to argue that the net heating effect did not occur and that a 15um photon was emitted nearby in a random direction to keep Kirchhoff happy.
    RP’s article somewhat confused the picture by agreeing that Kirchhoff’s law had to be complied with.
    So to me there seems to be a problem here.
    They can’t both be right, or perhaps both are wrong.
    Scenario 1.
    One 15um emission per 20 absorptions; result: the atmosphere’s temperature increases slightly, with further emission at longer wavelengths to balance energy. Kirchhoff’s Law apparently not followed.
    Scenario 2 (TV).
    One 15um photon in and one 15um photon out.
    No net heating effect. Kirchhoff’s Law followed.
    Scenario 3 (RP).
    One 15um emission per 20 absorptions; result: the atmosphere’s temperature increases slightly, with further emissions of unknown wavelength to balance energy. Kirchhoff’s Law said to be followed, but there seems to be a problem of how this complies.
    I tend to think that Scenario 1 is most likely.
    I discussed this situation with Pekka Pirilä on an earlier post and he might like to give his views.

    • Bryan,
      I do not know the actual ratios, but I know that the CO2 molecules are not all continuously in the ground state; actually a large fraction of them is at any moment in one of the excited states that may emit at 15 um. The excitation energy of these states is close to the typical energy released in a molecular collision. Thus any CO2 molecule in the ground state before a collision has a fair chance of getting excited by the collision, and a molecule in an excited state has an even larger chance of releasing its energy in the collision.

      The power of radiation from a particular volume of CO2-containing atmosphere is proportional to the number of CO2 molecules, the share of molecules in states that can fall to the ground state by radiation, and the coupling constant that applies to that particular radiation. In the troposphere the huge frequency of collisions with N2 and O2 determines almost fully the number of CO2 molecules in excited states. The share that ends up in these states by absorbing radiation is minuscule in comparison, and such states release their energy almost always by a collision. For every emission and absorption there are very many collisions that lead to similar transitions between the same states.

      The high frequency of the collisions is also the reason for the line broadening. This leads to the Lorentz profile of pressure broadening which is really the broadening caused by frequent collisions.

      Kirchhoff’s Law is related to the fact that the coupling constant between two states of the molecule and a photon works both ways. A strong coupling increases the probabilities both of emission when the excited state goes to the ground state and of absorption when the transition goes in the opposite direction.

      • Pekka Pirilä

        Thanks for the reply.

        ……”In the troposphere the huge frequency of collisions with N2 and O2 determines almost fully the number of CO2 molecules in excited states.”….
        Raymond Pierrehumbert and Tom Vonk both seem to accept 5% excited and hence 95% in the ground (translational) mode.
        I’m not sure from your reply if you agree with them.
        Kirchhoff’s Law in this case seems to be of no use, since 99% of the photon’s energy is with the non-IR-radiative N2 and O2.
        If on the other hand Kirchhoff’s Law is fully complied with, there is no net heating of the atmosphere and Tom Vonk is correct.

      • Bryan,
        Individually, one main vibrational state that corresponds to the 15 um line has an occupation that is approximately 3.5% at the temperature of the lowest atmosphere and 2.5% in the upper troposphere, because the temperature is lower there (the ratio is exp(-E/(kT)), where E is the energy level, k the Boltzmann constant and T the temperature). There are, however, two independent bending directions, which doubles the total ratio. There may be some difference also due to different rotational states, but I would have to dig deeper to say whether this influences the total ratio.
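
        The quoted numbers are easy to check directly from the formula; in the sketch below, 250 K is an assumed upper-troposphere temperature:

        ```python
        import math

        # Checking exp(-E/(kT)) for the CO2 state that radiates near 15 um.
        h = 6.626e-34   # Planck constant, J s
        c = 2.998e8     # speed of light, m/s
        k = 1.381e-23   # Boltzmann constant, J/K

        E = h * c / 15e-6                  # photon energy at 15 um
        for T in (288.0, 250.0):           # near-surface vs assumed upper troposphere
            print(T, round(math.exp(-E / (k * T)), 4))
        # -> about 0.036 and 0.021 per state, consistent with the 3.5% / 2.5%
        #    figures above; the twofold degeneracy noted doubles the totals.
        ```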

      • I haven’t actually read Vonk’s comments, but if he believes that Kirchhoff’s law requires each CO2 molecule that absorbs a photon to emit a photon (on average), he doesn’t understand Kirchhoff’s law. The vast majority of CO2 molecules excited by photon absorption are de-excited by collision rather than photon emission, and the result is almost immediate transfer of the energy to neighboring molecules to create a local heating effect – so-called “thermalization”. The emission of photons comes almost entirely from other CO2 molecules excited by collision rather than photon absorption. There is no violation of any law, and the local heating effect is well understood and, in fact, can be measured under laboratory conditions. It may be that Vonk confuses absorptivity with absorption, and emissivity with emission, but then, since I haven’t read what I states, his confusion may arise elsewhere.

      • Not a Freudian slip, but it should read “I haven’t read what he states”. Except for the typo, I did actually read what I stated before posting the comment.

      • To elaborate further on what might be an additional claim regarding absorption/emission balance – in an atmospheric layer of infinite thinness (zero height), all emitted photons will leave the layer because by definition, internal absorption of a photon emitted within the layer is infinitely unlikely. In a real layer, some internal absorptions must occur, and their frequency increases with the concentration of CO2 or other absorbers. These internal absorptions create a heating effect via the absorbing molecules that is not balanced by a cooling effect from the emitting molecules. The result is net heating. As the vertical height of the entire atmospheric column where absorbers reside is very great, internal absorptions overall play a very prominent role in GHG-mediated warming.

      • But when N2 and O2 etc., heated by convection, collide with CO2 why wouldn’t there be a net cooling when it triggers emission? It really is quite simple Fred.

      • Well, if I had realized it was that simple, I shouldn’t have spent so much time commenting on it, should I?

        Incidentally, convection is not a heating mechanism but a heat transfer mechanism.

    • Tom Vonk is constantly trotting this out and is wrong both in theory and in observation, but no amount of discussion will change his mind and it’s a total waste of time even trying. Also, Kirchhoff’s Law says that absorptivity is equal to emissivity, not that emission = absorption.

  40. No one has learned anything here, or from Pierrehumbert’s recapitulation of “The Radiative Transfer Theory,” in capitals to communicate its divine status. The theory ignores convection; it ignores the ideal gas law and the gravity that compresses the atmosphere, and increases the temperature, as a monotonic function (increase) of depth (Pierrehumbert makes the insane claim that “An atmospheric greenhouse gas enables a planet to radiate at a temperature lower than the ground’s” — NO, the thermodynamic lapse rate, depending only on gravitational g and the atmospheric specific heat does that); it ignores the Venus/Earth data that proves there is NO greenhouse/Tyndall effect whatsoever, on either planet. It assumes that anything with a temperature is a blackbody (absorptivity=emissivity), including the surface of the Earth (obscene misunderstanding of basic physics). Pierrehumbert assumes no scattering of IR radiation, because he does not even have an understanding that absorption and re-emission does just that; heat diffusion is beyond his The Radiative Transfer Theory, as his cartoon Figure 1 well demonstrates. Which is less correct: “Half of the radiation is directed south by southeast, and half north by northwest” (see Shakespeare aficionados for the meaning of the latter direction), or “Half is directed back to the surface…”? And having figured that out, what is the effect of radiation from a cooler body on a warmer body? No, to both the Greenhouse Effect and to The Radiative Transfer Theory, as it is applied in climate “science”. It is sheer idiocy on the part of physics today, and of Physics Today as well. Incompetence to the nth power.

    • Pierrehumbert makes the insane claim that “An atmospheric greenhouse gas enables a planet to radiate at a temperature lower than the ground’s”

      I don’t agree that’s an insane claim. An atmospheric greenhouse gas(es) does indeed make it possible for the surface temperature to be higher than the planet’s effective radiating temperature, however the greenhouse gases are not necessarily responsible for the increased surface temperature. As you say, gravity etc all have a part to play.

  41. Judith says:
    “I’m really wondering if we can get past exchanging hot air with each other on blogs, learn something, and collectively accomplish something? If you have learned something or changed your mind as a result of the discussion here, pls let us know.”
    Yes I think so. Discussions here over the last 6 months have solidified my understanding of RT physics, specifically the heat transfer mechanisms between the oceans and the atmosphere, which were a sticking point for me and fundamental to the concept of AGW.

    Where can we go from here? Over the last 100 years we have added a thick 100 ppm CO2 blanket to our atmosphere and this must cause long term warming at some level. Even though current observation suggests a low sensitivity, temps will surely increase in an unnatural (manmade) way for a very long time. Finding ways to unravel this blanket without returning to the use of stone axes, drastically disturbing quality of life, or wiping out 1/2 the population in the process seems like a good place to move discussion.
    Some ideas:
    *Restart proven technologies that generate enormous amounts of energy while leaving a very small CO2 footprint (nuclear).

    *Reforestation and low impact agriculture techniques.

    *Continue to improve efficiency to reduce energy needs (electric motor technology, batteries, solar LED lighting).

    * Vast public subsidy of alternative energy sources that have not proven to be economically viable should be avoided as a misuse of available resources (the wind farms of Europe that generate little or no energy, and solar panels in use above 40° latitude, as examples).

    I am sure there are many more good ideas that would move us away from fossil fuels without enriching or empowering third world tyrannical dictators.

  42. After more than 100 comments so far, I have the sense that Dr. Curry’s expectations for the thread are being met to an extent that may be less than she hopes, but more than is typical for blogs on climate topics. She has cited Pierrehumbert’s Physics Today article as an excellent description of radiative transfer, and it appears as though the majority here, whether or not in accord with every point Pierrehumbert makes, agree in general that this aspect of geophysics is well understood. There are others who dispute this, and a few others still who choose to introduce extraneous topics to the thread, but by and large, the level of agreement is a good omen for the prospect of moving toward more challenging issues in climate science.

    My own perspective is that Pierrehumbert has provided a cogent and accurate perspective on the topic. More important, he (and others elsewhere) are correct in pointing out that much of the basic geophysics (and particularly radiative transfer) operate as expected not only in the laboratory and in models but also in the Earth’s climate system, as confirmed by observational monitoring. Despite climate complexity, expected trends are discernible. Despite the potential for chaotic elements to overwhelm the predictable ones, they don’t (or at least, they interfere at a level amenable to appropriate adjustments). Despite the impossibility of meaningful averaging of global values such as temperature, averaging is neither necessary nor utilized, but rather the mean values and interactions among grid anomalies serve as good estimators of climate behavior, and so on. As pointed out by many, numerous uncertainties still preclude precise estimates of climate sensitivity and other relevant variables, and all these points, of course, remain items for further exchange of views, supported, I would hope, by empirical data rather than exclusively by arguments of a purely theoretical nature.

    Above, Hr asked whether Pierrehumbert’s article provided new information. It breaks no new ground, but I wouldn’t be surprised if each of us picked up something we had not earlier known. In my case, two items of interest stood out.

    The first was the very large divergence between the mean excited-state lifetime for a CO2 molecule to emit a photon and the much shorter intercollision intervals. Although aware of the divergence, I had not appreciated its magnitude (up to 100s of milliseconds for unperturbed lifetime vs less than 10^-7 seconds for collision). Indeed, it is only the very rare CO2 molecule that gets to emit a photon, but CO2 molecules are abundant enough for this process to yield very frequent emissions. Most of the energy, however, goes into local heating of the surrounding air, a process that is extremely efficient despite the low CO2 concentration in the atmosphere (currently about 390 ppm).
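    A rough kinetic-theory check of those numbers (an illustrative sketch; the cross-section, lifetime, and temperature below are assumed round values, not figures from the article):

```python
# Back-of-envelope check (illustrative): mean time between molecular
# collisions at the surface vs the unperturbed CO2 radiative lifetime.
import math

k = 1.380649e-23            # Boltzmann constant, J/K
T = 288.0                   # near-surface temperature, K (assumed)
p = 101325.0                # surface pressure, Pa
m_air = 29.0 * 1.6605e-27   # mean mass of an air molecule, kg
sigma = 4.0e-19             # assumed kinetic collision cross-section, m^2

n = p / (k * T)                                      # number density, m^-3
v_mean = math.sqrt(8.0 * k * T / (math.pi * m_air))  # mean speed, m/s
t_coll = 1.0 / (n * sigma * v_mean)                  # time between collisions, s

t_rad = 0.1   # assumed unperturbed emission lifetime, s (order 100 ms)

print(f"time between collisions ~ {t_coll:.1e} s")
print(f"collisions per radiative lifetime ~ {t_rad / t_coll:.1e}")
# ~2e-10 s between collisions, so hundreds of millions of collisions before
# a typical excited CO2 molecule would radiate: thermalization dominates.
```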

    The second was the intriguing upward “spike” in the intensity of IR radiance found by both models and observations in the middle of the IR “ditch” in radiance measured at the top of the atmosphere over the main CO2 absorption spectrum region centered at wavenumber 667. The ditch reflects the reduced radiance attributable to the cold altitude at which the IR is emitted – an altitude necessary for the atmosphere to be IR transparent enough (contain few enough CO2 and water molecules) to permit adequate IR escape. The spike, however, is attributable to the fact that at the most opaque wavelength, the required altitude is even higher – in the stratosphere. Because temperature rises with altitude in the stratosphere, the escape altitude is warmer than below, and the IR emission is correspondingly higher.
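    The size of the spike follows directly from the Planck function evaluated at the two emission temperatures. A toy sketch, with illustrative guesses for the two levels:

```python
# Toy illustration of the spike: radiance tracks the Planck function at the
# emission altitude, and the stratosphere is warmer than the tropopause.
# (Temperatures below are illustrative guesses, not observed values.)
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_per_wavenumber(nu_cm, T):
    """Planck radiance per unit wavenumber, W m^-2 sr^-1 (cm^-1)^-1."""
    nu = nu_cm * 100.0  # cm^-1 -> m^-1
    return 100.0 * 2.0 * h * c**2 * nu**3 / math.expm1(h * c * nu / (k * T))

for label, T in (("cold wing emission level", 215.0),
                 ("warmer stratospheric level", 245.0)):
    print(f"B(667 cm^-1) at {label} ({T:.0f} K): "
          f"{planck_per_wavenumber(667.0, T):.2e}")
# The warmer stratospheric level emits more at band center, producing the
# upward spike inside the CO2 "ditch".
```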

    The spike has interesting implications, some relevant to stratospheric ozone. The stratospheric temperature inversion – warming with altitude – reflects the ability of ozone, generated at high stratospheric levels, to absorb solar UV radiation and thus warm the surrounding air. Ozone depletion due to CFCs was responsible for stratospheric cooling in earlier decades, and the restoration of ozone through the Montreal Protocol has restored some of the warming. Ozone, however, also plays a role in stratospheric cooling consequent to increasing CO2 concentrations. Because IR emission by CO2 becomes more rather than less efficient with altitude in the stratosphere, due to the warmer temperatures, increasing CO2, and thus a warmer escape altitude, permits CO2 to cool more than would the same concentration at lower altitudes. The result is stratospheric cooling – the opposite of the tropospheric warming from increased CO2 in the troposphere. Recently, stratospheric temperatures have been fairly balanced between the warming from ozone repletion and the cooling from CO2 increases, but each of these phenomena can be discerned individually because of the different altitudes at which they maximize.

    Other interesting fine points also appear in the Pierrehumbert article, but the above were ones that interested me particularly.

    • For those interested in CO2-mediated stratospheric cooling in terms of the ability of CO2 to absorb and emit photons, the relevant phenomenon is the dependence of atmospheric absorptivity in the stratosphere on ozone-mediated UV absorption (CO2 has almost no UV absorbing capacity), and in contrast, the importance at stratospheric temperatures, for stratospheric emissivity to be enhanced by the high emissivity of CO2 in the radiated wavelengths. Increasing CO2 therefore increases emissivity more than absorptivity – hence the cooling. It can be thought of as a “safety valve” for ozone-mediated warming.
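      A minimal gray-layer toy can make this concrete. It is my own construction, not Fred’s or Pierrehumbert’s, and every number in it is an assumption:

```python
# Minimal gray-layer toy: a layer heated by ozone UV absorption H
# (independent of CO2) and by IR absorption eps*F from below, cooling as
# 2*eps*sigma*T^4. Raising eps (more CO2) scales the cooling but not H.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
F = 240.0         # assumed upwelling IR reaching the layer, W/m^2
H = 10.0          # assumed ozone UV heating, W/m^2

for eps in (0.05, 0.10, 0.20):
    T = ((H / eps + F) / (2.0 * sigma)) ** 0.25
    print(f"eps = {eps:.2f} -> equilibrium T = {T:.1f} K")
# More CO2 (larger eps) lowers the equilibrium temperature: emissivity in
# the IR rises while the UV heating that sets the budget does not.
```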

      • And this is shown by the flat Strat temps for the last 15 years??

      • Yes, as I explained above, regarding the balance between warming due to ozone repletion (since 1995) and CO2-mediated cooling, with the data showing each operating at a different altitude.

      • There is no difference in the ozone column values since 1995, so you need to explain why there is a temperature difference.

      • Maksimovich – After declining for more than a decade, ozone began to recover in 1995, with a slow upward trend as of 2005 –
        Ozone Depletion and Recovery

        Starting at the same time, stratospheric temperatures, which had been declining significantly, flattened out. However, they did not rise despite the increase in ozone –

        Stratospheric Temperature Trends (note that the final article appeared in JGR in 2009 but is behind a paywall).

        The failure of stratospheric temperatures to rise in concert with the rising ozone concentrations is an expected consequence of the cooling role of increased stratospheric CO2.

      • Fred,
        based on the RSU temp series I would say you mischaracterized the strat temps. They were stable, volcano step decrease, stable, volcano step decrease, stable till now.

        Maybe you can point me to the data that shows they were declining before the early 1980’s and early 1990’s volcanoes that appear to have dropped them??

      • Volcanoes cause only temporary cooling, followed by a return of temperatures to a warmer level unless there is already a downward trend. Prior to 1995, ozone was declining, and the cooling was compatible with a combination of decreasing ozone and increasing CO2. Around 1995, ozone started to turn upward, and the temperatures stabilized but did not trend upward in line with the ozone. This disparity between ozone and temperature is not probative, of course, but it is supportive of the expected cooling role of increased CO2.

      • Fred, stop waving at me. Please, a data source.

      • One good source is the JGR article linked to above – see the various figures (e.g., Figure 5). The data interval covered is informative. Even massive volcanic eruptions in the 20th century have cooled for 2-3 years at most, so volcanism doesn’t account for the temperature declines up until about 1995.

      • I should have added that volcanic eruptions cool the troposphere, but their main direct effect on the stratosphere is warming due to their ability to absorb solar radiation. A cooling effect due to their ozone-depleting capacity is of lesser magnitude.

      • Fred, you strike out again. The data in the paper shows a cooling trend without volcanoes during 58-75 WHEN THE TROPOSPHERE WAS ALSO COOLING!!! During the satellite era the only noticeable perturbations are the 2 volcanoes during what is claimed as exceptional warming. I would also point out that ozone was allegedly dropping since before the 1970’s!! Your data is NOT coherent for what you are claiming.

        Sheesh.

      • There comes a time during exchanges of this type when it makes more sense for readers to review the material (including linked sources) to make their own judgments rather than for the participants to continue to propound arguments. I would therefore commend these recent exchanges to the attention of interested readers, and will be content with their judgment.

      • Sorry, there is no upward trend in the total ozone column since 1995, e.g. UNEP 2010:

        Average total ozone values in 2006–2009 remain at the same level as the previous Assessment, at roughly 3.5% and 2.5% below the 1964–1980 averages respectively for 90°S–90°N and 60°S–60°N. Midlatitude (35°–60°) annual mean total column ozone amounts in the Southern Hemisphere [Northern Hemisphere] over the period 2006–2009 have remained at the same level as observed during 1996–2005, at ~6% [~3.5%] below the 1964–1980 average.

      • Please see the links I cited for the data from 1995 through 2005. These show a flat temperature during that interval with a rising ozone concentration. I don’t have 2006-2009 stratospheric temperature data, but it’s a much shorter interval, and unless the temperature has risen significantly in those three years, the basic conclusions shouldn’t be affected.

      • Your misunderstanding is failing to account for the solar cycle (Hale) that exists in the ozonosphere. This is well understood. The expert assessment review is now available at the UNEP and includes data through to 2010.

        There is no trend post 1995, statistical or otherwise, in the total ozone column. The increases/decreases in the upper stratosphere and mesosphere are consistent with GCR modulation at solar minimum and the 27-day and 11-year cycles, i.e., GHG is indistinguishable at these levels of natural variation.

        The findings are quite succinct

        New analyses of both satellite and radiosonde data give increased confidence in changes in stratospheric temperatures between 1980 and 2009. The global-mean lower stratosphere cooled by 1–2 K and the upper stratosphere cooled by 4–6 K between 1980 and 1995. There have been no significant long-term trends in global-mean lower stratospheric temperatures since about 1995. The global-mean lower-stratospheric cooling did not occur linearly but was manifested as downward steps in temperature in the early 1980s and the early 1990s. The cooling of the lower stratosphere includes the tropics and is not limited to extratropical regions as previously thought.

        The evolution of lower stratospheric temperature is influenced by a combination of natural and human factors that has varied over time. Ozone decreases dominate the lower stratospheric cooling since 1980. Major volcanic eruptions and solar activity have clear shorter-term effects. Models that consider all of these factors are able to reproduce this temperature time history.

      • We appear to agree that temperatures have been flat since 1995. The link I provided showed that ozone started to increase since then. This disparity is consistent with CO2-mediated stratospheric cooling as an offset to ozone-mediated warming.

        You state that the ozone increase is not statistically significant. The UNEP site you refer to did not show a graph of ozone change between 2006 and 2010, but I expect that as you state, the upward trend since 1995 may not be statistically significant. This, however, does not contradict the conclusion, based on the evidence, that CO2 and ozone are cancelling each other out.

        As the UNEP site states, “The global middle and upper stratosphere are expected to cool in the coming century, mainly due to CO2 increases. Stratospheric ozone recovery will slightly offset the cooling.”

    • Thanks, Fred. Actually the first point you mention had seemed to me qualitatively obvious, as otherwise there’s no way for the radiative gases to warm the adjacent non-radiative gases. But as you point out, it’s nice to have some numbers.

    • Fred Moolten,

      Your comments are the most valuable and informative on this thread (apologies to everybody else). I appreciate three things: the way your remarks stay on-topic, the relevant technical information that you convey, the good-humored persona that you assume here, and your willingness to amend your statements on consideration of new information.

      That’s four things (I was wrong).

      Also (FWIW), I concur with the assessment that you offered as the lead paragraph of this comment (January 20, 2011 at 11:53 am), with respect to Dr Curry’s original question, “can we collectively accomplish something?”

  43. Here’s an interesting old quote on global warming from Herman Kahn, a top think-tank consultant from the sixties and seventies.

    At a time when Paul Ehrlich, the Club of Rome et al were predicting that billions would starve, shortages would abound, and the oceans would die — all within a few decades — Kahn was forecasting a coming boom, and the general likelihood that humanity would resolve the challenges posed by our environmental and technological growth to arrive at a steady, sustainable population with a much higher standard of living.

    The greatest threat Kahn saw for America was that we would be strangled into collapse by the “educated incapacity” of our liberal class.

    IMO Kahn is the only futurist from that period worth re-reading on the merits these days. It is useful in a cautionary way to re-read Paul Ehrlich and the Club of Rome.

    The atmosphere’s carbon dioxide content, however, will remain a closely watched potential threat for some time. Since roughly 1850, a rapidly increasing use of fossil carbons in the combustion of petroleum, natural gas and coal has led to a steadily increasing concentration of carbon dioxide (CO2) in the atmosphere. This increase in CO2 conceivably has one main effect—the trapping of long-wave infrared radiation from the surface of the earth, which tends to increase the temperature of the atmosphere. The observed increase in average global temperature of about 2 degrees Fahrenheit between 1850 and 1940 was attributed to this cause. However, since 1940 the average global temperature has decreased about half a degree Fahrenheit—a decrease attributed to the presence of soot in the atmosphere from industrial emissions, even though these emissions have been falling in recent years. It is very likely that the CO2 concentration of the atmosphere will increase about 15 percent by the year 2000. This might cause an increase in the average global temperature of 1 to 2 degrees Fahrenheit. It has been calculated that a doubling of the CO2 content would lead to an increase in average temperature of 4 to 6 degrees Fahrenheit, but it seems unlikely now that the carbon dioxide content will ever double unless mankind wants it to happen. Consequences of such increases in concentration cannot be reliably predicted today; in 50 years or so, such predictions might be routine. In any event, a carbon dioxide catastrophe does not appear to be imminent.

    — Herman Kahn, “The Next 200 Years” (1976)

    • Very helpful, hunter. I’d not heard of Kahn.

      The thing that’s most bothered me in the last month has been the influence Ehrlich and the Club of Rome types had on international institutions responding to the environmental critique of DDT from the 1960s. This is spelled out clearly, without histrionics, in a new book called “The Excellent Powder” which I heartily recommend. The science against indoor spraying of DDT was a crock (from what I can see). But the doomsters influenced people at a high level to take the view that saving too many lives from malaria was going to be bad for the planet and effected what amounted to a ban, causing tens of millions to die without cause, mostly the children of the very poor.

      The influence of the population doomsters seems to me to be established beyond doubt by Roberts and Tren in The Excellent Powder. It’s a human tragedy – a real travesty, one that deserves much more rigorous treatment as we consider many of the same issues with AGW. This book is a very good start.

      Sorry, though, that this is well off target for this topic, Judith.

      • The hat tip should go to huxley.
        I used to follow Kahn, but was not aware of how he foresaw CO2 and the unimportant role it would play in reality.
        Too bad he missed out on how illiberal reactionaries and fearful ignorance would intersect to create the social mania we have today.

      • Richard Drake: Glad you like the Kahn piece. I put it up because I thought his take on CO2 from 1976 was straightforward and refreshing.

        Kahn didn’t question the greenhouse effect and he acknowledged that CO2 was a legitimate threat worth watching, but he didn’t head off into the-sky-is-falling territory.

        Btw, I’m “huxley”.

      • ‘pologies huxley and hunter – but thanks hux. Yep, impressive balance from Kahn on CO2 in 76. When I grow up I’d like to demonstrate similar levels of foresight and judgement.

    • Richard D. & hunter: Kahn was amazing, though he’s largely been forgotten and sadly little of his work is available on the web.

      Last summer I ran across the transcript of a talk Kahn gave in 1976 to Gov. Jerry Brown and his staff (the first time Brown was governor of California) that was stunningly prescient for today. Kahn spoke of the “New Class”:

      Think of a group of people who come from upper middle class backgrounds, from families who are largely education-oriented, so they see that the children go to the good schools and who, after they get out of the schools, earn their living by the use of academic skills, language skills, esthetic skills, analytical skills. They don’t earn their living by being entrepreneurs, businessmen, engineers, laborers, clerical workers.

      Kahn saw the New Class’s growing power, its animosity towards “square” Americans, and its “educated incapacity” that blinded it to thinking outside its own intellectual box as a much more serious threat to America than all the environmental crises that the New Class then, as now, was raising alarums about. Kahn was bang-on in predicting the current disconnect we see between Obama blue Americans and red Americans.

      Here’s a Kahn quote I did find on the web that explains “educated incapacity” and, I would say, bears directly on the current climate controversies:

      I often use the phrase [educated incapacity] to describe the limitations of the expert—or even of just the “well educated.” The more expert—or at least the more educated—a person is, the less likely that person is to see a solution when it is not within the framework in which he or she was taught to think. When a possibility comes up that is ruled out by the accepted framework, an expert—or well-educated individual—is often less likely to see it than an amateur without the confining framework. For example, one naturally prefers to consult a trained doctor than an untrained person about matters of health. But if a new cure happens to be developed that is at variance with accepted concepts, the medical profession is often the last to accept it. This problem has always existed in all professions, but it tends to be accentuated under modern conditions.

      http://www.hudson.org/index.cfm?fuseaction=publication_details&id=2219

      Kahn was a highly educated, physics-trained polymath who admitted that he was New Class himself. He had no problem with global warming as a reality and as a potential threat, but he was steadfast in countering the New Class’s obsession with risk and its hostility towards ordinary Americans.

      • I like to think my intuition isn’t bad. What you’ve spelled out in a second post on Kahn exactly corresponds with what came into my head when I first heard the phrase “educated incapacity”. This is really important stuff. Thanks for drawing my attention to it.

        (The other example that springs to mind is when I asked a friend in 1998 what the big stories had been at OOPSLA in the States, from which he’d just returned. He mentioned two things, the second being “extreme programming”. I’d never heard the term before but I knew at once what it meant. At last was fulfilled what I’d been waiting to happen in software engineering since 1986 or even earlier. So it proved. It’s that Blink stuff. But I now really, really digress.)

      • Richard D.: You’re welcome!

        I’m a programmer and, except for paired programming, extreme programming made sense to me too.

      • Ha, you intuited the one practice I didn’t foresee. There are pros and cons to co-coding – but I don’t want to argue!

  44. The following summarizes what I think was a “semi consensus” of opinion from the earlier threads on this site regarding IR absorption. It does not answer questions regarding potential other factors effecting climate.
    In the case of IR absorption, very little of the science requires much more than fairly basic physics. On that basis, we don’t need to take climate scientists’ word for it; we can work out the whole thing from scratch. If you do this based on the official IPCC story, then except for minor details that could be nitpicked, you will find the whole thing is pretty much as the science said it was.
    One thing that seems to have stumped climate science is why the number for climate sensitivity is all over the map. IMO there are two basic reasons.
    Climate sensitivity depends on whether you calculate it from:
    1. first principles the way some people like to, or
    2. from extrapolation of how the Earth’s surface temperature has actually responded, which is what a few climate scientists (not enough in my view) call “observed climate sensitivity.”
    The advantage of the latter is that it takes into account all the contributory factors.
    Trying to simulate the whole planet even approximately over a period of decades, even using the most massive digital computers on the planet, is an exercise in group wishful thinking. There are just too many important factors we don’t fully understand yet, for example the rate of heat uptake by and return from the deep ocean, the amount of extra cooling generated by evaporation of rain while it falls, etc. etc.

    • Rob – I’m not aware of attempts to calculate climate sensitivity from actual “first principles”, but you’re right in asserting that some methods are based mainly on modeling the combination of individual forcings and feedbacks to yield a composite estimate of sensitivity, whereas others are more empirically based, relying on historical data for temperature responses to CO2 or other variables.

      The former approach, for example, starts with well established values for CO2 forcing, adds estimates for water vapor/lapse rate feedbacks (with some observational confirmation via satellite measurements), factors in ice-albedo feedbacks (again, with observational constraints), attempts to estimate cloud feedbacks despite their uncertainties, and then models the interaction of all these independent entities to yield a sensitivity estimate.

      The latter approach is simpler in theory – it reasons, “If we know the forcings and the temperature response, we can divide the second by the first to arrive at an accurate climate sensitivity value.”

      The problem with the second approach is that it is rarely possible to have a clearly reliable and accurate estimate of all forcings, and often even the responses are somewhat uncertain. An example is the Last Glacial Maximum (LGM), which is a good starting point because the interval is long enough to permit estimations of equilibrium responses – part of the definition of climate sensitivity. Reasonable data exist on temperature, albedo, and CO2, but other forcings such as aerosols are less well quantified, and even the time correspondence between forcings and changes in temperature harbors some uncertainties. At times, attempts to avoid some of the difficulties involve a “reverse approach”, in which different climate sensitivity values are plugged into a model to determine which range of sensitivities is most compatible with historical data on temperature, CO2, and other known elements of the system.

      All of the above are clearly beset with limitations. In concert, they all tend to yield sensitivity values within the canonical 2-4.5 C range per CO2 doubling, but with some outliers.

      A further problem comes from the assumption that climate sensitivity computed for one particular forcing applies equally to others. This assumption may be reasonable for different atmospheric forcings operating over long intervals (CO2, other GHGs, persistent aerosol concentrations, etc.), but is probably unreliable when attempts are made to extrapolate from short term changes originating regionally, for example in the ocean (ENSO events) to long term global sensitivities to atmospheric perturbations such as CO2 increases. Examples of unreliable extrapolation include recent studies based on ENSO-mediated temperature changes, reported by Lindzen/Choi, Spencer/Braswell, and Dessler.

      It is not surprising that the 2-4.5 range of 95% confidence limits for sensitivity to CO2 doubling has not narrowed appreciably in recent years.

      For an excellent description and references related to some of the above, IPCC AR4 WG1 Chapters 8 and 9 are very useful – WG1 Chapters
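      To make the second, empirical approach concrete, here is a sketch of the bookkeeping with illustrative LGM-style numbers (both inputs are assumptions, and real studies carry large uncertainties):

```python
# Sketch of the "observed sensitivity" bookkeeping described above.
# All numbers are illustrative; real studies carry large uncertainties.
F_2X = 3.7   # canonical forcing per CO2 doubling, W/m^2

def sensitivity_per_doubling(delta_T, delta_F):
    """Scale an observed response/forcing pair to a CO2 doubling."""
    return delta_T / delta_F * F_2X

# Rough LGM-style example: ~5 K cooler under ~8 W/m^2 less forcing
# (ice albedo, lower CO2, aerosols); both inputs are assumptions.
print(f"S ~ {sensitivity_per_doubling(-5.0, -8.0):.1f} K per doubling")
# ~2.3 K, inside the canonical 2-4.5 K range; the hard part in practice
# is that delta_F is never known this cleanly.
```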

  45. Wrong thread, CJ. I agree with a lot of what you say but that doesn’t make it right on this thread, sorry.

  46. Dr C
    What is the role of the greenhouse effect on the weather?

    I am looking for a more Trenberthian answer. I know the usual “it will result in warmer nights and increase the average minimums” kind of thing, but that doesn’t speak to what the ‘extra CO2 is going to do’ question.

    The greenhouse effect alone would raise the surface temperature to a greater degree than what is observed. This is because weather occurs at the surface, distributing the heat and taking it up and getting rid of it, etc. In order to say what the climatic effects of increased CO2 are, we should be able to confidently say what its effects on the weather are.

    Therefore the question arises – what is the effect of CO2 on weather? ‘Natural variability’ is a nonsensical fig leaf because CO2 is well mixed in the atmosphere, present everywhere, and acts via radiative transfer mechanisms which are effectively instantaneous. Therefore CO2 should have a measurable effect on the weather, and such an effect should be identifiable independently (hopefully by clever instruments or experiments).

  47. “On this thread, i’m hoping to declare the basic physics of ir absorption/emission pretty much a closed subject, so that we can move on”

    Such a declaration would be a sure indication that it needs further scrutiny.

    Andrew

  48. Judith,

    I’ve made some progress with people on this topic in blogland myself. Not as much as I would like, but some for sure. There is no question as to whether the basic effects are real; there is only a question of magnitude, and the people who grasp that point are far more effective in making their case on the real issues.

    Don’t give up on it, some see the ridiculous outcomes people call ‘solutions’ and will never be convinced, however, the light bulb does go off for those who can understand the certainty of the basic physics.

  49. Hamish McDougal

    Dr Curry
    The posts & comments here and the posts at SoD have been invaluable in refreshing my early 60’s vintage undergraduate physics and post-graduate PhysChem. One accepts the basic science. It’s all non-contentious to anyone understanding the physics, the thermodynamics, the math. The effects of feedback (and even the sign thereof!) seem less well, even poorly, understood, particularly once clouds enter the reckoning. Everything thereafter is built on a shaky foundation. The establishment (the ‘believers’) seems to have been taken by surprise by oceanic influences, by weather cycles, and, it seems (one predicts), the sun.
    Once politics (or post-normality) enters the scene, activist scientists (I surely need not name names) seem to become prominent. They seem to arrogate to themselves data ‘custodianship’, responsibility for ‘homogenizing’ (‘correcting’?, ‘adjusting’? …. probably harsh to go further) the data. They inevitably, it seems, become evangelizers.
    The temperature record is, at best, suspect, even contaminated. IPCC ARs become political tracts (at least the ‘Summar[ies] for Policymakers’). Highly contentious, extreme views are put forth by the icons of the ‘movement’.
    The average, scientifically literate Joe (that’s me) says ‘Whoa!’. Even stronger is his unease when urgency is manufactured, when emotional pressure is applied (those polar bears! that little girl!), when financial interests become so blatant and corruption becomes so obvious.
    Try this:
    (H/T Michael Hammer, June 27th, 2009)
    “The corrected data from NOAA has been used as evidence of anthropogenic global warming yet it would appear that the rising trend over the 20th century is largely if not entirely an artefact arising from the “corrections” applied to the experimental data, at least in the US, and is not visible in the uncorrected experimental data record.
    “This is an extremely serious issue. It is completely unacceptable, and scientifically meaningless, to claim experimental confirmation of a theory when the confirmation arises from the “corrections” to the raw data rather than from the raw data itself. This is even more the case if the organization carrying out the corrections has published material indicating that it supports the theory under discussion. In any other branch of science that would be treated with profound scepticism if not indeed rejected outright. I believe the same standards should be applied in this case.”
    http://jennifermarohasy.com/blog/2009/06/how-the-us-temperature-record-is-adjusted/
    You hope for progress? Good luck!

  50. Judith,

    I took a quick look at the Jan Physics Today article, loosely read about the first half due to time constraints and work load. I didn’t have enough time to see what the ramifications were of not going with the Eddington approximation etc. My own version of this is not for climatology but I do have a simple atmospheric 1-d model based on Hitran.

    Raymond talks about a continuum in association with Kirchhoff’s law. My understanding is that one really gets continuums with optically thick slabs and with particles and dimers. Consequently a cloud droplet produces not only scattering but also absorption and emission in a continuum.

    My interpretation of Kirchhoff’s law is that it is valid as a function of wavelength. Note, I use wavelength, not /cm frequency, and I don’t work well upside down. With LTE, which should be valid for most of the stratosphere, if not all, emission becomes a function of the energy states as described by the Planck distribution and of the absorption spectrum of the slab. That means a slab neither absorbs nor emits net radiation if it is at the same temperature as the earlier one and the radiation is at the same BB temperature.

    Also, with the geometry of the situation near the boundaries that start to be transparent, one has emission upward and downward from the slab but only absorption from below, and conservation of energy then requires a reduction in T if additional absorbing greenhouse gases are added.

    Essentially, I found nothing surprising or controversial in the first half of the paper. I also found it marginally interesting and relevant.

    Such models are relevant mostly for clear skies and tend to lose meaning in the presence of clouds. Clouds bring in true continuum absorption and emission and, per K&T 97 (Kiehl & Trenberth 1997), take up nearly 62% of the coverage of the Earth with practically total optical absorption and/or reflection, depending upon SW or LW as defined by their 3-layer model.

    According to my 1-d model, around 120 w/m^2 of absorption occurs using the 1976 US std atm model. I think KT97 claimed around 125 w/m^2. Simply taking the average surface T and Stefan’s law gives 391 w/m^2 of emission, and for a radiative balance, there cannot be more than around 239 w/m^2 of power escaping Earth. That leaves about 152 w/m^2 of total average blocking which is really occurring. I think KT97 used the number 155 w/m^2. The difference must turn out to be due substantially to cloud cover and other atmospheric effects. Roughly 25 w/m^2 is the cloud blocking contribution for the roughly 62% cloud cover fraction.
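    For anyone who wants to replay that arithmetic, a minimal sketch using the same numbers:

```python
# Replaying the budget arithmetic above with the same numbers.
sigma = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_surf = 288.2      # assumed global mean surface temperature, K

surface_emission = sigma * T_surf**4      # Stefan's law, ~391 W/m^2
outgoing = 239.0                          # OLR required for balance, W/m^2
blocking = surface_emission - outgoing    # ~152 W/m^2
clear_sky = 120.0                         # the 1-D HITRAN model's absorption

print(f"surface emission ~ {surface_emission:.0f} W/m^2")
print(f"total blocking   ~ {blocking:.0f} W/m^2")
print(f"left for clouds and other effects ~ {blocking - clear_sky:.0f} W/m^2")
```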

    We’ve now pretty much gone past what simple 1-d and radiative transfer modeling can do for us. What’s more, we haven’t even started with what can be done by this simple averages approach.

  51. (btw, I am not RobB.) I’m new to Curry’s blog and so far find it quite good. I have some questions and doubts in the area of radiation transfer. “Well understood” is a relative judgment, and I would not refer to it as “robust” in the common scientific meaning of the word. Though in some contexts and at some levels, both terms, I would agree, are accurate.

    I need to get back home to my information and need to read through the radiative transfer threads here and SoD (which I deduce means Science of Doom???) to do it right. But I’ll throw out a couple of short examples.

    While I think the mathematics is probably pretty close, the forcing formula for CO2 is a tad short on physics theory and observational verification: How was the coefficient (currently 5.43 IIRC) determined? Based on what physics? What is the physics that supports if, when, and why forcing goes from linear to logarithmic to something else? Will it stay logarithmic and 5.43 at 400ppm? 800ppm? 1200ppm? How do they know?

    The saturation question is usually answered with, as the concentration (pressure) increases the absorption line broadens. Yet if one does the math, the half-width of the “line” broadens almost infinitesimally even with pressure broadening. And the math and physics of pressure broadening is viewed by many (including 4 or 5 of the textbook authors I’ve looked at) as very complex and difficult with considerable uncertainty. HITRAN lists wide variances for the coefficients. This I would not call robust – pretty good maybe, but not robust.

    As an aside it would seem the spectrum broadening seen is predominately due to the rotational sidebands (sidelines??).

    I may find that I’m commenting wrongly or in the wrong thread; I’ll try to learn quickly. Thanks for the indulgence for my first (and frankly IM_own_O not terribly well written) comment here.

    • Rod – The logarithmic (approximately) relationship is not primarily a consequence of line broadening. Rather, it reflects the approximately exponential decline in absorption coefficients as one proceeds outward from the center of the CO2 main absorption region at wavenumber of 667 (wavelength 15 um) into the wings. As CO2 increases, the poorer absorbing wavelengths dictate more and more of the increase in optical thickness, and so the absorption effectiveness becomes correspondingly less powerful. For any foreseeable CO2 concentration, there will always be wavelengths where absorption is low enough for those wavelengths to remain unsaturated. In general, maximum changes with CO2 increases occur at wavelengths where optical thickness (tau) is about one, and these wavelengths depart more and more from the center with increasing CO2.

      This is not a universal property of greenhouse gases, and in fact, water vapor follows a pattern less clearly logarithmic. In addition, the logarithmic property disappears at very low or very high CO2 concentrations.
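      A numerical toy shows how exponentially declining wing absorption yields the logarithmic law; the k0 and w values below are arbitrary assumptions, not CO2 spectroscopy:

```python
# Numerical toy for the log law from exponential wings:
# k(v) = k0 * exp(-|v - v0| / w); the band is "opaque" where tau = k*u >= 1.
import math

k0, w = 1000.0, 10.0   # assumed peak absorption and wing decay scale (cm^-1)

def opaque_width(u):
    """Width (cm^-1) over which tau >= 1 for absorber amount u."""
    return 2.0 * w * math.log(k0 * u) if k0 * u > 1.0 else 0.0

for u in (0.01, 0.02, 0.04, 0.08):
    print(f"u = {u:.2f}: opaque width = {opaque_width(u):6.2f} cm^-1")
# Each doubling widens the opaque region by the same 2*w*ln(2) ~ 13.9 cm^-1:
# the logarithmic forcing law in miniature.
```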

      • Fred,
        The approximately exponential decline in the wings is nicely visible in a logarithmic graph of emissivity which extends several decades towards weaker lines. Unfortunately I do not remember, where I saw this graphic presentation, but it might help many in understanding where the approximately logarithmic behavior originates.

      • Fred M, I appreciate the response, but I fear my syntax created some confusion. My log question (and the 5.43 coefficient) referred to the basic forcing equation — F = 5.43[ln(conc/conc_0)] — and not to the broadening/saturation question.

        I still have problems with broadening stuff as in your comment, “…maximum changes with CO2 increases occur at wavelengths where optical thickness (tau) is about one, and these wavelengths depart more and more from the center with increasing CO2.”

        My difficulty is (and please bear with me: I’m still away from home and will quote from memory numbers I calculated, so they won’t be accurate but maybe enough in the ballpark to make my point…): Doppler broadening (which is pretty clear cut and calculable) will spread the 15um line to about 15.000004um (half-width half-max). Pressure broadening maybe spreads it to 15.0005um at HWHM, though it spreads much further but at lower intensities, to something like 15.01um at 0.1max. Even given my memory-guessed numbers there really aren’t any wavelengths that ‘depart more and more from the center’.

        Or I’m all wet! ;-)

      • Rod B,
        The main issue here is not the shape of individual lines but the number and strength of the weaker lines related to various rotational states.

        It turns out that taking into account lines which are by some fixed factor weaker than lines that were already included each time adds roughly an equal number of new lines, over a wide range of line strengths. As long as this is a reasonable approximation of what happens, the radiative forcing will increase nearly logarithmically with concentration.

      • Rod – I don’t know whether you have access to Pierrehumbert’s article, but if you do, refer to Figures 2 and 3 (particularly Figure 2). What it shows is that the main absorbing band for CO2 is centered at wavenumber 667 (wavelength 15 um), but that this region actually consists of hundreds of individual lines at different wavelengths extending in either direction from 15 um. Each line represents a potential quantum transition in a CO2 molecule, the energy of which depends on the particular combination of vibrational and rotational states in which the molecule finds itself. The absorption coefficient indicating the absorbing capacity of a line at a particular wavelength is greatest at the center (reflecting the strength and probability of quantum transitions with that energy content). For lines found further and further from the center, the absorption coefficients decline (approximately exponentially), so that a line at a wavelength fairly far away from 15 um represents photons with energy quanta that are still absorbable by those CO2 molecules whose energy state makes them capable of that absorption, but with much lower probability (i.e., most CO2 molecules are not in a state that will allow them to undergo that particular quantum transition).

        At high CO2 concentrations, photons at 15 um are efficiently absorbed, and so the atmosphere is opaque at that wavelength up to high altitudes, despite the reduced CO2 concentration with altitude. Further increases will do little to change the total absorbed energy there. However, photons with energies lower or higher than at 15 um now contribute more and more to total absorption, because even though they are absorbed with lower probabilities, the higher CO2 concentrations offset this reduction. With even further increases, the lines fairly near 15 um themselves become less capable of increasing total absorption, because they now are nearer saturation, and so the location of greatest change moves even further away from the 15 um center. At considerable distances, the absorption coefficients are so low that even extremely high CO2 concentrations would fail to allow all the photons to be absorbed before reaching high altitudes, and so on. This is why CO2 is, for practical purposes, unsaturable at all reasonably foreseeable future concentrations.

        Note here the distinction between line broadening, which involves only small percentage changes in absorbing ability, and the shift of the location where absorption is most affected by CO2 increases from those lines at or near the center to the lines further away. The lines themselves aren’t “departing from the center”, but rather the location of those lines (i.e., photon energies) where changes in CO2 concentration make the most difference.

      • Fred,

        any chance of getting some simple number on that?? That is, with a doubling of CO2 the central band is saturated and the wings could add what percentage of what the central band is absorbing?

      • I don’t think the computations are simple, because they require a set of radiative transfer codes combined with input data over a range of wavenumbers – all that handled with a great deal of computing power. However, the resulting logarithmic relationship has been described in a simple expression – ΔF = 5.35 ln(C/C0), where C0 is a CO2 concentration with which a new concentration, C, is being compared. The basis for this estimate is described by Myhre et al 1998

        For a more qualitative and less rigorous treatment, but one that is more intuitively understandable, Pierrehumbert’s description of a few years ago is informative –
        CO2 and Saturation
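        The quoted expression is simple enough to evaluate directly; a minimal sketch (the 5.35 coefficient is Myhre’s, the concentrations are illustrative):

```python
# The simplified expression quoted above, from Myhre et al. 1998.
import math

def co2_forcing(C, C0):
    """Forcing in W/m^2 for a CO2 change from C0 to C (ppm)."""
    return 5.35 * math.log(C / C0)

print(f"280 -> 560 ppm (doubling): {co2_forcing(560.0, 280.0):.2f} W/m^2")
print(f"280 -> 390 ppm (to date):  {co2_forcing(390.0, 280.0):.2f} W/m^2")
# 5.35 * ln(2) gives the familiar ~3.7 W/m^2 per doubling.
```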

      • OK Fred, this was a particularly wasted amount of time. The one paper said NOTHING about line spreading, and the link talked about how it was real, but in a tube experiment it only reduced transmission by about 1%, if I understood it.

        This is more gobbledygook. The fact that it is almost impossible to saturate the wing lines tells us that the effect is so minor that it is silly to discuss it against the main effect, which has had no discernible effect on the earth’s system.

        You can rest your arms now if you don’t have any hard data.

      • Fred and Pekka, I greatly appreciate your responses. It sounds like pressure broadening per se is not the primary factor in saturation (unless the term is misused: as I understand it pressure broadening refers to a particular physical process that actually changes the energy level of say the first vibration mode in a single molecule through molecular collisions (more or less…).) You all seem to be saying that it is the side rotational energy lines that increase their probability of absorption (each rotation line corresponding to a quantum energy state) as the concentration is increased. I find this interesting and will have to mull it over and do some more research (including reading the rest of the thread) before I can comment further.

        Just for the record (to fix my earlier comment) I calculated HWHM true broadening at 15um to be 0.00006um (doppler) and 0.02um (pressure, at STP), though the latter can vary quite a bit.
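        A quick kinetic-theory check of the Doppler figure (my own back-of-envelope; it comes out a few times smaller than the memory-quoted value, but the point that it is tiny stands):

```python
# Kinetic-theory check of the Doppler half-width at 15 um (my own numbers).
import math

k = 1.380649e-23             # Boltzmann constant, J/K
c = 2.998e8                  # speed of light, m/s
m_co2 = 44.0 * 1.6605e-27    # CO2 molecular mass, kg
T = 288.0                    # temperature, K (assumed)
lam = 15.0e-6                # line-center wavelength, m

# HWHM of a Doppler (Gaussian) profile: (lam/c) * sqrt(2 ln2 kT / m)
hwhm = lam / c * math.sqrt(2.0 * math.log(2.0) * k * T / m_co2)
print(f"Doppler HWHM at 15 um ~ {hwhm * 1e6:.1e} um")
# ~1.4e-5 um: tiny on the scale of the band, which is why line broadening
# alone cannot explain the logarithmic forcing behavior.
```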

  52. Judith: I’m surprised you recommend this article. Figure 1 shows the usual slabs of atmosphere with emission (at a given wavelength) depending only on temperature, not the amount of GHG in the slab. Slab models such as these give the mistaken impression that increased GHGs give increased absorption without increased emission (a dilemma that frustrated me for a long time). However, this situation only applies to optically thick slabs of atmosphere, and the Earth’s atmosphere is not optically thick at many relevant wavelengths. This is especially true at and above the tropopause, above almost all of the water vapor and at least three-fourths of the well-mixed GHGs. Here, the assumption that the atmosphere is optically thick appears grossly wrong, and convection doesn’t provide whatever cooling is needed to maintain a stable lapse rate. Unlike Pierrehumbert, most climate scientists writing about slab models usually include the term “optically thick” somewhere in the text – often, of course, without explaining how this limitation applies to our atmosphere. Pierrehumbert has omitted the phrase. Pierrehumbert closes this section by implying that one can take infinitely thin slabs of optically thick atmosphere and obtain the fundamental physics described by Schwarzschild’s equation! (Why can’t climate scientists admit that the radiation emitted by atmospheres is fundamentally more complicated than the radiation emitted by black- or gray-bodies?)

    Pierrehumbert goes on to say: “The top panel of figure 3 compares global-mean, annual-mean, clear-sky spectra of Earth observed by the Atmospheric Infrared Sounder (AIRS) satellite instrument with spectra calculated after the radiative transfer equations were applied to output of a climate model driven by observed surface temperatures. The agreement between the two is nearly perfect, which confirms the validity of the radiative transfer theory, the spectroscopy used to implement it, and the physics of the climate model.” Hasn’t Stainforth proven that different climate models – which give radically different predictions about climate sensitivity – are consistent with observations from space? So how can AIRS data “confirm” the physics of a climate model? (If we MEASURE the temperature and composition of a clear atmosphere at various altitudes, the observed upward and downward radiation do agree reasonably well with theory. I’m not sure we can accurately reproduce emission from all types of clouds.)

    Finally, Pierrehumbert manages to discuss planetary temperature while only mentioning the word “convection” twice, once in the phrase “turbulent convection”.

    (If I have gotten any of my facts wrong, I would appreciate being corrected.)

    • Frank – I believe you misinterpret Pierrehumbert’s article, which is accurate, but focused only on radiative transfer. As a result, convection, which he addresses in detail in his book, is not discussed. The article does not claim that increased absorption is accompanied by no increase in emission – just the opposite – the fundamental equations depend heavily on emissivity – nor does it claim that the atmosphere is optically thick at high altitudes. In fact, the AIRS -derived spectrum demonstrates how optical thickness varies according to wavelength and how it ultimately declines to low values even at the most opaque wavelength, which in the case of the spike in the center of the CO2 region is not reached until the stratosphere.

      Admittedly, the article is a synopsis. For a detailed mathematical treatment, you will need to read the book.

      • I would add that the Pierrehumbert article is unrelated to climate sensitivity modeling as discussed by Stainforth. The differences among models are based on factors outside of the radiative transfer equations discussed in the article.

      • If x^2-3x+2 = 0, we can’t say that x = 2; x might be 1. Just because one climate model reproduces the AIRS data, we can’t know whether that model is the only one that can. Do we have any idea how far we can perturb a climate model and still produce a good fit to the AIRS data? I’m under the impression that Stainforth has found that many different models are compatible with data like that from AIRS.

        Proof that the radiation modules of our climate models are correct begins with their ability to accurately reproduce observed downward (and upward?) radiation from a wide variety of atmospheric situations whose humidity, temperature (and clouds?) have been probed by radiosondes. Do they?

      • The radiative calculations are well understood physics. They are accurate, when the state of the atmosphere is known accurately. All kind of spectroscopy and remote sensing measurements have proven this beyond reasonable doubt. There are also numerous measurements of the atmospheric radiation and they agree with the models to the extent the state of the atmosphere is known (usually they are made to get information on the state of the atmosphere, which means that they are not a strict test of the radiative calculations, but the consistency is still evidence.)

        The radiative calculations can be tested experimentally by laboratory measurements, which often cover also pressures not present in the atmosphere, but extending the pressure range leads to more stringent tests of the theory concerning in particular the line shapes which are the least well known part of the theory.

      • Fred – Thank you for your comments, they helped me go back and see where I misinterpreted some things (and misinterpreted is probably too kind a term given the severity of the mistakes I made). The use of emissivity as a constant in some situations, but as a variable here, got me off track. I’ve seen slab models that were correct only for optically thick slabs and jumped to conclusions. There is one sentence (more than a full screen below the model) that explains that emissivity is proportional to the number of “absorber-emitters”. The equation ΔI = e[−I + B(ν,T)] seems to be lacking a term for a distance increment (unless that is buried in the emissivity term). Even further below the model, I now see optically thin and thick slabs described in somewhat unfamiliar language (eν ≪ 1, “sufficiently extensive isothermal region”). From my perspective, it would have been preferable to express the emission term as the product of an emission/absorption constant, the concentration of GHG present (usually the product of the pressure and the mixing ratio), the Planck function, and an incremental distance, but that is no excuse for my carelessness.

        Any synopsis is a summary of major points. This article discusses the aspects of atmospheric physics most relevant to the case for AGW and neglects others.
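        The corrected slab picture is easy to verify numerically. A sketch of the discrete Schwarzschild march, with illustrative radiances:

```python
# Discrete Schwarzschild march, the corrected slab picture: dI = e*(B - I),
# i.e., each thin layer absorbs e*I and emits e*B. (Numbers illustrative.)
def propagate(I0, B_layers, eps):
    """March radiance through successive thin layers of emissivity eps."""
    I = I0
    for B in B_layers:
        I += eps * (B - I)
    return I

# An extensive isothermal column: radiance relaxes toward the layers'
# Planck value regardless of the surface value it started from.
print(propagate(I0=390.0, B_layers=[150.0] * 200, eps=0.05))
# -> ~150: an optically thick isothermal region radiates like a blackbody
# at its own temperature, the article's limiting case.
```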

  53. actually Fred, Frank,

    the optical thickness and exponential decay are what is going on – Beer’s law, as I recall. These slabs are extremely thin optically speaking, except for strong lines. Implementing Hitran in any sort of a reasonable approach means creating a line-width function for each line. That function includes the partial pressure for the molecule of interest, the partial pressure for the rest of the atmosphere present, the temperature, and the contribution at each wavelength from each line of each isotope of each molecule present, up to 39 different molecules. It also cannot be used in real time in a GCM, but it does give one a spectrum potentially with higher resolution than any ever done with instrumentation.

    As for the co2 empirical log equation, it’s ok over a short range if you like that sorta thing at all. I get similar numbers as what is considered commonly acceptable at the tropopause looking down, roughly 3.6 w/m^2. The real problem is this is only for clear skies, as I mentioned above in a recent post.
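    For readers unfamiliar with what such a line-width function looks like, here is a minimal sketch of one pressure-broadened Lorentzian line (the half-width is an assumed round value; the real HITRAN recipe has more ingredients):

```python
# A minimal version of the per-line shape function described above: a
# pressure-broadened Lorentzian. (The real HITRAN recipe also scales the
# half-width with temperature and self vs foreign partial pressure.)
import math

def lorentz(nu, nu0, S, gamma):
    """Absorption coefficient at wavenumber nu for a line at nu0 with
    integrated strength S and half-width gamma (consistent units)."""
    return S / math.pi * gamma / ((nu - nu0) ** 2 + gamma ** 2)

gamma_1atm = 0.07          # assumed air-broadened HWHM at 1 atm, cm^-1
for p in (1.0, 0.1):       # surface vs roughly stratospheric pressure, atm
    g = gamma_1atm * p     # half-width scales ~linearly with pressure
    print(f"p = {p:.1f} atm: HWHM = {g:.3f} cm^-1, "
          f"peak k = {lorentz(667.0, 667.0, 1.0, g):.1f}")
# Lower pressure means narrower, taller lines, one reason blocking aloft
# differs from blocking near the surface for the same absorber amount.
```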

    • CBA – Beer’s Law does dictate exponential decay for monochromatic EM radiation as a function of path length, but that is not the source of the exponential decay operating to render CO2-forcing relationships approximately logarithmic. See my above comments and those of Pekka for more details. It is interesting that Pierrehumbert’s new book goes into some detail in explaining this.

      The 3.7 W/m^2 value now considered the best estimate for CO2 doubling is based on all-sky rather than clear sky calculations – Myhre 1998

      There are no data I’m aware of to suggest that the logarithmic relationship does not operate over a large range of CO2 concentrations, including those reasonably probable in coming centuries. I have not seen a rigorous analysis, but I believe there is more than enough unsaturation in the wings of the CO2 bands to maintain such a relationship.

      • I forgot to mention that the 3.7 W/m^2 value is derived by multiplying the value of 5.35 for the constant in Myhre’s Table 3 by ln 2.

      • having come up with the value using a 1-d Hitran based model, I’ve seen the results and they are only sort of log. Log would mean each doubling (or halving) would result in the same increment of power, 3.7 w/m^2. After about 8 or 10 halvings, that value is reduced to about 2.7w/m^2, roughly the same amount as the TOA reduction for a co2 doubling instead of tropopause. Total contribution to the atmospheric blocking by co2 is under 30 w/m^2 (over the range of 0.2 to 65 microns).

        As for all sky, I’ll believe that when I see it calculated, and not with some gcm time-iterative video game. Above cloud tops the pressures are lower and the line widths are less. Also the drop in T is less, since higher up T rises again. Blocking is associated with all of these factors, and despite lower T values for cloud tops than for the surface, emissions are going to be continuum, not spectral. I just do not believe that these factors will permit the co2 to have the same effect as in clear skies.

      • ” I just do not believe that these factors will permit the co2 to have the same effect as in clear skies.”

        It doesn’t have the same effect as in clear skies, but the calculated value of 3.7 is an all-sky value, not a clear sky value.

      • Everything I’ve seen says it’s a clear sky value, including my own 1d model results. Archer’s modtran calculator set for 1976 std atm and 12 km (~tropopause) shows about 3.5 w/m^2 vs mine, which is about 3.6 w/m^2. At the top of the atmosphere, that number drops by a w/m^2 for mine at 90 or 120 km. I don’t think the modtran calculator goes beyond 70 km. Tossing in a cloud for the modtran calc also results in a drop of about a w/m^2.

        The 1976 std atm is not really an average but more of a most typical value.

  54. Ron Cram says on 1/20 @ 8:38 am:


    “Dr. Curry,
    You mentioned Slaying the Sky Dragon. I have not read it, but would be interested in reading your review. A well-written review, describing points of agreement and disagreement, can truly advance understanding of the science. I hope to read a lengthy review of the book here. It’s one vote. I hope you will consider it.”

    I would also be interested in such a review. There are some interesting thoughts in the book. Some “real physics” that disagrees with some of the other “real physics.”

    • Jae, I originally intended to review the book, then I read it and decided not to. It is not a serious book IMO; it is thrown-together essays, and there is so much to rebut that it would take me weeks to do. There are a few interesting albeit speculative thoughts, but they are overwhelmed with highly flawed material.

      • That’s an excellent review. You got’r done quicker than you thought you could.

      • Judith: I do hope SOMEONE offers more than the vague arm-waving you did here. How about just a few facts? Some snippets on some of the “flawed material.”

        BTW, I know this is heresy here, even among self-proclaimed “skeptics,” but the GHE is purely speculative, since there is no empirical evidence for it :)

  55. Dr. Curry and Fred,
    I would like to make an observation here. Earlier Fred claimed that Ray’s paper demonstrates that the theory is confirmed by observations at the top of the atmosphere. But aren’t these observations at the top of the atmosphere the very same observations Kevin Trenberth was referring to regarding his “travesty” about “missing heat?” I remember Kevin saying the observation system must be wrong.

    So, how is Ray’s paper any help when the theory is confirmed by observations known to be wrong?

    • It’s a legitimate question, Ron, which I would answer as follows. It is not clear whether the TOA observations relevant to Trenberth are the source of the “missing heat” problem. In fact, if there are errors in TOA flux measurements, they appear to relate mostly to reflected solar shortwave radiation rather than outgoing longwave IR.

      However, even if the latter is inaccurate enough to generate the missing heat problem, that would be because a very tiny percentage inaccuracy looms large when one tries to determine a small difference between two large quantities (incoming solar and outgoing IR). The same error (less than 1 percent) would be of minimal importance in terms of confirming the radiative transfer computations.

      • Fred,
        Thank you for the quick reply. I’m not sure I buy it. If I remember correctly, Trenberth’s observations would lead one to think the planet is warming six times faster than the IPCC says it is. This is not a minor discrepancy.

      • No, Trenberth’s data do not differ from IPCC by sixfold. His TOA imbalance is 0.9 W/m^2, whereas the fluxes in and out are of the order of 340 W/m^2. Even if the 0.9 disappeared completely, and even if that were because the outgoing flux was the inaccurate one (unlikely), the percent error would be very small. Trenberth, in fact, appears to think that much of the error may reside in an inability to account for heat content stored in the deep ocean, but whether that’s correct is very speculative.

      • Ron – in looking at your comment, I believe the “sixfold” difference you refer to does not relate to the IPCC but to the difference between Trenberth’s modeled value of 0.9 and some of the extreme CERES observational data several times that amount. The CERES data must certainly be wrong, but it is also very probable that this involves their difficulty in correctly assessing shortwave solar radiation reflected from clouds. The outgoing IR measurements are much more reliable.

      • I know this is esoteric, but part of the problem with CERES-derived values of incoming solar radiation may lie with apparent errors, recently corrected, in the value of the solar constant, such that the corrected value is now about 1360-1361 rather than 1365 W/m^2. If the earlier values were truly too high, the CERES-based calculations of absorbed heat would be too high. This would probably have little effect, however, on the Trenberth value of 0.9 W/m^2 TOA imbalance, and wouldn’t completely close the energy budget to Trenberth’s satisfaction.

      • Fred,
        Can you direct me to the paper which corrects the CERES measurements and explains the error?

      • It will have almost no effect on the 0.9 W/m^2 value as it was adjusted to model estimates:

        From Trenberth, Fasullo, Kiehl, “Earth’s Global Energy Budget,” AMS 2008:

        “The Clouds and the Earth’s Radiant Energy System (CERES) measurements from March 2000 to May 2004 are used at TOA but adjusted to an estimated imbalance from the enhanced greenhouse effect of 0.9 W m-2.”

        and later

        “As noted in section 2, the TOA energy imbalance can probably be most accurately determined from climate models and Fasullo and Trenberth (2008a) reduced the imbalance to be 0.9 W m-2, where the error bars are ±0.15 W m-2.”

      • Trenberth’s more recent paper, in 2009 or 2010, describes this. The CERES data were adjusted using some value associated with the TIM on SORCE, before the measured value from TIM became the accepted one. The value Trenberth used was 1365.2 instead of 1365.4 W/m^2, as shown in his table. He didn’t use 1360.8 W/m^2. The difference, when averaged over Earth’s surface, turns out to be 0.8 W/m^2 less for the new TIM value compared with the originally accepted value. That also assumes the albedo-reflected power must be reduced accordingly.

        This is effectively enough to eliminate the 0.9 W/m^2 imbalance Trenberth assumed existed.
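        For readers following the arithmetic, a back-of-envelope sketch (the 0.30 albedo is an assumed round number):

        ```python
        # Global-mean effect of revising TSI from the old to the new TIM value.
        tsi_old, tsi_new = 1365.4, 1360.8   # W/m^2
        albedo = 0.30                        # assumed planetary albedo

        delta_incoming = (tsi_old - tsi_new) / 4.0   # spread over the sphere
        print(delta_incoming)                         # ~1.15 W/m^2

        # If the reflected (albedo) power scales down with TSI, the change
        # in absorbed solar radiation is smaller:
        print(delta_incoming * (1.0 - albedo))        # ~0.8 W/m^2
        ```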

      • As an aside, it would be really helpful to understand the source and methodology for each element of the energy balance diagram. I am still unclear about what aspects are measured and what are calculated or modelled.

      • The correction for solar irradiance does not eliminate the 0.9 W/m^2 imbalance, and probably has little influence on it. The 0.9 value is from model estimates rather than CERES measurements, and if the solar input is reduced in the models, the outgoing flux will also be reduced, so that an imbalance remains.

      • So you are saying the models are not connected to reality??

      • No, what I was saying is that the models use observational data as inputs and then compute outputs on the basis of the physical principles utilized (conservation of energy and momentum, heat capacity, etc.). The computed value of outgoing radiation will depend on the input, and so if the inputted solar radiation is corrected downwards, the computed value for outgoing radiation will also be reduced. The TOA imbalance represents the difference between the two, and so it is unlikely to be eliminated simply via the corrected input. I can’t estimate whether it will change, but if so, it would be by an extent smaller than the 0.8 W/m^2 correction.

      • As I understood it, the models provided 0.85 W/m^2 and utilize 1365.4 W/m^2 as TSI, the old accepted value. The table in Trenberth’s 09 article uses 1365.2 W/m^2 and claims to be using CERES, slightly corrected somehow by TIM. The imbalance is given as 0.9 W/m^2, and I think the paper mentions that the calculation from measurements agrees very closely with the model.

        In either case, changing the value from 1365.4 to 1360.8 W/m^2 results in a significant difference. What is uncertain is whether this correction affects the albedo measurement also: the difference is 0.8 W/m^2 if the albedo-reflected power is corrected along with TSI, and 1.15 W/m^2 if it does not need the adjustment.

        In either case, model or measured, this correction is going to have the same effect, a reduction of at least 0.8 W/m^2, because the initial numbers are off.

      • The modeled imbalance utilized the uncorrected solar irradiance, but correcting it in the model won’t have the same effect as correcting the actual flux measurements, for the reason stated above. If only the solar flux is corrected, but the measured outgoing IR is not, the difference drops by about 0.8 W/m^2, because only one of the two fluxes is changed. Making the correction in the model will reduce both incoming flux (derived from measurements) and the outgoing flux (computed from the model), and so the imbalance won’t disappear.

      • Fred,

        Although GCM code is not something I have time to dissect, outgoing LW is not tied to incoming but only to temperature and atmospheric transparency. Unless the GCM is totally non-physical, it should not be connecting incoming SW to outgoing LW. Outgoing SW is a different matter entirely, as that is albedo.
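        A toy illustration of that claim in its simplest grey-atmosphere form; the effective emissivity here is an assumed, illustrative number, not a GCM quantity:

        ```python
        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

        def olr(surface_temp_k, effective_emissivity=0.61):
            """Toy grey-atmosphere OLR: a function of temperature and of the
            atmosphere's transparency (the emissivity factor), not of incoming SW."""
            return effective_emissivity * SIGMA * surface_temp_k**4

        print(olr(288.0))  # ~238 W/m^2, near the observed global-mean OLR
        ```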

      • Are you saying that the quantity of solar radiation that is absorbed by the surface and atmosphere does not affect the calculation of OLR? During a forcing, much of that energy does not appear immediately in the form of a temperature increase but rather as stored ocean heat that has the potential to increase SST and OLR. You may be right, but I’d be interested in evidence for your statement. I noticed that the Lean et al article on the corrected solar flux did not claim that their correction eliminated the 0.9 (or 0.85 in their paper) imbalance, even though they pointed out that the magnitude of the correction was about the same as that of the imbalance (and of course of the opposite sign). Perhaps they were simply being cautious.

      • Fred,

        I’m saying that. As you noted, they stated the magnitude of 0.8 W/m^2, while the actual value for the incoming radiation alone is 1.15 W/m^2; when you subtract the albedo portion (which may or may not be derived or measured from the incoming), one gets the 0.8 W/m^2. They also note that the old value of 1365.4 W/m^2 is used in all models.

        Whether this was caution or a compromise to get through peer review, I have no idea. The implication was clear to me. It might also be that dissecting the model code, or running the models to determine whether this would actually be the result, was outside the scope of their project, so they opted not to make the claim.

        The problem of TIM reading values about 5 W/m^2 below the commonly accepted value is not a new discovery; only the validation of TIM being right is new. Claims that TIM was correct and the former best estimate wrong were not being made until the validation process took place.

        It may be that some GCMs do not correct for the imbalance once the new value is put in. If so, those GCM codes could have problems and may not provide a legitimate difference; for instance, the prospect exists of hard-coded values for TSI in some.

        To answer your first question: yes, I’m saying OLR is not related to incoming solar, other than that over the long term there must be energy balance. OLR is related only to surface and atmosphere temperatures and to the ability of the OLR to pass through the atmosphere.

        Most ocean heating will occur due to incoming SW that can penetrate the surface. Additional incoming LW to the surface (such as that caused by more GHGs) will only heat the surface, and that will show up as an immediate T increase or as increased evaporation-cycle activity. Besides, the need for missing heat to be warming the lower ocean no longer exists with the current measurement calculations.

        The evidence I’m providing is really what is in the paper and the application of rational thought to it.

        Also, there’s no reply button on your comment, so I’m not sure what to do.

      • Trenberth, Fasullo and Kiehl (TFK 2009) describe the logic as follows:

        Based on the hypothesis that the fluxes were in balance before GHG levels were increased, and on the estimated climate sensitivity, they accept the Hansen et al value of 0.85 W/m^2 for the TOA imbalance and present it to one decimal as 0.9 W/m^2. The CERES data and its analysis lead to an imbalance of 6.4 W/m^2, which they consider to have significant uncertainties. These uncertainties were discussed in more detail in Fasullo and Trenberth (FT 2008), where the annual cycle and geographic distribution were also examined. As the value 0.9 was considered much more accurate than the direct analysis of the CERES data, they made adjustments to the analysis in order to reach consistency.

        This is not a calculation of the value 0.9, which is an input constraint, not a result. Changing other inputs such as the solar SW heating does not affect this input constraint at all, but it appears to help in reaching consistency, because the starting value of 6.4 from the CERES analysis would be lowered and the need for adjustments would be smaller. The change in solar SW is, however, not large enough to make adjustments unnecessary or to change their direction.

        The observed radiative imbalance can be used to determine the TOA imbalance only when the data coverage is more complete and the calculations linking the actual observations to the full annual imbalance for the whole earth are more accurate. The uncertainties would have to be reduced to roughly one tenth of their present size to make this approach really valuable for determining the earth’s energy balance.

      • There are comments referring both to the Hansen model value and to adjusted CERES data. Table IIa shows the imbalance and the incoming/outgoing/albedo reflection used to calculate it. If they are only using Hansen’s value rounded off, then which other numbers are they accepting from Hansen, since the columns add up to the 0.9 imbalance? Note that each column of Table IIa is used for this: the ASR is the difference between TSI and albedo-reflected power, and the outgoing LW is subtracted from the ASR to give the balance.

        So which of these values came from Hansen, or were fudged to work with the Hansen value, rather than a correction being applied to the CERES data (FT08a)? In any case, the incoming solar is listed at 341.3, which corresponds to 1365.2 W/m^2, just under the old accepted value of 1365.4 W/m^2. If you use the new value in that column and reduce the reflected albedo power accordingly, the result is an imbalance reduced by 0.8 W/m^2.
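        For reference, the global-mean TOA column closes like this (values in W/m^2 as commonly quoted from TFK09; worth checking against Table IIa itself):

        ```python
        # Global annual-mean TOA budget as tabulated by Trenberth, Fasullo
        # & Kiehl (2009); all values in W/m^2.
        incoming_solar = 341.3   # TSI/4, i.e. 1365.2 / 4
        reflected_sw   = 101.9   # albedo reflection
        outgoing_lw    = 238.5   # OLR

        asr = incoming_solar - reflected_sw   # absorbed solar radiation
        print(asr - outgoing_lw)              # 0.9 W/m^2, the assumed constraint
        ```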

      • cba and Pekka – I have traced the source of the model-derived imbalance to Hansen – Science 2004

        The imbalance is computed by comparing imbalances due to forcings from changes since 1880 in GHGs, solar irradiance, aerosols, volcanic eruptions, and similar phenomena with the magnitude of the climate response to those imbalances as computed from temperature increases. The imbalance of 0.85 W/m^2 is the extent remaining to which the response fell short of eliminating the forcing-derived imbalance over that long interval.

        The solar contribution is only one of many. More importantly, the modeled imbalance is not computed directly from the magnitude of solar input but from the magnitude of its change over time. This implies that a small percentage correction in the value assigned to solar irradiance itself should have little effect on how much that value changed since 1880. I interpret this to mean that the correction probably has no appreciable effect on an imbalance calculated via the Hansen model, used by him, Trenberth, and others as an estimate.

      • I want to interject, in case it helps the discussion, that to warm 500 m depth of ocean by 0.15 C per decade takes about 1 W/m2. (The atmospheric heat capacity is minimal in comparison). Of course, I chose 500 m carefully to match these numbers. There is no particular reason why this depth would be representative of the warming layer.
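        The figure checks out with nominal seawater properties; a quick sketch:

        ```python
        # Flux needed to warm a 500 m ocean layer by 0.15 C per decade.
        rho = 1025.0     # seawater density, kg/m^3 (nominal)
        cp = 4000.0      # seawater specific heat, J/kg/K (nominal)
        depth, dT = 500.0, 0.15
        decade = 10 * 365.25 * 86400   # seconds

        print(rho * cp * depth * dT / decade)   # ~1.0 W/m^2
        ```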

      • So Fred,

        do you really think Hansen’s model can actually forecast or hindcast an imbalance without knowing how much radiation is entering or leaving the atmosphere? It appeared that Hansen’s paper assumed there was no adjustment for the imbalance other than surface T. There appeared to be other potentially serious problems as well, such as not dealing with a variable albedo.

        Ultimately, it’s all a computer simulation, with approximation upon approximation of only partly understood relationships and dynamics.

        According to the Kopp & Lean paper, all of the models are using 1365.4 W/m^2. As for responses, especially from the ocean, where does Argo stand with the more modern measurements? Are you ready to dismiss Douglass et al?

      • The Fasullo and Trenberth paper (Journal of Climate, 21, 2297-2312 (2008)) lists in Table 1 possible adjustments to the CERES analysis, taken from a 2006 paper by Wielicki et al, for reaching consistency with the Hansen estimate of 0.85 W/m^2. They give 10 possible adjustments to SW components and 6 to LW. These possible adjustments add up to a maximum of 6.4 W/m^2, while the discrepancy they believe to exist is 5.5 W/m^2.

        Their reference to the Wielicki et al conference paper leads only to a short abstract, which is of little help.

      • Pekka,

        “Based on the hypothesis that the fluxes were in balance before the GHG levels were increased…”

        This is a flawed hypothesis as all relevant information would indicate we have been warming since the Maunder and Dalton Minimums.

        There goes a little bit of their imbalance. 8>)

      • I used the formulation “based on the hypothesis ..” precisely to indicate that it is not strictly true. The point is that it was considered useful to add some constraint, since the unconstrained data was clearly more inaccurate than the differences between alternative constraints. What is known is that the imbalance is not significantly more than 1 W/m^2. Beyond that fact there is some arbitrariness in the choice. FT 2008 and the later papers chose 0.9 and presented their justification.

        That’s all. They could have explained this more clearly in the later papers read by a wider audience.

      • Pekka,
        That table is the list of changes made to the CERES data. It appears that the 1365 W/m^2 TSI is being adjusted by 1 W/m^2.
        Rt is the net in minus out, where “in” is TSI and “out” is albedo-reflected power plus LWIR emissions at the TOA.
        The TSI was not adjusted to the new TIM-accepted value, as the table indicates only a 1 W/m^2 adjustment to TSI.
        I see nothing in the paper, on a quick review, to indicate that the Kopp & Lean TSI change is not applicable. That means the 0.9 W/m^2 is going to be essentially 0 once the correction for TSI occurs.

      • The 0.9 W/m^2 modeled value for the TOA imbalance will be little if at all affected by the TSI correction, because it is not based on the magnitude of the TSI. On the other hand, CERES-based imbalance estimates will be reduced by about 0.85 W/m^2 (subtracting the albedo share from the correction and dividing by 4). However, those estimates are much higher than 0.9 to start with, and even with the adjustment will probably remain above 0.9.

        The Kopp and Lean paper refers to the CERES data adjustments, but aside from stating that the modeled imbalance (0.85-0.9) has about the same magnitude as the adjustment, they don’t imply that one should be subtracted from the other.

      • It is unfortunate that this misunderstanding of the Trenberth et al papers persists. They do not calculate the 0.9 from TSI and CERES observations. They assume the value 0.9 and modify the CERES-based data, forcing the numbers to agree with an imbalance of 0.9. They did that with the old data, and they would do it with new data. The modifications to the CERES-based data will be different with new data, but the value 0.9 is not affected, as it is assumed.

      • Just how do you get the imbalance for incoming SW versus outgoing SW (albedo) + LW (emissions) without dealing with:
        1. incoming TSI magnitude
        2. albedo
        3. radiated emissions
        ?
        That is where the imbalance, Rt, comes from.
        It has to be in any model and in any measurement, short of a true differential measurement, and such a measurement would have to differentiate between apples on one side and apples plus turnips on the other. I don’t believe such a sensor exists. If CERES has one and it only reads 6 W/m^2, then the kludges used to fix it use the TSI.

        This TSI information does appear in all the papers mentioned.

        Please explain, if you know, exactly how Rt can be determined without a value for TSI.

      • I know only what I have read in the papers of Trenberth, Fasullo and Kiehl.

        The idea is to use all information available. When that is done without adjustments, they end up with an imbalance of 6.4 W/m^2 at the top of the atmosphere. The uncertainty of this value was known to be approximately equal to the value itself on the lower side. Thus this analysis tells us that the imbalance is likely to be positive (warming) and not likely to exceed roughly 10 W/m^2 (the uncertainty is not fully symmetrical, and I do not know the upper limit).

        They could have published the various components of heat flux on this basis, but they felt that the imbalance estimate of 0.85 W/m^2 presented by Hansen et al was good enough to be used as an additional constraint on the data. Thus they looked at the data and the uncertainties in its various components, making adjustments that led to the final imbalance of 0.9 W/m^2. The point of this exercise is not to justify the value of 0.9, and indeed it gives no additional support for that value. The point is to force the other numbers to values that represent the various factors more consistently: consistent in the sense that the imbalance at the TOA takes a plausible value, not likely to differ much from the real value, which they cannot determine.

        One of the adjustments did concern TSI. It was adjusted from 1365 to 1361 W/m^2, reducing the remaining need for adjustments by 1 W/m^2. Other adjustments were applied to absolute calibration, spectral correction and several other factors influencing the analysis.

      • Admittedly the paper is very muddy at that point, but adjusting from 1365 to 1361 is 4 W/m^2, not 1 W/m^2. At the time, 1361 W/m^2 was not accepted as valid; however, it looks like they considered a 1 W/m^2 reduction plausible. If that were really 1 W/m^2 averaged over the entire globe’s surface, then that puts the 6 W/m^2 discrepancy at 2.5% rather than half a percent. That is just paying lip service to the notion of some sort of measurement. How about a new error measurement: 0.85 +/- 0.15, give or take 6.5 W/m^2?
        If that is the case, then the 0.85 is solely a model number, which is again subject to the model using 1365 W/m^2 and not the new 1361 W/m^2; and if it is a legitimate model, that means the 0.85 +/- 0.15 is going to become 0.05 +/- 0.15.

        I thought this whole series of papers was about measurements of real physical properties. Evidently, it is not.

      • The reason the 0.85 W/m^2 flux imbalance (currently 0.9 W/m^2) estimated by the model is not perceptibly reduced by the TSI correction is that the model does not use the magnitude of TSI but rather the solar forcing, i.e., the change in TSI since 1880. That change will be affected only minimally by a very small percentage reduction in the value assigned to TSI itself (Hansen 2004, Science).

      • The adjustment must be divided by 4 because the values 1365 and 1361 correspond to the earth’s cross section (pi * R^2), while the other numbers are for the whole surface, which is four times the cross section (4 * pi * R^2).

      • The solar forcing is not known from 1880. It is now assumed to remain quite constant, varying only over the cycles. Again, I do not see how a model based upon physical principles can get away without the fundamentals.
        As for the 0.85 mentioned in Hansen 2004, there are serious problems with claiming that the measurements are accurate. Hansen claims 0.85 +/- 0.15 W/m^2 (from Levitus) for measurements, and 0.06 +/- 0.12 W/m^2 for 1993-2003.
        Looking at Douglass & Knox 2010, they reference Lyman et al (Nature 2010), with a trend of 0.63 +/- 0.28 W/m^2 from 1993-2008. Note that while the time frame is extended by 5 years, the differences place Levitus outside its own error bars if it were consistent with Lyman, and it is a bit curious that, with more data, Lyman has significantly larger error bars.
        It appears that data prior to 2003 are from XBTs, expendable bathythermographs, described as having biases and systematic errors (referenced by Douglass from Wijffels et al). This is the data Hansen describes as extremely accurate.

        After 2002, the Argo buoy system was providing much more accurate data, which was used by Douglass & Knox, http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf . Douglass & Knox use only Argo data, 2003-2008.

        The net result: the Argo-based analyses, Douglass & Knox plus several others listed in their paper, indicate a small negative value for incoming heat, not really statistically distinguishable from zero, but definitely not indicative of a positive heat imbalance as promoted by Trenberth, Hansen, Lyman, or Levitus.

        So now we have no current heat imbalance. That means whatever was potentially causing the earlier results, whether instruments or a real imbalance, is no longer causing one.

  56. FWIW, here’s a puzzle (at least it’s a puzzle for me, fully admitting that I’m old and maybe wanting in some areas :-) )

    (cross-posted from here: http://blogs.chron.com/climateabyss/2011/01/the_tyndall_gas_effect_part_4_of_4_what_would_happen.html#comments )

    “I spent quite a bit of time farming on the Great Plains (eastern CO) when I was younger and brighter. An interesting thing about that area is that the soil can be as dry as toast and yet the humidity can still vary greatly, due to the “game of tag” between the cold, dry Arctic air and the warm, moist Gulf air. And often there is no increase in cloudiness as the humidity increases. What I find most interesting is that it is absolutely no hotter on the humid days than on the dry days out there (it is definitely more uncomfortable, of course, because the sweating does you little good). Due to all the change in water vapor (the big Tyndall gas), wouldn’t one expect the humid days to be hotter?”

    I have been harping on this theme for over 3 years, and I have yet to have anyone explain just why more GHGs don’t have any discernible (OBSERVABLE) effect on temperature IN THE REAL OBSERVABLE WORLD. Not in the temperature record. Not in the models (there is NO “hot spot,” as predicted). Only in radiation cartoons is there an effect. Something is definitely missing. Folks, I continue to say that, without EMPIRICAL evidence of a “Tyndall effect” (GHE), you don’t have ANY CAGW science. Even Einstein had to wait a few years, until an eclipse occurred, to prove his relativity theory, by EMPIRICAL EVIDENCE!! Just where is the empirical evidence of AGW?

    BTW, anyone who has not read Slaying the Sky Dragon should do so, even if s(he) thinks it is bunk. At least s(he) can say, “I did that.” Just sayin’ it is BS does not advance the knowledge (Judith).

    • How about a humid night, doesn’t it cool less quickly than a dry night if both are clear? That would be the greenhouse effect.

    • JAE – The problem with just thinking in terms of temperature is that it doesn’t tell you about the actual heat content of the air. Ninety degree air with 50% rel. humidity does have a higher heat content (i.e., joules of energy) than 90 degree air at 10% rel. humidity.

  57. Jim D:

    “How about a humid night, doesn’t it cool less quickly than a dry night if both are clear? That would be the greenhouse effect”

    This is one of the things that makes the “atmospheric greenhouse effect” so difficult to discuss. Yes, you can credibly explain the slower cooling via a GHE. BUT you can also explain it by the very basic fact that water vapor has TWO TIMES the thermal storage capacity (Cp) of the rest of the molecules in the air. It simply takes longer for all that energy to dissipate.

    Here’s another related puzzle: Atlanta and Phoenix are at (virtually) the same latitude and elevation. Yet the temperatures in Phoenix in the summer, DAY AND NIGHT, are MUCH higher than in Atlanta, even though Atlanta has three (3) times as much greenhouse gas overhead. Why?

    • The ground cools radiatively faster on a dry night. This has nothing to do with the air heat capacity that changes by at most one part in a thousand between dry and moist air.

      Phoenix gets hotter because the surface is dry, and all the energy from the sun goes into heating the ground rather than some into evaporating soil moisture. This is the idea of the Bowen ratio, sensible and latent heat fluxes, and does not relate to IR at all. The best time to look for IR effects is at night.

      • “The ground cools radiatively faster on a dry night. This has nothing to do with the air heat capacity that changes by at most one part in a thousand between dry and moist air.”

        I think it is about 1 part in 100 at STP, going from absolute humidity of 5 g.m-3 water vapor to 20 g.m-3. So for a temp. change of 10 C, that’s 20 joules/1000 joules, or 2 parts in 100. That might be significant.

        “Phoenix gets hotter because the surface is dry, and all the energy from the sun goes into heating the ground rather than some into evaporating soil moisture. This is the idea of the Bowen ratio, sensible and latent heat fluxes, and does not relate to IR at all. The best time to look for IR effects is at night.”

        This is correct. But it also shows how evaporation offsets at least some of the heat gains from radiation (solar and GHE) (note that this is a NEGATIVE feedback, BTW). If evaporation serves to balance heat gain, there is no problem, eh?

      • Yes, I underestimated and it is 1% for the maximum effect of humidity on the heat capacity, still insignificant because instead of cooling by 1 degree, you cool by 1.01 degrees when it is drier, which goes no way to explain the real difference in cooling rates at the ground surface.
        Evaporation is putting latent heat into the atmosphere that later turns into real heat when condensation occurs, so it is not a way out of heating.

      • Jim:

        I don’t understand your comment. Please explain the connection between 1% and 1.01 degrees.

      • For a given rate of energy loss due to radiation, for example in J/kg/s, the cooling rate (K/s) is this divided by the heat capacity. If the heat capacity changes by 1%, so does the cooling rate.

      • ?? I still don’t understand. Can you provide more logic/background/reference on this? An equation?

      • rho * cp * dT/dt = Q,
        where rho is the air density (kg/m^3), cp is the heat capacity (J/kg/K), dT/dt (K/s) is the rate of temperature change, and Q (J/m^3/s) is the heating rate per unit volume.
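        Putting numbers on the point: a sketch with an assumed, illustrative cooling rate Q, showing that a 1% change in heat capacity shifts the cooling rate by only 1%:

        ```python
        rho = 1.2                 # air density near the surface, kg/m^3
        cp_dry = 1005.0           # heat capacity of dry air, J/kg/K
        cp_humid = 1.01 * cp_dry  # ~1% higher for very humid air
        q = -0.06                 # illustrative radiative cooling rate, W/m^3

        for cp in (cp_dry, cp_humid):
            print(q / (rho * cp) * 3600.0)   # cooling in K per hour
        # The two rates differ by only 1%: heat capacity cannot explain
        # why dry clear nights cool so much faster at the surface.
        ```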

      • No, you still underestimated it; it’s 2%. Please re-read my response (HOH has twice the Cp of air).

        Whatever. You said:

        “The best time to look for IR effects is at night.”

        Well, what do we compare in the real world in order to see if there is a GHE? The temp in Phoenix is still way hotter at night than it is in Atlanta, regardless of the fact that there is only 1/3 as much GHG.

        I wish we could begin by agreeing that there is absolutely NO empirical evidence of an “atmospheric greenhouse effect.” There are only some radiation cartoons.

      • The evidence is by measuring the downward IR on a clear night with any simple radiometer that is used in atmospheric field studies or student practical courses. It is not zero, more like a few hundred W/m2.

      • Jim: The presence of IR is NOT evidence of a GHE! Of course there is IR, absorption, emission, collisions, LTE, etc., etc. What I want to see is a demonstration that the presence of this IR “overcomes” other factors, such as convection, evaporation–to make things yet warmer than they would be otherwise. If that demonstration existed, we could even call the belief in such a process “scientific!” As it is now, you only have an unproven hypothesis, no matter whether 99% of all scientists believe it.

      • It amused me when I figured out that not only is the GHE not the reason a greenhouse gets hot, but there is no way to arrange or manipulate things to demonstrate the greenhouse effect in a greenhouse. I bought a CO2 gauge (it also measures temperature) with the intent of checking for myself the 3C rise per doubling of CO2 with a given IR source…but…

        There was a recent study of cassava growth in CO2-enhanced atmosphere, but I did not get a reply when I asked what cooling mechanism they used to offset the increased temperature in their greenhouse. Want to guess why they did not disclose the cooling mechanism in their paper?
        http://academic.research.microsoft.com/Paper/5017614

  58. If human emission of CO2 causes global warming, why is it that, after human emission of 235 billion metric tons of carbon (http://bit.ly/gIkojx), the global warming rate of 0.16 deg C per decade for the period from 1970 to 2000 is nearly identical to that for the period from 1910 to 1940, as shown in the following data?

    http://bit.ly/eUXTX2

    Experiment is the final judge of a scientific dispute. As the previous maximum global warming rate of about 0.15 deg C per decade has not been exceeded despite a five-fold increase in human emission of CO2, there is nothing unusual or unprecedented about the current global warming rate of 0.16 deg C per decade.

    As a result, there is no evidence of man-made global warming.

    • If human emission of CO2 causes global warming, why is it that, after human emission of 235 billion metric tons of carbon (http://bit.ly/gIkojx), the global warming rate of 0.16 deg C per decade for the period from 1970 to 2000 is nearly identical to that for the period from 1910 to 1940, as shown in the following data?

      Well, one possibility is that there were other factors in play between 1910 and 1940 which contributed to the warming during that period but have not been major factors in recent years. For example, solar activity rose in the early part of the last century but has been fairly flat or falling since about 1960; also, volcanic activity was relatively low during the earlier period.

      • andrew adams,
        When skeptics invoke ‘one possibility’ as an explanation they regularly get excoriated.

        Could your ‘one possibility’ be that climate science has misunderstood significant aspects of how the atmosphere works?

      • Hunter,

        No, I don’t see any need to resort to that assumption on this particular question. And I have no problem with people exploring “possibilities” as long as there is actually some evidence or logical reason to support them and it is not just baseless speculation.

        In my case there is certainly evidence which supports my claim – for example Lean 2005 on the solar influence and Zielinski 2000 on volcanoes.

  59. A basic element in the article caught my eye, because it is something I did not realise before, though it is a fundamental part of radiative transfer theory: it seems that the absorption/emission due to CO2 and other gases depends only on the mass of gas in an elementary volume, at least in the dilute regime valid for the whole earth atmosphere. It means that 1 kg of air at a given temperature would absorb and emit the same amount of radiation whatever volume it occupies (small near the ground, large near the TOA).

    Is this correct?

    If it is, could somebody tell me what the mass ratio is between the troposphere and the rest of the atmosphere? And does this ratio change with temperature (ground temp? TOA temp? top-of-troposphere temp?)
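    On the mass question: in hydrostatic balance, the mass of air above a level is proportional to the pressure there, so the split follows directly from the tropopause pressure; a sketch with a representative value (the tropopause pressure, and hence this ratio, varies with latitude and season):

    ```python
    # Column mass above pressure level p is p/g (hydrostatic balance),
    # so the troposphere's mass share follows from tropopause pressure.
    p_surface = 1013.25    # hPa
    p_tropopause = 200.0   # hPa, representative mid-latitude value

    frac_tropo = (p_surface - p_tropopause) / p_surface
    print(frac_tropo)                       # ~0.80 of the atmosphere's mass
    print(frac_tropo / (1.0 - frac_tropo))  # mass ratio, roughly 4:1
    ```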

  60. Tomas Milanovic

    Pekka

    There are all kind of irregular cycles from short term fluctuations to glacial cycles and as far as I understand none of them is really well understood. Of course many details are known about ENSO type cycles of duration suitable for collecting empirical data, but even their understanding is badly incomplete.

    Yes, this is the point.
    You are being kind by saying “not well understood”. The reality is that they are not understood at all.
    An excellent example is ENSO, where some people claim “partial understanding”.
    Indeed, if you consider an INDIVIDUAL ENSO, the mechanism is trivial: it is just winds and pressures. Consider a delayed oscillator, throw in a bit of Gaussian noise, and you obtain something that looks like ENSO.
    However, it is basically a tautology which “explains” a conundrum by a mystery.
    The parameters (frequency, amplitude, phase and noise) are just fitted.
    Why is the frequency what it is? Of course, it is because the whole oscillation is a kind of quasi-standing wave resulting from the interference of a large number of spatially interacting waves with an infinite number of different frequencies. Nobody has even the beginning of an understanding of what these waves are and what their dynamics is. There is no understanding of ENSO, or of any other “oscillation” for that matter. Numerical simulations are unable to reproduce these quasi-standing waves correctly because they do not solve the true equations of the dynamics; those equations are unknown.
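    A toy version of the delayed oscillator described above; every parameter and the noise amplitude are arbitrary fitted numbers, which is exactly the point being made:

    ```python
    import math
    import random

    # Suarez-Schopf-style delayed oscillator with additive noise:
    #   dT/dt = T - T^3 - alpha * T(t - delay) + noise
    alpha, delay = 0.75, 2.0      # fitted, not derived from any dynamics
    dt, steps = 0.01, 20000
    n_lag = int(delay / dt)
    T = [0.1] * (n_lag + 1)       # constant history for the delay term

    for _ in range(steps):
        drift = T[-1] - T[-1] ** 3 - alpha * T[-1 - n_lag]
        T.append(T[-1] + dt * drift + 0.1 * math.sqrt(dt) * random.gauss(0.0, 1.0))

    print(min(T), max(T))         # irregular, ENSO-like swings emerge
    ```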

    Jim Cripwell

    “The earth’s atmosphere is complex, not to say chaotic. Anyone, like the IPCC and climate modellers, who claim to have captured the physics of how the atmosphere works, are simply wrong. The atmosphere is much too complex, so that simple approximations will always give the wrong answer”

    Am I anywhere near correct?

    Yes, 95% correct. The 5% is that I am not saying they are wrong (implying wrong in everything).
    Much of what they do are tautologies which boil down to statements like “If there is equilibrium, then there is equilibrium”.
    A more sophisticated variant, mostly used by those who pretend that climate is qualitatively different from weather, is “If there is ergodicity, then there is ergodicity.”
    Of course, a tautology being always true, these statements are not wrong.
    You may say that they are useless and explain nothing, but they are not wrong.
    The corollary is that they may randomly get some features right, namely those where the premise of the tautology happens to hold during a given time window too.

    If you want, you can develop your own theory that will not only fit the observations but predict a very different evolution.
    Postulate a mix of 200-year and 400-year oscillations that interfere with the (known) higher-frequency oscillations.
    Fit the phases and amplitudes. Introduce time constants, coupling constants and noise if necessary.
    Predict cooling by the end of the 21st century and things getting “worse than we thought” in the 22nd century.
    The causes are unknown, like the causes of ENSO are unknown.
    If you feel like it, you will find the non-linear mechanisms in deep ocean circulation, very-low-frequency albedo variations and heat storage, but it is not really necessary.
    Of course you will also be compatible with the orthodox GH theory: it’s all inside, and the CO2 is just a “small perturbation”.
    Can you be falsified? Yes, in the 22nd century. And then you will be dead anyway.

    CBA

    Such models are relevant mostly for clear skies and tend to lose meaning in the presence of clouds.

    This is also my conviction. Not only clouds, but any significantly scattering medium. The equations used to approximate radiative transfer in the Pierrehumbert paper are valid only for steady states and no significant scattering.
    Of course one can also introduce scattering and non-steady states, but then we are far from this rather trivial homogeneous-slab model in LTE.
    I have seen somewhere a collection of downwelling spectra from different locations, different times and different conditions, and there is indeed large variability.
    Sure, CO2 always has a band around 15µ, but that is clearly not the alpha and omega of the radiated power.

    • Tomas, Many thanks. Jim Cripwell.

    • Tomas,

      At the moment, I just don’t have the time to spend on Pierrehumbert’s paper to carefully scrutinize his approach. I read through an online book he had a couple of years ago and have vague recollections of the approach, with its distinction between optically thick and thin shells or slabs and the approximations made for each.

      I did not use this approach and I don’t remember the details of it. I chose instead to use an Eddington-type approach. I use over 50 shells to represent the atmosphere and take average T and P values for each.

      I created a program to generate spectral values for each shell. The optical thickness is calculated for each wavelength bin. The program is variable in bin size and in the bandwidth for the range of wavelengths. I have gone as fine as 1/10 or 1/100th of an Angstrom per bin and as wide as 10 nm per bin. I use the standard suggested Hitran approach of Lorentz line widths and so generate the contribution of each spectral line to each bin. The ultra-high resolution has permitted me to compare against telluric spectral lines measured at extremely high resolution.

      The approach attenuates the incoming spectrum by the attenuation of that slab at each wavelength. It also adds in the emission at each wavelength, which is the Planck distribution for that temperature (a blackbody curve) times the attenuation value, which serves as an emissivity factor at each wavelength. This requires LTE. There is no assumption of optically thin or thick slabs, and a slab will vary in optical thickness by wavelength.
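      The per-slab update described here is essentially the Schwarzschild relation; a stripped-down sketch of one wavelength bin through one layer (the function names and sample numbers are illustrative, not cba’s actual code):

      ```python
      import math

      def planck(wavelength_m, temp_k):
          """Planck spectral radiance B(lambda, T), W / (m^2 sr m)."""
          h, c, k = 6.626e-34, 2.998e8, 1.381e-23
          x = h * c / (wavelength_m * k * temp_k)
          return 2.0 * h * c**2 / wavelength_m**5 / math.expm1(x)

      def slab_update(intensity_in, tau, wavelength_m, temp_k):
          """One layer, one spectral bin: attenuate the incident intensity by
          exp(-tau) and add the slab's own emission, emissivity * Planck
          (Kirchhoff's law, valid under LTE)."""
          transmittance = math.exp(-tau)
          return intensity_in * transmittance + (1.0 - transmittance) * planck(wavelength_m, temp_k)

      # e.g. a moderately thick bin at 15 um through a 250 K layer:
      print(slab_update(5.0e6, 0.5, 15e-6, 250.0))
      ```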

      With IR, scattering by molecules is not that big a deal; that is mostly partial to UV and blue colors. If we have scattering of IR, it’s going to be clouds and particulates, and absorption too.

      I’ve no interest in debating what LTE means. Basically, it means that there is one temperature for a microscopic region, shared amongst the various types of molecules present. The alternative is for CO2 to be at one temperature, N2 at another, and H2O vapor at yet another. Having many collisions per second means that the energy is distributed between the various molecule types. A molecule can gain energy by absorbing a photon or through a collision, and it can lose energy by emitting a photon or through a collision. LTE holds for most of the atmosphere.

      I’m afraid I haven’t spent much time looking at atmospherically generated incoming spectra. So many of the lines present in incoming stellar spectra are actually atmospheric absorption, even in the visible.

  61. Judith,
    The lesson I learnt from Pierrehumbert is this statement:
    “…the energy of the photon will almost always be assimilated by collisions into the general energy pool of the matter and establish a new Maxwell–Boltzmann distribution at a slightly higher temperature. That is how radiation heats matter in the LTE limit.”
    Earlier I have read that the absorbed IR was readily emitted in all directions and half of it returned to earth.
    The question arises whether the N2 and O2 molecules gaining the energy can radiate it, or are forced to transfer the heat down to the earth’s surface, thereby warming the atmosphere.
    Finally, today I got to use my accumulated knowledge from the Websphere when I was asked to explain why a car under a roofed carport does not need de-icing on the cold mornings we have in Sweden these days.

    • Gunnar,
      Neither N2 nor O2 can radiate in the infrared. O2 can radiate in microwaves, but that involves very little energy transfer. Thus both contribute only through convection, a little conduction, and by exciting CO2 and other greenhouse gases through collisions.

      There is about as much IR radiation from thermally excited GHG molecules as there would be if a larger share of the radiation came from molecules excited by prior absorption. Thus the amount and distribution of radiation does not change much, but there is very little direct connection at the molecular level between absorption and emission.

      • Isn’t the photon’s energy being counted twice? It can’t both result in local warming and stimulate equal IR emission (since that emission would cool the “local” molecule(s) right back down again.)

      • There is no double counting. The addition to the emission that results from the direct succession of absorption and the subsequent emission without intervening thermalization by collisions is simply forgotten. This results in a minuscule error in the opposite direction, but this error is so small that it has no significance.

        Saying it as deviation from local thermal equilibrium: the vibrational modes of CO2 are at a very slightly higher temperature than translational and rotational modes of various molecules (N2, O2, H2O, CO2, ..) but the difference is insignificant.

      • Pekka Pirilä

        I think you will need to review your understanding of the state of CO2 at atmospheric temperatures, say around 260 K.
        Both Pierrehumbert and Vonk think the vast majority are in the translational (ground) state, and I agree with them on this point.
        After absorption of, say, a 15 um photon, there is a huge jump in the internal energy of the CO2 molecule (threefold).
        Pierrehumbert thinks this is rapidly lost by collision (thermalisation), and I agree with him.
        Brian H is quite correct to say that you must be careful on this next step so as not to count twice.
        If an equal amount of energy is on average radiated away, this means that all the thermalisation has been reversed.
        If that is the case, then for all practical purposes all the radiative effect does is redistribute the radiation, with no thermal effects.

      • Bryan,
        Perhaps you did not understand what I meant by the translational mode. It means the normal motion of the molecules. Being in the translational ground state means no motion, which dominates only very near absolute zero (0 K). At the translational ground state there are no collisions at all. In collisions, energy is typically transferred between different translational and rotational modes, and sometimes also with vibrational modes. These events are the way vibrational excitation is transferred to normal thermal motion.

        When I wrote the last paragraph of my previous message, I knew that many people do not understand it. I thought that they just skip it, but this did not happen with you. Apologies for unnecessary confusion.

      • Pekka Pirilä

        Yes, that makes more sense. As you know, and for the benefit of others, at 260 K the RMS speed of CO2 molecules will be around 450 m/s.
        All internal energy will be in the translational mode, with components in the x, y and z directions.
        When the CO2 molecule absorbs the 15 um photon, the rotational and vibrational modes will be activated.
        What is not clear to me is how the models deal with the issue of thermalisation.
        Do they subtract the average emitted radiation from the average absorbed radiation to obtain the average thermal energy gained, and hence the temperature rise of the volume under consideration?

      • This will likely be covered later in the thread, but it might help to understand that absorbing radiation into a vibration mode does not change the thermalization (temperature) of the gas, just as emission does not cool it. BTW, there is considerable confusion and disagreement on this, probably because constructs called rotational temperature and vibrational temperature have been devised to help analyses and discussions, but they aren’t real temperatures in the “that feels warm” sense.

      • Personally, I do not usually like arguing about the real meaning of a word. Sometimes it is necessary, but mostly not. Within one field of application the meaning may be well defined, but very often the same word is used with a different meaning in other applications. No field of application can forbid differing uses in other fields or in looser discussions.

        Concerning the concept of temperatures of various subsystems, it is often a useful way of describing the energy content of interconnected states which are not interacting as strongly with the other modes of the system. One of my first publications, a really long time ago, was on the behavior of nuclear spins at very low temperatures. The spins of neighboring nuclei had a rather strong interaction with each other but a very weak interaction with the other degrees of freedom of the system. Thus it was useful to discuss the temperature of the spin system separately from the general temperature of the material.

        The vibrational states may be in a similar situation in the uppermost thin part of the atmosphere, where collisions are rare and the molecules loose their excitation mostly through radiation. Here I introduced the concept only because the comments had some connection to Gunnar Strandell’s original question, where he discussed LTE.

      • Your comment is obviously entirely wrong and in error.
        It’s “lose their excitation”, not “loose”.

        ;) ;)

      • Rod B
        That’s a good point: the rotational and vibrational degrees of freedom do not affect temperature.
        However, by collision, energy from these modes can be transferred to the translational modes, which do.
        I agree that there is a lot of debate on this point.
        However, most on both sides of the AGW debate agree with the above.

      • Bryan,
        Yes, that’s exactly how IR radiation warms the atmosphere: CO2 molecules heating mostly N2 and O2 molecules via collision, relaxing the vibration level in the process.

        It is probably a minority view, but there are very learned people who will swear that exciting a vibration mode warms that (ideal) molecule.

      • It matters to you a lot…when considering the overall coupling of energy…whether a billiard ball is spinning or not?

      • One way to visualize how these radiative transfer models work is that the CO2 molecules replace a fraction of background photons with photons emitted at the gas’s own temperature. This considers absorption and emission effects together. They separate the photons into upward and downward ones, where the upwards ones are dominated by absorption, and the downward ones by emission. The net effect of these two streams gives the heating rate in a layer.

      • Jim D
        That is true but can be misleading. At the risk of opening a can of worms prematurely: the photon emitted from a relaxing vibrational energy level is part of a physical process that is totally different from a photon being released because of temperature, à la the Planck function. The vibrational emission is always at (pretty much) the exact same energy regardless of the gas’s temperature; that is not true for Planck-type emission.

      • Can you elaborate? A photon emitted from an excited CO2 molecule should not know how the excitation came about? Temperature does affect the number and energy of the various quantum levels involved in the excitation process, but the process is still quantized. All those quantum transitions are the same as those capable of being mediated by photon absorption, and those released by collision de-excitation will be the same as those released by photon emission.

      • According to the Heisenberg uncertainty principle, the precision of the energy level is inversely proportional to the lifetime of the level. Therefore the energy level is not very precisely defined when the lifetime is short, for whatever reason. The pressure broadening of the emission (and absorption) lines is due to the frequency of collisions, which makes the lifetimes short.

        It does not matter in what way the excitation occurred, unless it also influences the lifetime of the excitation. In a dense gas the lifetime is determined by the collisions. This leads in particular to the Lorentz line shape.
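        For concreteness, the Lorentz profile in question (the half-width used is an illustrative pressure-broadened value near 1 atm):

        ```python
        import math

        def lorentz(nu, nu0, gamma):
            """Normalized Lorentz line shape; gamma is the half-width at
            half-maximum, set by collision (pressure) broadening."""
            return (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)

        # A CO2 line near 667 cm^-1 with an illustrative half-width of 0.07 cm^-1:
        print(lorentz(667.0, 667.0, 0.07))    # peak value
        print(lorentz(667.07, 667.0, 0.07))   # half the peak, one half-width away
        ```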

      • Pekka – As I understand it, it is true there is a small element of uncertainty broadening, but pressure broadening is primarily a consequence of the ability of interactions with neighboring molecules to borrow or lend energy sufficient to eliminate disparities between the energy of an incoming photon and the energy needed by an absorbing molecule for a particular quantum transition.

      • Fred,
        I have not checked this in detail, but I think the two factors are just two ways of expressing the same fact.

        The lifetime is short only when something makes it short, and no interaction can broaden the line without inducing transitions, i.e., shortening the lifetime. This is not necessarily true in solids, where particles are bound to their locations, but it is almost certainly true for gases. In solids the local conditions may vary in a time-independent way, leading to stable differences in the locations of energy levels, but in gases everything is related to collisions, and these unavoidably affect both the line shape and the lifetime through the same interaction.

      • Fred and Pekka,
        Everything I have seen talks of three distinct line-broadening mechanisms: 1) that caused by the energy/time factors of the Uncertainty Principal (all call this academically interesting with no practical effect on the line, and then drop it), 2) Doppler, and 3) Pressure (also called Collision). The pressure broadening does not stem from the uncertainty broadening.

        The relaxation of the 1st vibration level of CO2 emits a photon at 15 um with about 1.4×10^-20 joules. This is true of every CO2 molecule every time (I’m ignoring for discussion the tiny variations coming from broadening). It is correct that a higher ambient temperature will increase the number of molecules likely to be in the excited state, and therefore the number of photons potentially emitted. But all of those photons will have the same energy, as above. Yes, this is the same energy that will transfer in a collision to another molecule’s translational (kinetic) energy, though this transfer is not photonic.

        The average molecular kinetic energy of CO2 at 300 K is about 6×10^-21 J, a little less than half the above (it’s about 4×10^-21 J at the 220 K tropopause). By Boltzmann there are probably a few molecules with kinetic energy around the vibration energy. But if you’ll permit me an idealized hypothetical molecule: if the 1.4×10^-20 J vibration energy were instead emitted because of temperature, the required kinetic energy would put the CO2 molecule at a minimum of about 630 K. And that would be for every molecule making that emission.

        The temperature has no bearing on the absorption/emission of the vibration level, other than indirectly determining, via the Boltzmann factor, the likelihood of a molecule naturally being in the excited state to begin with. On the other hand, temperature is the only parameter in Planck-type emission, and that emission is theoretically (and generally) very broadband, not single-frequency.

        As I understand it…
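        Rod’s numbers can be checked directly; a quick sketch with standard constants (the Boltzmann factor at the end connects to the excited-state share discussed below):

        ```python
        import math

        h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

        # Energy of a 15 um photon (the CO2 bending-mode transition):
        e_photon = h * c / 15e-6
        print(e_photon)              # ~1.3e-20 J (Rod quotes ~1.4e-20 J)

        # Mean translational kinetic energy, (3/2) k T, at 300 K:
        print(1.5 * k * 300)         # ~6.2e-21 J, roughly half the photon energy

        # Boltzmann factor for the bending-mode excitation at 260 K:
        print(math.exp(-e_photon / (k * 260)))   # ~0.025 per state; with the
        # mode's double degeneracy, a few percent of CO2 is excited
        ```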

      • Rod B,

        I think you just answered some questions I probably wasn’t even asking correctly in other venues. Thank you.

      • Rod,
        I agree fully that the uncertainty principle is not the right way of calculating the actual line width, even though it is closely related and gives a value not very far from the real one.

        Two- and three-atomic molecules (N2 and CO2) are massive and complicated enough to allow a more detailed analysis of what happens in a collision than just applying a coupling between two states and saying that that is all we can tell about the occurrence. If this approach of determining just one coupling constant gave a complete description, then the relationship between the lifetime and the line width would be exact. The full theoretical calculation would involve calculating, with quantum mechanics, the outcome of all possible collisions (distance between the trajectories of the molecules, directions of their axes, their rotational states, their relative speed, etc.). After this is done for a representative set of situations, the average can be calculated. It is clear that the result differs from the limit given by the uncertainty principle (again we have a limit, which is obtained only in specific cases, but is a limit for all). Intuitively, I thought that the relationship would be reasonably close to the limit, and checking against the figures I could find, this appears really to be the case. “Reasonably close” allows in this case a factor of two or somewhat more, but not a factor of ten.

        The uncertainty principle is not a mechanism; it is a lower limit that all real mechanisms must obey. For simple enough quantum-mechanical effects the real value is close to the lower limit; for more complicated ones it is further away. Here we are in the intermediate region, where the limit is not far off, but still somewhat below the real value.

        The uncertainty principle also tells us that it takes time for a molecule to settle into a well-defined state of a specific energy level. It is really a property of the state that its energy is not more accurately defined than the uncertainty principle allows within the time available for the state to persist. If the lifetime of the excitation is short, then the energy of the state itself is not precisely known. There are no infinitely sharp energy levels that are not fully stable (and we cannot observe anything that is fully stable even under observation).

        If I use the concept of temperature in connection with energy levels, I use it only in the sense that the relative occupations of the energy levels are proportional to the factor exp(-E/(kT)), where k is the Boltzmann constant. When several states are involved and some of the transitions are forbidden, there may be exceptions to this rule. This often affects levels well above thermal energies, but does not influence situations where thermal excitation through collisions dominates.

        In the atmosphere, a couple of percent of CO2 molecules are always in a vibrationally excited state corresponding to 15 um radiation. It is a matter of taste whether a few percent is considered a high or low value for such a share. I consider it a large share, because much lower values are typical for many important excited states in all kinds of applications. Still, it leaves the share of the vibrational ground state close to 100% (5% may be large, but 95% is still close to 100%).
      • It’s “uncertainty principle” of course… — not the head of a goofy-named school!

      • Pekka,
        Your point that collisional interactions are greatly more complex than the basics describe is well taken. I felt it would be way too much to describe the whole elephant when the basic point can credibly be made. (Plus it goes beyond my smarts…) Nonetheless, I agree the details are important, yet not completely understood. This, IMHO, adds some degree of uncertainty to the whole analysis of radiation transfer.

        I also agree with your description of the uncertainty principle in this situation, though I don't agree with your magnitudes (or I could easily be misreading what you're saying). As a rule of thumb the uncertainty principle demands that the energy width of the 667 cm^-1 line be greater than about 10^-11 cm^-1, take or leave a magnitude or two. (Sorry to change units to wavenumbers on you; it's all I've got here.) It's no great shakes to exceed that (probably even immeasurable) threshold. A representative Doppler broadening is roughly 0.02 cm^-1, a factor of maybe a billion larger.

      • Rod – is it possible you’re confusing what will appear as a continuous emission spectrum from a hypothetical black body with the emission spectrum of CO2? Both vary in terms of total and peak energy as a function of temperature, but the black body spectrum appears continuous because it represents the contribution of a theoretically unlimited number of different absorbers/emitters, each with their own spectral distributions. They are all quantized, but it is the heterogeneity that gives the appearance of continuity.

        For CO2, the number of emission lines is very large, but still limited, because each represents a different transition or combination of transitions that CO2 can undergo. If magnified, the spectrum will in fact be seen to consist of hundreds of individual lines corresponding to these quanta. Each could, in theory, emanate from molecules that were either thermally excited or excited by photon absorption. The emitted photons would not know which phenomenon led to their existence.

        (Note that I’m referring to vibrational/rotational transitions, which exhibit energies in the IR. Electronic and other modes are of little relevance to the climate influencing properties of CO2 at atmospheric temperatures, although they are also quantized.)

      • I return to what we can conclude from the uncertainty principle.

        At the earth's surface the mean free path in air has been determined as 64 nm. The average speed of N2 molecules is 500 m/s. Thus the typical time between collisions is 0.13 ns. The CO2 molecule is larger and will therefore be hit more frequently by N2 (or O2) molecules. It is likely that most collisions lead to de-excitation of a molecule in a vibrationally excited state. Thus the lifetime is likely to be close to 0.1 ns. According to the uncertainty principle this corresponds to an energy uncertainty of 5.3*10^-25 J, which corresponds to a 0.0265 cm^-1 half width of the line, close to one half of the measured line width. I would not expect anything better (or even this good) from such a simple calculation based on incomplete data.
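        For anyone who wants to reproduce those numbers, a minimal Python sketch (using the 0.1 ns lifetime adopted above):

        ```python
        hbar = 1.055e-34   # reduced Planck constant, J s
        h = 6.626e-34      # Planck constant, J s
        c_cm = 2.998e10    # speed of light, cm/s

        mfp = 64e-9        # mean free path at the surface, m (value quoted above)
        v = 500.0          # average N2 speed, m/s

        t_coll = mfp / v   # typical time between collisions, ~1.3e-10 s
        tau = 1.0e-10      # adopted excited-state lifetime, s (CO2 is hit somewhat more often)

        dE = hbar / (2.0 * tau)   # uncertainty-principle energy width, J (~5.3e-25)
        width = dE / (h * c_cm)   # the same width in wavenumbers, cm^-1 (~0.0265)

        print(f"time between collisions ~ {t_coll:.2e} s")
        print(f"energy uncertainty      ~ {dE:.2e} J")
        print(f"line half width         ~ {width:.4f} cm^-1")
        ```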

      • Rod – There is no distinction between what you refer to as Planck-type emission and emission from a molecule excited by photon absorption. Collisional excitation, which is dependent on temperature, creates the same quantum transitions in absorbed and emitted energy as photon excitation. The temperature-dependent kinetic energy of molecules capable of activating these transitions is not quantized, but the transitions themselves are. Note that the distribution of energy states upon which an absorbed photon adds a further quantum is dependent on temperature, via prior temperature-dependent quantum transitions created in the absorbing molecule.

      • Fred,
        Yes and no! ;-) One has to be clear on what process is being discussed. A colliding molecule can transfer its translational, vibrational, or rotational energy to another molecule's translational, vibrational, or rotational energy. If vibration or rotation is involved on either side, the energy transferred is quantized per the quantum levels of vibration and rotation. Translational (kinetic) energy is not quantized. But in any event a collisional transfer is not photonic: there are no electromagnetic waves/pulses emitted or absorbed. (A vibration-to-vibration transfer is possible which does involve a photon, but that is not a collision.)

        By far the most common general transfer is translation to translation. With radiation transfer, the typical vibration/rotation-to-translation collisional transfer is of most interest. CO2 absorbs a photon (at a discrete energy) into its vibration, then bumps an N2 and transfers that vibration energy to the N2's translational energy, warming the N2. There is no photon involved in the second transfer.

        None of the above are Planck-function "transfers", which are always photonic, are broadband and not discrete, and are not quantized (in the quantum-mechanics sense). The amount of EM energy emitted via the Planck function is directly related to the temperature of the emitter (and also directly affected by certain physical properties of the emitter, which can vary with wavelength). The discrete amount of energy emitted by relaxation of a vibration level (photonic OR collisional) has a very tenuous connection with the temperature of the relaxer. Higher temperatures will distinctly and always increase Planck emissions, while higher temperatures might have a tendency to reduce vibrational relaxation via photon emission, because higher temperature will cause a vibration level to be more likely filled and not relaxed. Generation of Planck-function photons is seldom a result of vibration or rotation relaxation; generation of greenhouse-gas-type photons is always a result of such relaxations.

        A big source of confusion, IMO, is the use of Planck-function equations to analyze GHG radiation transfer. While they are not the same physically (and here I have some serious concerns, but that's another subject), the mathematics and equations seem to do a credible job and, as long as one selects the correct parameters and coefficients, match observations fairly well. So Planck functions and their subsidiary laws (Kirchhoff, Beer's, etc.) seem useful for analyzing GHG radiative transfer, even though (I'm becoming a broken record…) the processes are dissimilar.

      • Rod B,
        You made many statements that I either do not understand or do not agree with.

        None of the above are Planck-function "transfers", which are always photonic, are broadband and not discrete, and are not quantized (in the quantum-mechanics sense).

        All radiation is quantized. The spectrum is continuous when the source can do quantum transitions at any energy, not only between some discrete levels. Solid surfaces and water are examples of that while gas molecules have line spectra.

        Higher temperatures will distinctly and always increase Planck emissions, while higher temperatures might have a tendency to reduce vibrational relaxation via photon emission, because higher temperature will cause a vibration level to be more likely filled and not relaxed.

        The higher temperature does not prevent the emission from a vibration level. If the number of molecules in a particular level increases, the related emission will also increase.

        Generation of Planck-function photons is seldom a result of vibration or rotation relaxation; generation of greenhouse-gas-type photons is always a result of such relaxations.

        I do not understand what you want to say with the above sentence.

        A big source of confusion, IMO, is the use of Planck-function equations to analyze GHG radiation transfer. While they are not the same physically

        What do you mean by the claim that they are not the same physically?

        My impression is that there is some confusion in your ideas.

      • Pekka Pirilä,
        Everything is quantized per Heisenberg, but things like vibration, rotation, and electron levels are quantized in a non-trivial fashion. Translational energy (of a molecule or an airplane) is also quantized, but in an uninteresting way, since the level differences are infinitesimally small and have virtually no effect on analyses.

        On average a bunch of CO2 molecules will naturally have a small percentage with excited vibration, statistically based on ambient temperature: the higher the temperature, the more will likely be excited. So I’m just saying if the temperature increases, a larger percentage will be excited which indicates less natural relaxation. Admittedly it's all pretty loose a la quantum statistics, and nothing prohibits relaxation with emission, as you say. My point was to make a distinction with the Planck function, which will clearly and greatly increase its emitted radiation at higher temperatures; internal-energy relaxation, not so much.

        Basically, Planck-function radiation comes from charge acceleration. Changing internal energy levels within vibration and rotation involves very little charge acceleration, often none. (Changing electronic levels does cause charge acceleration and so Planck-type radiation, though this is discrete radiation, different from the more usual broadband radiation.)

        The above generally describes the physical difference between these two types of radiation. (I should say radiation source; once it is radiation, a photon, they are all exactly the same.) However, with the proper assumptions and coefficients, "GHG" radiation can be made to fit the mathematics of Planck-type radiation very closely, and that is very useful even if they are not exactly the same.

      • Rod B,
        The second part of this statement is false:

        So I’m just saying if the temperature increases, a larger percentage will be excited which indicates less natural relaxation.

        With increasing temperature the rate of excitation will increase, and so will the share of molecules in the excited state. That leads to an increase in emission by the same factor as the increase in the number of molecules in the excited state. The ratio of molecules in the vibrationally excited state (15 um line) to those in the vibrational ground state is proportional to exp(-E/(kT)), where E is the energy of excitation, k the Boltzmann constant, and T the temperature. This function tells how the emission rate increases with temperature.

        The Planck law is actually very closely linked to the proportionality that I gave in the previous paragraph. The intensity of radiation at a fixed wavelength (or energy, or wavenumber) increases with temperature in the same way for a gas as for a black body or a solid surface. The same exponential dependence on temperature is seen in Planck's law. The difference is that for a black body the increase happens at all wavelengths, and in relative terms faster at short wavelengths. Therefore the peak of the distribution moves, and the total emitted energy increases more rapidly than the intensity at any fixed wavelength.

        There is much less difference between the radiation from gases and from solids than you seem to think.

      • I still disagree, but not quite as much after reviewing your last comment. It is true that both Planck-type radiation and the degree of excited molecular states are proportional to the same factor: e^(-E/kT). But in Planck this factor leads directly to radiation intensity in watts/m2 (though only in a small frequency interval of the source). In the other, it leads only to the portion of molecules in an excited state; and then there is one further step to estimate the degree of radiation emanating from the excited molecules, which I presume gets into Einstein coefficients. So Planck radiation in total is proportional to T^4, while relaxation radiation is proportional to e^(-E/kT), further lessened per the Einstein coefficients.

        On the other hand, there may not be such a massive difference as one (like me) initially thinks – as you say. First, we're comparing apples with oranges in some respects. If one adds a pile of CO2 molecules, more will be excited and there will be more radiation. But if you just add molecules to a body, Planck radiation won't increase at all. One depends on temperature; the other depends on the quantity of excited molecules (which is affected by temperature). Also, if the temperature increases (at least in the example I did, 300 K to 350 K), the increase in the ratio of excited molecules is very close to the increase in total Planck radiation. However, I don't know if this has any real meaning or is just cutesy numerology (in one case the number of excited molecules jumps about 2 percentage points; in the other, total radiation goes up over 80%, roughly 400 watts/m2).

        Nonetheless, Planck radiation is generated differently (in most cases) than relaxation radiation, and so (still) they are not the same. Though, as I said before, Planck mathematics can fit relaxation radiation pretty well and is useful for analyses. How accurate that usefulness is, I wonder about but don't know. The process of CO2 radiation absorption and emission is different from Planck radiation absorption and emission. The greenhouse effect stems from the former. Yet it is the latter that is used to explain atmospheric warming, with multiple flat layers (slabs) of atmosphere and Planck radiation of the sigma(T^4) variety between the slabs. That may prove (or has proved?) to be reasonable, but is it robust? Unassailable?
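        That common e^(-E/kT) factor is easy to check numerically. A small Python sketch (an independent check with illustrative values, not taken from the thread) comparing, between 300 K and 350 K, the excited-population ratio, the Planck intensity ratio at a fixed 667 cm^-1, and the total sigma*T^4 ratio:

        ```python
        import math

        h = 6.626e-34   # Planck constant, J s
        c = 2.998e8     # speed of light, m/s
        k = 1.381e-23   # Boltzmann constant, J/K

        E = h * c * 66700.0   # energy of the 667 cm^-1 (15 um) transition, J
        T1, T2 = 300.0, 350.0

        boltz = math.exp(-E / (k * T2)) / math.exp(-E / (k * T1))               # excited-population ratio
        line = (math.exp(E / (k * T1)) - 1.0) / (math.exp(E / (k * T2)) - 1.0)  # B(nu,T2)/B(nu,T1) at fixed nu
        total = (T2 / T1) ** 4                                                  # ratio of total sigma*T^4 output

        print(f"excited-population ratio     : {boltz:.2f}")   # ~1.58
        print(f"Planck ratio at fixed 667/cm : {line:.2f}")    # ~1.62
        print(f"total sigma*T^4 ratio        : {total:.2f}")   # ~1.85
        ```

        The line intensity tracks the Boltzmann factor almost exactly, while the total blackbody output grows faster because the whole spectrum also shifts.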

      • Rod B,
        The radiation following the Planck law increases with temperature for precisely the same reason that makes the emission from a gas increase with temperature. The reason is an increase in the occupation of the states that can emit at a particular wavelength.

        This is usually not discussed explicitly when presenting the Planck law, but that is really the reason.

        The only difference is that in gases only a discrete set of excited states is available for excitation, while for a solid or liquid that emits with a continuous spectrum the number of possible excitations is so large that the resulting spectrum is continuous and typically close to the Planck law.

        The black body is an idealization that can be approximated by a cavity with a very small opening (pinhole) compared to its size. The geometry of such a cavity leads, through multiple reflections, to the result that any non-zero emissivity of the surfaces will be upped to an apparent emissivity of 1.0 for the whole.

      • Pekka, I agree with what you say, though one can easily skip over the significant "nuances" between the different radiations if not careful. Even though, as you say, "…Planck law increases with temperature for precisely the same reason that makes the emission from a gas increase with temperature…", the radiations are different, and that difference is relevant to climatology; yet there are many who ignore or deny the differences.

        The other interesting can of worms (which I said earlier I do not want to bring up) is whether a gas does or does not radiate a la Planck. But pretend I didn't mention it!

        BTW, I’m still trying to understand where and how one posts comments. I trust this is going in the logical/correct slot….

      • Pekka,

        What I am getting from this thread and the previous one is that it is believed that CO2 absorbing IR does not cause much emission; rather, the energy is conducted away through collisions. CO2 colliding with other air constituents DOES cause emission.

        As we are told how powerful backradiation is, does this account for the back radiation correctly??

        Depending on air temperature and ground emissions, doesn’t this only happen during heating periods?

        In other words, is there a coherent explanation of what happens in the morning going from a cold period of no SW to the afternoon where temps peak from SW absorption and back to the evening with no SW and everything cooling?

      • This may help with part of your question (but it's a very, very rough calculation). The mean free path in the atmosphere at 1 bar is ca. 70 nm. The mean Boltzmann velocity is ca. 500 m s-1 (I'm just taking very rough values). These imply a mean lifetime between collisions of about 0.1-0.2 nanosecs (which obviously increases with decreasing P as the free path increases). The fluorescence time scale is typically of order 1-10 nanosecs. So one way to think about this is that collisions happen "faster" than photon emission, i.e., the emission time scale is slower than the thermalization time scale. This will be especially true at higher P, but at low enough P emission will dominate (as Pekka hints above). If the ratio of emission to collision time scales is, say, 1/20, then 5% of the absorbed energy is re-emitted a la Kirchhoff, and 95% goes to collisional energy (i.e. heating). I haven't had time to work out the numbers precisely, so this is just a quickie hypothetical, but it should be in the ballpark.

      • My thinking was OK, but my recollected fluorescence time scale was apparently way too short for CO2 (the 10 nanosec value I gave before is more appropriate to some UV transitions). Ray gave the relevant numbers in his article for a 0.1 atm case: tau (lifetime) of the excited state O(1e-2 sec), tau collision O(1e-7 sec). So the thermalization/emission ratio ought to be roughly 1e5. Clearly thermalization dominates in all but the "thinnest" part of the atmosphere.
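        In branching-ratio form, a short Python sketch with those order-of-magnitude lifetimes:

        ```python
        # Competition between radiative decay and collisional quenching at ~0.1 atm,
        # using the order-of-magnitude lifetimes quoted above.
        tau_rad = 1.0e-2    # spontaneous-emission lifetime of the excited state, s
        tau_coll = 1.0e-7   # mean time to a de-exciting collision, s

        # Probability that an excited molecule radiates before it is quenched:
        p_emit = (1.0 / tau_rad) / (1.0 / tau_rad + 1.0 / tau_coll)
        print(f"fraction re-emitted ~ {p_emit:.1e}")   # ~1e-5: thermalization dominates
        ```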

      • “Isn’t the photon’s energy being counted twice? It can’t both result in local warming and stimulate equal IR emission (since that emission would cool the “local” molecule(s) right back down again.)”

        Brian H (and also Bryan) –

        In a steady state, a layer of atmosphere absorbing IR photons, thermalizing the energy, and then experiencing photon emissions as a result will maintain a steady temperature – it won’t heat, because energy gain is balanced by energy loss. The important question is what happens when the IR photon input rises, so that more photons are absorbed (e.g., because there is more CO2 in an adjacent layer). Photon emission rates are a function of temperature, because almost all molecules emitting photons do so from thermal excitation rather than as a result of a photon they have just absorbed. If the temperature were to remain constant, the extra energy absorbed would not be matched by an increase in photon energy released. For a new steady state to be established, the temperature must rise until the new photon emission rate once more matches the absorption rate.
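        A toy numerical version of that argument (a minimal sketch with a made-up gray emissivity and made-up fluxes, not a model of any real atmospheric layer):

        ```python
        sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
        eps = 0.8         # assumed gray emissivity of the layer

        def steady_T(absorbed):
            # In steady state the layer radiates from its two faces what it absorbs:
            # absorbed = 2 * eps * sigma * T^4, so solve for T.
            return (absorbed / (2.0 * eps * sigma)) ** 0.25

        print(f"{steady_T(300.0):.1f} K")   # baseline absorbed flux of 300 W/m^2 -> ~240 K
        print(f"{steady_T(310.0):.1f} K")   # larger absorbed flux -> a ~2 K warmer steady state
        ```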

      • Fred Moolten
        Do the models explicitly subtract the average emitted radiation from the average absorbed radiation to obtain the average thermal energy gained, and hence the temperature rise, of the volume under consideration?
        In addition, the models would have to account, in the same volume, for phase-change effects, convection, and diffusion (conduction).
        Turbulent conditions are very likely, as is surface inhomogeneity.
        Add in the Earth's spin (day/night), thunder and lightning, and volcanic activity.
        They are to be congratulated for even attempting what seems an impossible task.
        However, a little modesty about the predictive value of the models would be appropriate until a track record of some success is evident.

      • “must rise”. Well, yes, arithmetically speaking. But that rise is not free. Until your new suitable emission temperature is reached, all the energy for that must come from the incoming radiation. So, assuming an artificial “step function”, there would be a pause in all emissions until that temp is reached. So the “cost” of getting there must be accommodated.

        Double counting must be scrupulously avoided! At all “costs”!

        ;)

  62. Hi Judy – I have published a comment on my weblog

    Comment On Raymond T. Pierrehumbert’s Article In Physics Today Titled “Infrared Radiation And Planetary Temperature”
    http://pielkeclimatesci.wordpress.com/2011/01/21/comment-on-raymond-t-pierrehumberts-article-in-physics-today-titled-infrared-radiation-and-planetary-temperature/

    with the text

    Judy Curry comments on the Raymond T. Pierrehumbert Article In Physics Today Titled “Infrared Radiation And Planetary Temperature” in her post

    Pierrehumbert on infrared radiation and planetary temperatures [from Climate Clash]

    I agree with Judy that this is a very informative and valuable article on the role of CO2 and other greenhouse gases in the Earth's (and other planets') atmospheres. However, there is, in my view, one major error in the article.

    Ray concludes that

    “… increasing CO2 would warm the atmosphere and ultimately result in greater water-vapor content—a now well-understood situation known as water-vapor feedback.”

    This significantly overstates our understanding of the water vapor feedback on Earth since phase changes of water are intimately involved. In a world without these feedbacks, but in which evaporation from the surface increases if surface temperature increases from the added CO2, his conclusion would be correct.

    However, as summarized by Graeme Stephens in my post

    Major Issues With The Realism Of The IPCC Models Reported By Graeme Stephens Of Colorado State University

    where he wrote, for example,

    “Models contain grave biases in low cloud radiative properties that bring into question the fidelity of feedbacks in models.”

    “The presence of drizzle in low clouds is ubiquitous and significant enough to influence the radiative properties of these clouds and must play some role in any feedbacks.”

    "….our models have little or no ability to make credible projections about the changing character of rain…"

    major uncertainties remain.

    The water vapor feedback in the real climate system remains incompletely understood.

    • thanks roger, i totally agree that the water vapor feedback is incompletely understood (not to mention cloud feedbacks, etc.)

      • Love those adjectives.
        The total spherical energy output of the sun is incompletely absorbed by the Earth.

        Does “incompletely” cover the territory, “very badly”, “hardly at all”, “minimally”, etc.?

    • roger, the link to stephens presentation is broken, do you know if this is still available somewhere? thx

    • Is there really a difference in opinion, or only two different ways of understanding the words "water vapor feedback"? Pierrehumbert may have interpreted them to refer only to the feedback from water in the gaseous state, whereas Pielke includes also the connection to clouds. For the first interpretation the statement that it is well understood is justified; for the latter it is not.

    • Roger,
      Yes, but it appears there is more wrong with the paper than that. Everyone knows the CERES data shows too much warming, so in 2009 Trenberth rejected the directly measured data in favor of his guess of 0.9 W/m^2. It appears Raymond has assumed Trenberth's guess is correct, which appears to me to be a fine example of circular reasoning. Why not 0.3 W or 0.1 W?

  63. Hi Judy – That link no longer works for some reason. I found his powerpoint slides, however, at http://gewex.org/2009Conf_gewex_oral_presentations/Stephens_G11.pdf
    I will update on my weblog also. Thanks!

    Roger

  64. Ray Pierrehumbert’s article on infrared radiation and planetary temperature provides a useful and informative perspective on the nature of thermal radiation, and how this relates to the radiative transfer of thermal radiation and the greenhouse effect that is a common characteristic of terrestrial-type planetary atmospheres illuminated by solar radiation.

    Raypierre describes some of the basic fundamentals that are important to the radiative transfer of thermal radiation. He notes that (1) the coupled vibrational and rotational states of CO2 have very long lifetimes compared to the time between collisions with other molecules; (2) molecular collisions establish and maintain the local thermodynamic equilibrium distribution (and population) of the vibrational-rotational states from which spontaneous photon emission and photon absorption transitions arise; (3) detailed balancing of energy transfer transitions under LTE conditions, as described by Kirchhoff's Law, requires that Planck-function-limited thermal emission balance the absorption of thermal radiation at all wavelengths.

    Naturally, because of space limitations, details of the radiative transfer formulation and the radiative structure of the greenhouse effect are necessarily sketchy. For those interested in the details, there is the 500+ page book by Pierrehumbert, as well as a great many other books and articles on radiative transfer and the greenhouse effect.

    There are now well over 250 comments on this thread, some perhaps in response to the question raised by Judy whether anyone has learned something, or changed their mind as a result of the discussion here. A glance at the “same old comments” makes me doubt that anyone has actually learned anything new – but learning is a personal experience best left for those to speak for themselves.

    I have, however, been particularly impressed by the comments that have been put forth by Fred Moolten (and on earlier threads by Chris Colose). Fred, if I am not mistaken, is a semi-retired medical doctor who only recently has taken an interest in understanding the nature of global climate change, and Chris is a soon-to-be graduate student. Both have demonstrated excellent understanding of the basic facts, physics, and issues that define the global climate change problem that we face. I cannot recall any explanation that they have given that is at variance with our current best understanding of the facts as we know them. Would that all those who work as climate scientists had as clear and accurate a grasp of the basic workings of the climate system, relevant measurements, modeling analyses, etc. as Fred and Chris.

    All this is very encouraging since it means that understanding global climate change is not limited to climate science experts who have been studying the problem for decades. Anyone who has the interest, and is willing and able to spend the time and effort to read and research the literature can come away with a good understanding of how the climate system works, what is driving climate change, including also an appreciation of the complexity of the climate system, and limitations of available observational data, that temper the conclusions that can be drawn.

    Fred has been very patient in providing informative and well thought out answers to a great many questions that have been posed here. I believe that Fred has stated as much, that trying to explain a problem to someone less knowledgeable is the best way to learn. In that I am in full agreement.

    Thirty-five years ago I had no clue at all as to what thermal radiation is about. I had just finished implementing a solar radiation model into the early version of the GISS GCM, and was asked to do the same for thermal radiation. As you well know, computers are totally clueless (but fortunately computers don't have the arrogant ignorance that is sometimes exhibited by some of the commentators here), so every addition, multiplication, subtraction, and division needed to describe the physical problem has to be painstakingly laid out step by step by step.

    I was then asking, and having to find answers to, many of the same questions that are being asked here. What is an absorption coefficient? Optical depth? And why does it have to depend on pressure, temperature, and absorber amount? What happens if there is overlapping absorption, like between water vapor and CO2? Do we need to worry about scattering by clouds? Is the spectral variation of absorption coefficients important? If averaging of absorption coefficients is bad, what other options are there? Why is the Planck function required to multiply emission, but not absorption? Is there a "right" answer, and how would we know it if we saw it?

    It turned out that in the process of explaining thermal radiation in sufficient detail for the GISS computer to understand it, all of these questions became adequately answered. As outlined by Raypierre, invoking Kirchhoff’s Law under LTE conditions, we find that thermal emissivity must be equal to thermal absorptivity. Radiation emerging through a pinhole from an isothermal cavity of temperature T must be Planck radiation B(v,T). If an atmospheric slab of temperature T and optical depth TAUv is inserted in the cavity just beyond the pinhole, the emerging radiation from the pinhole according to Kirchhoff’s Law must still be Planck radiation, which can also be described as consisting of two components: the transmitted radiation, B(v,T) exp(-TAUv); and the emitted component, B(v,T) [ 1 – exp(-TAUv)]; the sum of which is equal to B(v,T).

    Thus, each layer of the atmosphere will be characterized by its transmission, exp(-TAUv,n); its absorptivity, [ 1 - exp(-TAUv,n)]; and its thermal emission, B(v,Tn) [ 1 - exp(-TAUv,n)]. Radiative transfer starts with Planck radiation B(v,Tg) being emitted by the ground. The outgoing flux at the top of the first layer will then be the sum: F1top = B(v,Tg) exp(-TAUv,1) + B(v,T1) [ 1 - exp(-TAUv,1)]. The second layer is then added on to obtain F2top = F1top exp(-TAUv,2) + B(v,T2) [ 1 - exp(-TAUv,2)], and so on to the top of the atmosphere.

    The above holds for monochromatic radiation. It involves nothing more complicated than exponential extinction (Beer’s Law absorption), specifying the temperature, absorber amount, and absorption coefficient in each layer of the atmosphere, then going through the stack of atmospheric layers and summing up the products of the radiation transmitted through each layer and the radiation emitted by each layer. A tedious task to do by hand, but a rather simple task for the computer.
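    As a concrete illustration, here is a minimal Python version of that monochromatic layer loop (the temperatures and optical depths are made-up values for illustration, not GISS inputs):

    ```python
    import math

    h = 6.626e-34   # Planck constant, J s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann constant, J/K

    def planck(nu, T):
        # Planck function B(nu, T) per unit frequency, W m^-2 sr^-1 Hz^-1
        return 2.0 * h * nu**3 / c**2 / (math.exp(h * nu / (k * T)) - 1.0)

    nu = 2.0e13                   # a frequency near the 15 um band, Hz
    Tg = 288.0                    # ground temperature, K
    layers = [(280.0, 0.5),       # (T_n, TAUv_n) for each layer, bottom to top
              (260.0, 0.5),
              (240.0, 0.5)]

    F = planck(nu, Tg)            # radiation leaving the ground
    for T_n, tau_n in layers:
        trans = math.exp(-tau_n)  # layer transmission
        # transmitted part, plus the layer's own thermal emission:
        F = F * trans + planck(nu, T_n) * (1.0 - trans)

    print(f"radiance at top of the stack: {F:.3e}")
    ```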

    The complexity arises when we need to apply the above set of calculations to the entire spectrum. In line-by-line modeling, several million monochromatic calculations need to be performed. This is far too computation intensive for GCM applications. For climate GCM applications, we can regroup the brute force spectral calculations in terms of correlated k-distributions that only require a few dozen pseudo-spectral calculations to achieve nearly the same accuracy as the line-by-line calculations.

    • A Lacis: Did you ever get around to answering Willis E.’s shot across your bow about Pinatubo?

      I appreciate Fred M. too, but frankly I thought you were a classic internet blowhard on the climate change side because of how poorly you handled discussion beyond technical issues — though you had no compunction about expressing yourself at that level.

      I have to ask, since you ask it of the other participants here, “Have you learned anything new?”

    • Andy,

      I agree with your points. I've learned a lot from Fred, Chris, you, and Roger, as well as from the posts of Judith and Peter. Even though your viewpoints on climate dynamics may be totally different, debate following the spirit of real science is very helpful to the new generation of scientists and also to the general public. Given the complexity of the climate system, it is natural for scientists to debate with each other, but the history of the development of modern meteorology, as shown in the books [ The Atmosphere a Challenge: The Science of Jule Gregory Charney, http://www.amazon.com/Atmosphere-Challenge-Science-Gregory-Charney/dp/1878220039/ref=sr_1_5?s=books&ie=UTF8&qid=1295669247&sr=1-5 ; Meteorology at the Millennium http://www.amazon.com/Meteorology-Millennium-83-International-Geophysics/dp/0125480350/ref=sr_1_1?ie=UTF8&s=books&qid=1295669337&sr=1-1 ], can tell us how climate scientists have been facing the challenge, and I hope readers might be interested in understanding climate change dynamics from a broader perspective by reading these and other books.

    • Dr. Lacis,
      Because I published a quarterly newsletter for medical doctors, I have had to read scores of scientific papers in a field different from climate science. I can assure you I never see in the journals in my field the kind of arrogance displayed by Raymond Pierrehumbert. He titled his paper "Infrared radiation and planetary temperature." Wrong. The paper is about infrared (and I assume visible) radiation and inferences about planetary temperature. The arrogance displayed in the title alone is enough to be off-putting to any careful scientist.

      • It is a review paper. It reviews the state of the science and presents nothing new. The title is fully appropriate for a review paper.

  65. A Lacis:

    “All this is very encouraging since it means that understanding global climate change is not limited to climate science experts who have been studying the problem for decades. Anyone who has the interest, and is willing and able to spend the time and effort to read and research the literature can come away with a good understanding of how the climate system works, what is driving climate change, including also an appreciation of the complexity of the climate system, and limitations of available observational data, that temper the conclusions that can be drawn.”

    Puhleeeeeze, dear Dr., stop this patronizing, elitist, ivory-tower, know-it-all, arrogant obfuscation, and deal with the QUESTION OF WHY THE HYPOTHESIS DOESN'T MATCH REALITY. We know what you and other CAGW zealots believe; we know the radiative physics. What we DON'T know is why all this nice physics is not explaining anything that is happening with the temperature. Could it be that there is something beyond "radiative physics" that influences the temperature/climate?

    • Paul Middents

      Why would a climate scientist not want to engage with statements like these?

      To Dr. Lacis: Thanks for recognizing the contributions of Fred Moolten and Chris Colose to these discussions. I would add Pekka Pirilä to this list. He is infinitely patient, knowledgeable and polite. Recognizing that he is not working in his first language makes his effort even more impressive to me.

      I have also greatly enjoyed Vaughan Pratt's not always patient or polite, but always provocative and entertaining, contributions.

      • I agree.

        And there are others, including Judy, who, through their comments and perspective, add real value to this blog.

  66. JAE, local ground-level temperatures are mostly set by local insolation, cloud cover, albedo, soil moisture, ET, lapse rate (partly a function of local relative humidity), and the extent of local convection (which can depend on the above, plus local differences in land surface and cover that set up local convection cells). A classic very simple case is the "sea breeze" problem, given in many elementary texts. As the local land heats up, it sets up a convection cell that draws in cool air off the ocean, lowering adjacent land T. To a rough sort of approximation the Indian monsoon works the same way. The greenhouse effect of CO2 and/or H2O doesn't set local temperatures. You're simply looking at the wrong mechanism. Of course there is something beyond radiative physics that sets local temperatures – just ask your local meteorologist. And one of the above posters foamed at the mouth over Ray's lack of discussion of atmospheric convection. Don't worry, Ray understands that, and its importance, very well.

  67. Dr. Curry,
    I don’t know if you have noticed, but the discussion above between Fred, Pekka and cba this morning is very interesting.

    At January 22, 2011 at 3:45 am Pekka quotes from a 2009 paper from Trenberth and co-authors:

    “As the value 0.9 was considered to be much more accurate than the direct analysis of the CERES data they made adjustments to the analysis in order to reach consistency. ”

    And since it appears Raymond has used this figure in his 2010 paper rather than the direct CERES measurements, isn't Raymond's paper built entirely on circular reasoning?

    • thx, i’ll take a look in detail this aft

    • Ron,
      These papers and presentations involve many separate issues. Some of the papers try to determine how much warming is taking place, but neither Pierrehumbert nor the various papers by Trenberth and Fasullo (in some papers also Kiehl) are aiming at that. They try only to explain what is going on and to present some quantitative numbers for it. They pick the number from the approach that they think gives the best estimate for that number, without any implication that it contains something new.

      These papers are descriptive, not original research in the sense of aiming at a more accurate estimate of the warming trend. I think these papers are very useful, and they should not be controversial at all when their goals are properly understood.

      While the Trenberth-Fasullo papers do state their goal and how they reached their results, I think they are not nearly as clear as they could be. This is the problem with these papers. If they were clearer, much of the present confusion would have been avoided.

      • Pekka,
        Usually when measurement data is found to be wrong, researchers are able to explain why – what went wrong with the instrument and how they were able to determine the extent of the bias, etc. It does not appear Trenberth tried to do that at all. It appears he just said “Well, we know that’s not right. Let’s replace it with 0.9 because that fits with our theory.” I don’t buy it. That is not science.

      • Ron,
        It is known that the CERES analysis of the global energy fluxes is not accurate. According to its title, that is what a 2006 paper by Wielicki et al. is about. (It is a conference paper, and I have seen only the abstract and what Fasullo and Trenberth have picked from it.) Thus it is indeed known that the result given as 6.4 W/m^2 is inaccurate, with an error that may be as large as its deviation from zero.

        Whether one trusts Hansen or not, it is rather easy to estimate that the net flux cannot be much more than 1 W/m^2 without heating the earth much more rapidly than it has heated. Anything close to 6.4 is really out of the question. Thus this number is really only an indication of how close to reality the CERES analysis can be. The number 0.9 is picked as one value with some justification. Another choice would have been to force it to zero and say that we know this is not correct, but it lets us get the rest of the numbers reasonable. Whether it is set to 0.9 or to 0.0, it is in any case an input constraint, not a result. This is a central point that Trenberth (et al.) makes in all these papers, but not as clearly as he should. In particular the 2009 paper where he is the only author is far from clear enough.

      • Sorry for the bad sentences (starting with one formulation and ending with something different) and other linguistic errors.

      • Whether it is set to 0.9 or to 0.0, it is in any case an input constraint, not a result. This is a central point that Trenberth (et al.) makes in all these papers, but not as clearly as he should. In particular the 2009 paper where he is the only author is far from clear enough.

        Actually Trenberth, K. E., and J. T. Fasullo, 2010: Tracking Earth’s energy. Science, 328, 316-317 tells quite a different story.

        The human influence on climate, mostly by changing the composition of the atmosphere, must influence energy flows in the climate system (4). Increasing concentrations of carbon dioxide (CO2) (see the figure) and other greenhouse gases have led to a post-2000 imbalance at the top of the atmosphere of 0.9 +/-0.5 W m-2 (5); it is this imbalance that produces "global warming." p. 316, my emphasis.

        Reference (5) is to K. E. Trenberth, J. T. Fasullo, J. Kiehl, Bull. Am. Meteorol. Soc. 90, 311 (2009), which as we know tells quite a different story:

        Thus, the net TOA imbalance is reduced to an acceptable but imposed 0.9 W m−2 (about 0.5 PW)

        I think you are being far too generous to both authors.

  68. Jim Owen | January 22, 2011 at 12:24 pm |
    Yep – cognitive dissonance, JCH. Rewriting history is one of the symptoms.

    The irony is that the Westside highway did flood and has since been rebuilt!

    “This was probably one of the few elevated roads that could be blocked due to flooding during a rainstorm! ”
    http://www.nycroads.com/roads/west-side/

  69. Hunter said:

    “Yet how many posts here assume that AGW is basic physics and that the climate is deterministic.”

    ___
    Well, certainly if AGW is occurring it is basic physics and quite deterministic. Some might incorrectly assume that the climate, as a chaotic system, is not deterministic, but it is quite so; being chaotic, however, it is not fully predictable, and will have deterministic but unpredictable "tipping points" where it will suddenly shift to a new point of equilibrium.

    • R. Gates,
      Thank you for that, but perhaps I did not make my point as clearly as I wished.
      When I say the climate is not deterministic, I am using that term in reference to the atmosphere/climate system in the sense that it is not linear, all of the variables are not accounted for or even understood, the sample size of relatively high-quality data is very small, the margins of error are large, and the functions of the atmosphere/climate system are not well understood.
      Australia is a great example:
      Climate scientists proved disastrously unhelpful in actually preparing for the end of the cyclical drought.
      Pielke Jr. has highlighted a study that links earthquake losses to corruption in societies. Another way of looking at corruption that leaves buildings poorly built in earthquake zones is that the advice policy makers choose to follow or ignore is based on a faulty view of risk.
      Climate science pushes global climate disruption caused by CO2 as the main climate issue of today. This comes at the expense of studying natural history, of listening to civil engineers, of considering that perhaps they do not have the picture in full or very accurately.
      Instead there is the push of CO2.
      Not adaptation. Not reflection on the scale of the impacts of CO2. Not an admission that not one natural disaster has been linked to AGW. Just reduce CO2, no matter the price.
      That is not really better than letting building inspectors pass concrete buildings with little re-bar and weak foundations in Haiti.

      • Rebar? In Haiti? Wow, that would be an innovation. It certainly wasn’t used when I was there.

      • Jim,
        I did not want to say ‘no rebar’, and have someone post a photo of a broken Haitian building with some rebar and dismiss my comment on those grounds.
        Other earthquakes worth studying are the pre-WWII earthquakes in Japan, where building codes permitted poor quality construction.
        If I recall, one thing that made Frank Lloyd Wright famous was that the hotel he designed for Tokyo survived the earthquake due to his insistence on high quality concrete construction.
        I put New Orleans and Katrina in as an example of how corruption leads to bad policies which leads to disaster, by the way.

      • hunter –
        I’ll apologize for being late to the party here.

        Just wanted to say that consistent with many Third world countries, most construction in Haiti is either concrete block or stonemasonry. Except for the large percentage of people who live in stick and wattle huts. Rebar is certainly used, but not as a standard construction technique – only for “special” buildings. Which, in Haiti, didn’t include even government buildings.

        I certainly agree about New Orleans/Katrina.

  70. Harold Seelig, PE

    The biggest issue regarding CO2 as a greenhouse gas (and I'm sure no one is denying that CO2 is a greenhouse gas) seems to be figuring out just how much effect CO2, and all greenhouse gases together, are having. Whether we consider thin opaque layers or some other strategy, the real question is: what is the total carbon dioxide contribution, in temperature terms, and therefore the human contribution?
    We know the Earth's climate is "just right" for life, and we know much about the other planets' conditions and temperatures. I hope I'm not being a bit blunt in this highly technical conversation, but it seems we have a perfect example of "what if" very close to us: our Moon.
    If we answer the simple question of why the Earth is warmer than the Moon, it should help put a lot in perspective. I've not heard any discussion of the actual temperature benefit to Earth's climate from Earth's internal heat. Our crust is only 0.075% of Earth's radius, with molten rock below. In the two hottest places on earth, the crust is only half this value. So internal heat conducted through the crust is warming us some. There is a thermal flywheel of nitrogen and oxygen which, if present on the Moon, would hold down "daytime" temperatures, greatly reducing radiation losses (temp^4), and would increase the average temperature.
    The remainder is the insulating value of greenhouse gases. It's my belief that once we (some talented scientist) put numbers on internal heating and flywheel warming, the remainder will be greenhouse gases, of which there are many, but only a few of significance. At that point we can put real numbers on the actual effect, be it 10 degrees or 0.001 degree.

    • Harold –
      Some time ago I asked Andy Lacis about the internal heat engine that we live so close to (the Earth's core), as well as about the heat that accompanies all the CO2 production that's the central point of contention here. I got no answers. Nor do I expect any. Recently I suggested that the new Indian paper re: cosmic-ray effects might be worthy of consideration (especially considering the recent success of the CLOUD experiment) – and was dismissed out of hand. Expecting answers to "inconvenient questions" is not something I do much – at least not wrt GW/CC. It's why I long ago coined the phrase "The Church of AGW". :-)

      I wonder if anyone really believes those things are factors considered by either the models or the climatologists? I wonder if anyone else believes they should be?

  71. Harold Seelig, PE

    Jim,
    Thanks for the polite response. I know it’s very easy for a group of enthusiasts to lead themselves off on a tangent, and lose perspective of “the big picture”.
    A burning question of mine is that, since the identification of the "Ozone Hole" was coincidental with the development of measurement techniques, perhaps the Ozone Hole has come and gone over the eons. NASA animations of the Ozone Hole show a ring of much higher than normal concentration (450-500 Dobson units) around the hole, which makes me think of "seasonal displacement" rather than "seasonal destruction". But now that this is "settled science", perhaps "peer review" no longer applies? In years past, people blamed themselves for various natural disasters ("the gods are angry"), and the rational response at the time was sacrifice, as it is today.

    • Harold –
      A burning question of mine is that, since the identification of the "Ozone Hole" was coincidental with the development of measurement techniques, perhaps the Ozone Hole has come and gone over the eons.

      I have had the same question cross my mind more than once. But the truth is that nobody knows. Actual ozone data measurement started in the mid-50s and showed no thinning of the ozone over the Antarctic. I won't pursue the question of observation accuracy or technique – nor of instrument accuracy or reliability back then. The Hole was "discovered" in the early 80s, first by ground-based measurement of the ozone column, then confirmed by examination of the Nimbus 4 and Nimbus 7 satellite data. Since then it has waxed and waned with little regard for theory or for the intent or results of the Montreal Protocol.

      And recently there was something that crossed my path that indicated some question about the actual role of CFC’s wrt the Ozone Hole. But I didn’t save it and don’t know how reliable it was.

      But don't despair. At various times an Earth-centric Universe, phlogiston, and Newtonian mechanics were also "settled science". None of them survived the "cut". And today's "settled science" will become tomorrow's historical oddity. We just may not be here to see it happen. After all, it took 2500 years for Democritus' atomos to be recognized and transformed into the Bohr atom. Which, in turn, has been replaced several times in less than 100 years. :-)

  72. Jim Owen,

    Keeping track of the energy is key to understanding what is happening with the climate system. There are indeed a half-dozen or so different sources of energy that contribute to the energy balance of Earth but are not being included in climate model simulations of terrestrial energy balance. These include the internal geothermal energy (volcanoes, geysers, earthquakes, etc), tidal friction, meteoric accretion, nuclear energy production, and the heat released by the burning of fossil fuel.

    All of these are significant amounts of energy (from the human perspective). For example, there is an estimated 1200 tons/day of meteoric dust impinging on the Earth at velocities of roughly 40 km/sec. This amounts to 10^15 J/day (the equivalent of 250 kilotons of TNT per day). But since the surface area of the Earth is 5×10^14 m2, the total meteoric energy input amounts to only 25×10^-6 W/m2, compared to the global mean 240 W/m2 of absorbed solar energy. Similarly, the heat energy generated by tidal friction is about 0.0075 W/m2. The internal geothermal energy is the largest of these minor contributors, with the global mean energy amounting to about 0.05 W/m2 (Allen, Astrophysical Quantities).

    The world production of coal is about 7 billion tons/year. With an energy equivalent of 10^7 J/kg, this amounts to about 4×10^-3 W/m2. Roughly similar amounts of energy are contributed by the burning of oil and gas. Nuclear energy amounts to about 10^-3 W/m2. Thus the direct heat input to the climate system due to the burning of fossil fuels is quite negligible. But not the greenhouse effect due to the CO2 that is being added to the atmosphere.

    While we are on the topic of 7 billion tons of coal/year, it is instructive to note that a cubic meter of water weighs a ton, and that the specific gravity of coal is about 1.2. This means that each year humans are digging nearly 6 cubic km of coal out of the ground to be burned. (Oil and gas production results in a similar amount of carbon, but because of the hydrogen content, oil and gas produce roughly twice the energy per kg of carbon.)

    For additional perspective, the atmosphere contains about 390 ppm of CO2 by volume. The weight of the atmosphere is about 1 kg/cm2. Thus the total mass of the atmosphere is about 5×10^15 tons, so that 1 ppm of CO2 (44/29×10^-6) comes to 7.6 billion tons. With a specific gravity of 1.5, the effort to extract 1 ppm of CO2 from the atmosphere (as part of some geoengineering effort) would require the accumulation and sequestering of about 5 cubic km worth of dry ice equivalent of CO2 – a not particularly attractive prospect.
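    These back-of-the-envelope numbers are easy to verify; a short Python sketch using the rounded inputs quoted above:

    ```python
    SECONDS_PER_DAY = 8.64e4
    SECONDS_PER_YEAR = 3.156e7
    EARTH_AREA = 5.1e14                                  # m^2

    # Meteoric dust: ~1200 tons/day arriving at ~40 km/s
    ke_per_day = 0.5 * 1.2e6 * (4.0e4) ** 2              # ~1e15 J/day
    print(ke_per_day / SECONDS_PER_DAY / EARTH_AREA)     # ~2e-5 W/m^2

    # Coal: ~7e9 tons/yr at ~1e7 J/kg
    coal_per_year = 7.0e12 * 1.0e7                       # J/yr
    print(coal_per_year / SECONDS_PER_YEAR / EARTH_AREA) # ~4e-3 W/m^2

    # Mass of 1 ppmv of atmospheric CO2 (molecular-weight ratio 44/29)
    ppm_mass_tons = 5.0e15 * (44.0 / 29.0) * 1.0e-6
    print(ppm_mass_tons)                                 # ~7.6e9 tons
    print(ppm_mass_tons / 1.5 / 1.0e9)                   # ~5 km^3 at dry-ice density
    ```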

    The above puts it all in perspective. Globally, compared to solar energy, all of the other energy sources are negligibly small, i.e., much smaller than the uncertainty in the solar energy absorbed by Earth. Locally, there are places where tidal and geothermal energy are sufficiently concentrated to be viable replacements for fossil-fuel energy sources. Likewise, solar, wind, and nuclear energy sources are available, but are not being fully utilized.

    All this points to coal as the most problematic greenhouse contributor of atmospheric CO2, and the least efficient in energy production per kg of carbon burned – and thus the obvious candidate to be phased out as quickly as practicable, if we are to be serious about averting the looming dangers of global warming.

    • This picture, scanned from the 1979 book "Renewable Energy" by Sorensen, gives a nice overall view of the energy flows of the earth. Numbers are given in TW. In comparison, the present energy use of human societies is about 15 TW.

      The data is old and some of it is outdated, but in general they should give the correct picture.

    • I will say “Thank you” for the answer. It’s the best answer I’ve gotten in 10 years and I appreciate that. I see some problems with it, but I won’t argue those points at the moment. I’m sure that will come later. :-)

    • A Lacis | January 23, 2011 at 3:34 am | Reply

      if we are to be serious in averting the looming dangers [of] global warming.

      Which begs the question, of course. Are there dangers? Are they looming? Is global warming occurring as a result of CO2 changes?

      Your answer to all the above is clearly “yes!”. But none are proven, and there is much evidence against each and all of them.

  73. Harold Seelig

    A Lacis, Pekka Pirilä,
    I appreciate the knowledgeable response and figures. Without having calculated the conductive transmission of internal heat, I thought it was much bigger. I went back and calculated the value using the conductance of limestone, and the simple math gives exactly 0.05 W/m2. At Death Valley, this rises to only 0.1 W/m2.
    How about the flywheel effect of our atmosphere, if it were on the Moon, etc. (my previous post)? The only way I can imagine to consider this is to consider only non-greenhouse gas: 21% O2, 79% N2 at our pressure.
    Surely the surface of the Moon is very highly insulating, and heats up very quickly. An atmosphere with convective cooling of the surface would drastically reduce daytime re-radiation, and the warm atmosphere would greatly warm the night-time surface. Thoughts?
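    (The conductive estimate above is just Fourier's law, q = k × dT/dz. A minimal Python sketch with assumed textbook-style values, not measurements:)

    ```python
    # Conductive geothermal flux through the crust via Fourier's law, q = k * dT/dz.
    # Conductivity and gradient are assumed round numbers for illustration.
    k_rock = 1.5      # thermal conductivity of limestone-like rock, W/(m K)
    gradient = 0.03   # typical geothermal gradient, K/m (~30 K per km)

    q = k_rock * gradient
    print(f"conductive heat flux ~ {q:.3f} W/m^2")   # ~0.045, same ballpark as 0.05
    ```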

  74. Harold Seelig

    I was aware of the work on measuring ozone in the very early days, and heard about the CO2 concerns (which my Dad was skeptical about) in the 70's and 80's. My father (RIP) is Walter R. Seelig. He was responsible, as Project Manager, for mapping the Antarctic; he was on the first flight (two military transports) which flew over the South Pole, and has two mountains named for him, Mt. Seelig and Seelig Peak, both in the Antarctic. So, while I'm proud of him, the point here is that he was inside the NSF, went to the Antarctic many times, and was Scientific Liaison with the captains of the Eltanin and other scientific cruises; so I lived through countless slide shows, heard all about life at McMurdo, saw artifacts from Scott's Hut and photos of penguins, etc., etc., and I heard a lot about the programs going on, particularly in the Antarctic. I studied "OPERATIONS HANDBOOK – OZONE OBSERVATIONS WITH A DOBSON SPECTROPHOTOMETER, by W. D. Komhyr, prepared for the World Meteorological Organization Global Ozone Research and Monitoring Project, June 1980", and at that time making a measurement and expecting it to be accurate to even a few parts per 10 million was not usual. So while most believe that CFCs, and not solar blasts (the N&A lights), caused the Ozone Hole, the results of the Montreal Protocol were not overly damaging to the economy, and whether the science is right or not, the downside is not too bad. Sulfur and acid rain? Absolutely. Tetraethyl lead? Asbestos? PCBs? VCMs? Thalidomide? Nicotine? Did whales sunburn in the past? Did people? We've come a long way. Recently, the jury came back in regarding DDT and eagles.
    The cost of a mistake in energy is extreme.

    • Harold –
      If you can find a copy of Aaron Wildavsky's book "But Is It True?" you might find it interesting. It's a research report wrt the truth or falsehood of many environmental issues.

      • Jim,
        I have not read Aaron Wildavsky's book, and just checked what Wikipedia says about him. Judging from that description, he does not like the precautionary principle, or at least the way it is used in practice. To state my own prejudice: I do like the principle, but I do not like the way it is used. It is possible that Wildavsky would agree, but it is also possible that he does not like even the principle itself.

        Why do I write this comment? To say that I consider these issues the most central and essential problem in the whole question of what we should do about climate change. They are in my mind so difficult that wise decisions cannot be reached in an informed way without a deep discussion of these issues, accepting opposing points of view and spending a lot of effort in trying to reach wise conclusions.

        Presently few people are willing to go deeper into these issues. They rather make up their minds based on their general political attitudes, usually following the decisions other people with similar general attitudes have made before. They are for rapid action if they believe more generally that free, uncontrolled development based on market forces is creating more and more problems. They are against such action if they have in other connections learned to trust that free markets with minimal regulation are best for us. Once they have made up their minds, they try to use the uncertainties in their favor: either they say that the precautionary principle is easy to apply and we should act promptly, or they say that we do not know whether this is serious at all and we should postpone all action.

        The issue is not that simple, and it may indeed be very crucial. While I am not at all certain of the outcome, I am convinced that anthropogenic influences are presently so strong that dangerous consequences cannot simply be ruled out as being at a level beyond human influence.

        This means that I do not want to rule out the precautionary principle. If this is accepted, we must proceed to think in more detail. Indiscriminate application of the precautionary principle often leads to stupid decisions, which are unlikely to help the issue considered and are likely to cause damage elsewhere. This appears to be a point made by Aaron Wildavsky. The short references to his thoughts presented in the Wikipedia article include some very good points. But my own conclusion is still that this is not a sufficient basis for ruling out the precautionary principle. It only tells us that we must be very discriminating in applying it, and that we should spend much more thought on finding out where this leads.

      • Here’s the most relevant application of the PP:

        “If one rejects laissez faire on account of man’s fallibility and moral weakness, one must for the same reason also reject every kind of government action.” Ludwig von Mises – Austrian Economist 1881 – 1973

        In the current context, “fallibility” applies to making errors about the impact of CO2, in particular human output thereof, and about the consequences of major arbitrary changes thereto. ‘Man’s fallibility’ when given access to unrestrained government power is massively documented by history, both recent and ancient.

        And the consequences of choking off CO2 production in any meaningful way are not in doubt; they are brutally negative for the vast majority of the world’s population. It is notable that this is a “feature, not a bug”, for many of the strongest proponents of CO2 controls and cutbacks.

      So the PP, in any sane view, militates against empowering those who would willingly, even eagerly, cull the planet with regulatory and economic suppression of emissions.

      • Pekka –
        I’m personally antagonistic to the PP because I see it as fear-based, and also because I’ve seen it used indiscriminately to “prevent” actions or occurrences that are either extremely unlikely or, conversely, are normal hazards of living as a human being.

        As Brian H pointed out, the application of the PP to “climate policy” would have disastrous effects on much of the world’s population. China and India seem to understand this. Witness the attempts by both to upgrade their technology and infrastructure to provide better survivability for their people. China, for example, is building massive power generation capability, including wind, solar, nuclear and coal. And India is following suit. There’s a reason why Copenhagen and Cancun failed – the probability of success was somewhat less than that for the survival of a snowball in Hell. This, of course, is 20/20 hindsight on my part.

        Anyway, back to the lack of necessity for the PP in this context: if you can find a copy of Matthew Kahn’s 2010 book “Climatopolis” you might find it interesting. He’s an economics professor, a warmist and a believer that the human race will survive “climate change” very well.

      • Jim,
        One of the common problems with the PP is exactly that its proponents typically select one risk, claim that the PP should be applied specifically to that, and neglect its application to the actions proposed. This is one thing that I had in mind when I wrote that I do not like the ways the PP has been used, and that it should be used much more discriminatingly.

        On the other hand, the PP is essentially synonymous with risk aversion, which is generally accepted as a correct guideline in many fields, most concretely in investing. Few people argue that risk aversion is not the correct way of considering risks and risk management. The problem is that the PP is often taken as an argument by people who cannot, or are not willing to, consider risks quantitatively. It is used to justify almost anything to mitigate the declared risk, even taking larger risks of another kind.

  75. Tomas Milanovic

    A.Lacis wrote

    Thus, each layer of the atmosphere will be characterized by its transmission, exp(-TAUv,n); its absorptivity, [ 1 – exp(-TAUv,n)]; and its emissivity, B(v,Tn) [ 1 – exp(-TAUv,n)]. Radiative transfer starts with Planck radiation B(v,Tg) being emitted by the ground. The outgoing flux at the top of the first layer will then be the sum: F1top = B(v,Tg) exp(-TAUv,1) + B(v,T1) [ 1 – exp(-TAUv,1)]. The second layer is then added on to obtain F2top = F1top exp(-TAUv,2) + B(v,T2) [ 1 – exp(-TAUv,2)], and so on to the top of the atmosphere.

    This model also answers the question discussed above of how absorption compares to emission in any given layer n.
    They are exactly equal.
    The equation above says that exiting power = transmitted power + emitted power.
    But as absorbed power = entering power – transmitted power, and the equilibrium condition says exiting power = entering power, it follows that
    emitted power = absorbed power for every n, as long as the layer is considered in equilibrium (constant T).
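
    A minimal numerical sketch (Python) of the quoted recursion at a single wavenumber may make this concrete; the layer temperatures and optical depths below are made-up illustrative values, not output of any real model:

      import math

      def planck(nu_cm, T):
          # Planck spectral radiance B(v,T); nu in 1/cm, result in W/(m2 sr cm-1)
          h, c, k = 6.626e-34, 2.998e8, 1.381e-23
          nu = nu_cm * 100.0                    # convert to 1/m
          return 2*h*c**2*nu**3 / (math.exp(h*c*nu/(k*T)) - 1) * 100.0

      nu = 667.0                                # 1/cm, near the CO2 band center
      Tg = 288.0                                # ground temperature, K
      layers = [(280.0, 0.5), (260.0, 0.5), (240.0, 0.5)]   # (Tn, TAUn), bottom up

      F = planck(nu, Tg)                        # B(v,Tg) leaving the ground
      for Tn, tau in layers:
          t = math.exp(-tau)                    # layer transmission
          F = F*t + planck(nu, Tn)*(1.0 - t)    # transmitted plus layer emission

      print("radiance at top: %.4f W/(m2 sr cm-1)" % F)

    If all layers are set to Tg, the loop returns exactly planck(nu, Tg): each layer then emits just what it absorbs, which is Tomas’s point.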

    • Tomas,
      A. Lacis describes what happens to each wavelength separately. The energy need not be conserved for each wavelength but only when all wavelengths and also convection, latent heat and conduction are taken into account.

      For these reasons the radiative fluxes alone indeed do not conserve energy precisely.

      • Tomas Milanovic

        Pekka

        The energy need not be conserved for each wavelength but only when all wavelengths and also convection, latent heat and conduction are taken into account.

        This is correct, and I didn’t say anything different.
        I didn’t even use the conservation of energy, just the fact that there was equilibrium (LTE and constant T), which implies that the distribution of the quantum states of CO2 is constant. From there it follows that entering power (for v) = exiting power (for v).
        As the time scales for collisional processes are 7 orders of magnitude smaller than those for convection and conduction, from the point of view of radiative equilibrium, convection and conduction can be neglected for all but the most violent processes.
        This is btw one of the reasons why the radiative processes are not really interesting for me: the true dynamics of the system happen at much, much bigger time scales, where indeed convection, conduction and latent heat play the fundamental role.

      • Tomas,
        I do not really understand what the point of your messages is. One possible problem is in the assumption of constant T. You may use it in a way that is not correct. If the temperatures of successive layers differ, one must be very careful in using the assumption of constant T, even for a single thin layer.

  76. Harold Seelig

    It’s my experience that people with conservative tendencies tend to be cautious regarding real threats, and do indeed take a broader view tempered by rational thought. Caution is a valuable strategy for survival. My perspective regarding controlling CO2 to mitigate a considered threat is that the social commentary at present is a highly emotionally promoted strategy… the “forget rational thought, it’s an emergency” tactic… What about polar bears, etc.?
    Due to a ban on hunting polar bears, the population has skyrocketed over the past ~50 years. There are no reports I’ve seen showing any change in this trend. Of course we can always find dead examples of any species which died of starvation. But I drift from my point.
    I admire the rational, data-based discussions here. Wildavsky’s book is on Amazon, but it’s a bit steep.
    I’ve heard numerous claims that Venus’ unusually high temperature is from a runaway greenhouse, but absolute silence regarding Mars. Isn’t Mars’ temperature ‘about right’ for its distance from the Sun? In a general sense, Mars’ level of CO2 is equivalent (molecules/m2) to 55,000 ppm of CO2 at Earth’s temperature and pressure. Yet with a low overall atmospheric pressure (~5% of Earth’s), the “thermal flywheel” on Mars is lower than Earth’s, and the albedo of Mars is 25% lower than Earth’s. Isn’t it a fair comparison to use Mars’ relatively greater level of CO2 to rationally lead one to believe that perhaps CO2 is a minor player here on Earth? Doesn’t this show Earth’s warming is something else?
    James Barrante, in “Global Warming for Dim Wits” – an unfortunate title in my opinion – fairly well proves that the patterns of global temperature and CO2 put cause and effect in the proper perspective.
    The geologic record does show higher temperatures causing higher levels of CO2: temperature goes up, then, after a delay, CO2 goes up; temperature goes down, then CO2 goes down. Has Dr. Barrante missed some big point?

  77. Harold,

    I don’t spend much time worrying about the health and well being of polar bears. (The best of luck to polar bears.) I have read where changes in sea ice have made life more difficult for some species of penguins, better for others. A noteworthy observation and point well taken, but not something to cause me to write my Congressman about (yet) , as this is not really part of my direct responsibility or research to worry about.

    On the other hand, comparing the greenhouse effects on Mars and Venus is quite relevant to the study of climate of Earth and the mechanics of the terrestrial greenhouse effect. Since the greenhouse effect is primarily a radiative effect, the same radiative modeling must be applicable to Mars, Earth, and Venus, and be able to explain their different temperatures.

    The relevant Mars parameters (from Wikipedia) are: Bond albedo = 0.25, and mean Sun–Mars distance = 1.523 AU. This means that the solar radiation absorbed by Mars is (1 – 0.25) x 1367/4 W/m2 / (1.523)^2 = 110.5 W/m2, which corresponds to 210.1 K as the effective equilibrium black body radiating temperature, i.e., 110.5 W/m2 = 5.67×10^-8 x (210.1 K)^4.

    The other relevant data are: Mars atmosphere is 95% CO2, mean surface pressure = 0.63 kPa (0.62% Earth’s), and 0.376 Earth’s gravity. This gives Mars’ CO2 per unit area as 0.0062 x 0.95 /0.376 = 0.0157, or about 40 times greater than the 0.000390 value of Earth.
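
    Both numbers are easy to verify; here is a quick check (Python), using only the constants quoted above:

      sigma = 5.67e-8                       # Stefan-Boltzmann constant, W/m2/K^4
      S0, albedo, d = 1367.0, 0.25, 1.523   # solar constant, Mars Bond albedo, AU

      absorbed = (1 - albedo) * S0 / 4 / d**2
      T_eff = (absorbed / sigma) ** 0.25
      print("absorbed = %.1f W/m2, T_eff = %.1f K" % (absorbed, T_eff))  # 110.5, 210.1

      mars_col = 0.0062 * 0.95 / 0.376   # pressure fraction x CO2 fraction / gravity fraction
      print("Mars CO2 column %.4f, ratio to Earth: %.0f" % (mars_col, mars_col / 0.000390))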

    Yet, the greenhouse effect that we calculate for Mars is only about 5 K, compared to 33 K for Earth (and about 500 K for Venus). The reason for the big difference is the low pressure on Mars (equivalent to 35 km altitude on Earth). The pressure-broadening line width of CO2 absorption lines is directly proportional to the atmospheric pressure. The CO2 absorption line strength (spectral area) is basically the same on Mars as it is on Earth. At the low Martian air pressure, the spectral absorption is piled up to be more than 100 times stronger at the absorption line centers on Mars compared to Earth (and proportionately weaker in the line wing regions), making the spectrally integrated absorbing ability of CO2 much less efficient on Mars compared to Earth.

    On Venus, the atmospheric pressure is about 100 times greater than on Earth. This has the effect of spreading the spectral absorption much more evenly across the spectrum, making CO2 absorption (and the greenhouse effect) that much more effective on Venus than on Earth.

    There is one additional fact about the greenhouse effect on Earth that is different from Mars and Venus. Earth has a strong water vapor (and cloud) feedback effect that acts to magnify the CO2 greenhouse effect. Thus, of the total 33 K terrestrial greenhouse effect, CO2 accounts for only 20% of the effect, with water vapor accounting for 50%, and clouds accounting for 25% of the terrestrial greenhouse effect (the other 5% comes from methane, nitrous oxide, ozone, and chlorofluorocarbon gases).

    If it weren’t for CO2, CH4, N2O, O3, and CFCs (the non-condensing greenhouse gases), which provide the necessary support temperature for water vapor and clouds to remain in the atmosphere, the terrestrial greenhouse effect would collapse, and the Earth would plunge into an icebound state. As feedback effects, water vapor and clouds provide a strong magnification of the CO2 greenhouse warming. That is why it is important to control the amount of CO2 in the atmosphere because CO2 is the principal controlling factor of the terrestrial greenhouse effect.

    There is an additional factor to note about the Martian surface temperature. It exhibits large diurnal and seasonal amplitude, ranging from as low as -90 C to as high as 30C. This is because the Martian atmosphere is so thin, and because the Martian soil has little heat capacity. The ocean on Earth has a very large heat capacity, moderating the diurnal and seasonal temperature range, and requiring decades to centuries for the full effects of global warming from increasing CO2 to materialize.

    • I mostly agree with what you’re saying on Mars. With 40 times the CO2, one sees the difference when the total pressure isn’t broadening the linewidths: that leads to less blocking. Even on Earth, most of those strong lines have extremely short optical paths, so tall narrow lines don’t do much. Another factor is that the altitude difference between lowlands and highlands appears to be enough to affect surface pressure. Also, there are variations in albedo that are significant. I recall seeing a NASA albedo map that shows this in detail, and using a single number for albedo may not yield that great an answer. I also seem to recall some differences in reported mean T that could attribute as much as almost 10 deg C to the gray body.

      I think that CO2 on Earth is just under 30 W/m^2 while the total is near 110 W/m^2 for GHGs (my model), but the whole blocking including cloud cover is around 155 W/m^2.

      I do not agree that there is a strong H2O/cloud feedback effect. In fact, I’m pretty sure the cloud feedback is quite negative. The H2O vapor positive feedback is very limited.

      • cba:
        As you say, of the roughly 150 W/m2 of the terrestrial greenhouse effect, CO2 accounts for about 30 W/m2. Water vapor accounts for about 75 W/m2, clouds for 37.5 W/m2, and the minor GHGs for the remaining 7.5 W/m2. These numbers are for the current total atmosphere (the impact of cloud albedo on SW radiation produces the negative part of cloud feedback).

        For small climate perturbations relative to current climate (such as doubled CO2), the different feedbacks are not in linear proportion to the fractional attribution listed above. In particular, the cloud response appears to be much more strongly saturated than the water vapor response. I think the net cloud feedback is still positive, but only marginally so.

      • A. Lacis,

        On average, each m^2 of total cloud cover will block somewhere around 40 W/m^2. My own tinkering yielded slightly lower numbers than you present, but I’m limited to 65 um on the long end and there’s just a bit more below that. Total GHG for mine was around 110 W/m^2, with CO2 at almost 29 and H2O at almost 70. I seem to recall a Hansen paper claiming 120 total for GHGs. I’m not sure if it included the 7.5 from the minor gases; mine doesn’t.

        For overall blocking, I like the simple averages approach. Assuming 288.2 K (1976 US Standard Atmosphere surface mean) and 1.0 emissivity for deep IR, one calculates 391.16 W/m^2 via Stefan’s law. Taking surface-averaged TSI at TOA corrected for albedo reflection, we get 238 W/m^2 being absorbed for an albedo of 0.3 and TSI (new TIM value) of 1360.8/4 W/m^2. For practical purposes, we can say 391 is emitted and all but 238 W/m^2 is being blocked for the real Earth average, which amounts to 153 W/m^2 (or, in essence, 154 W/m^2 for the earlier TSI value). This is for an earlier point in time when an average balance is assumed.
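
        That bookkeeping is short enough to write down as a sketch (Python), using nothing beyond the numbers already quoted:

          sigma = 5.67e-8                    # Stefan-Boltzmann constant
          T_surf = 288.2                     # 1976 US std atm surface mean, K
          TSI, albedo = 1360.8, 0.30         # TIM TSI value; Bond albedo

          emitted = sigma * T_surf**4        # ~391 W/m^2, emissivity 1.0
          absorbed = (1 - albedo) * TSI / 4  # ~238 W/m^2
          print("emitted %.1f, absorbed %.1f, blocked %.1f W/m^2"
                % (emitted, absorbed, emitted - absorbed))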

        A breakdown similar to Trenberth’s KT97 reveals a serious error in their apportioning of the albedo fractions. Using reasonable values for clouds, oceans and land reproduces their original breakdown of 0.08 for land and 0.22 for cloud/atmosphere contributions. They used a 62% total cloud cover for their model there, but as I recall they didn’t take that into account when figuring the final contributions of either. 0.08 indicates 27 W/m^2 of surface albedo where there isn’t cloud cover. However, with clear skies being 38%, we get a surface contribution of only 10 W/m^2 out of 102 W/m^2, using 0.30 albedo and TIM’s 340.2 W/m^2 TSI. That leaves 92 W/m^2 for clouds/atmosphere, and I believe the valid value for the atmosphere is around 10 W/m^2 or a bit less. That leaves 82 W/m^2 of albedo for clouds.

        Using 37.5 W/m^2 for blocking and an albedo contribution of 82 W/m^2, we have a net effect of 82 – 38 = 44 W/m^2 of cooling. How do you get a positive feedback contribution out of that? With that much net effect, if there were any positive feedback present whatsoever in the cloud cover fraction, this would be unstable and it would be driven as close to the clear-sky rail as it could possibly go and stay there. We’d have a new balance point to achieve, one with an albedo of 0.08 and a required mean T increase of around 7 deg C.

    • Nice, even though Mars has much more CO2 it has little “greenhouse” effect. Silly scientists thinking CO2 was an issue.

      On Venus it is different again. The window is open: CO2 is the only thing substantially blocking, and even with pressure broadening there is darned little of that window blocked!! So, what does all the RTE really do for us on Venus when there is so little else to capture the IR from the ground, which, by the way, due to its much higher temp, is NOT near the 15 um primary CO2 band?? Oh, and there is very little in the way of other GHGs to give that feedback effect. It is what it is.

      The fun thing about Venus is that at 480 C the black body peak is at about 4 um instead of 15, giving about 61 W/m2 for the 15 um and 990 W/m2 at 4 um. Since CO2 isn’t as efficient at 4 um, how does that actually affect the calculations?

      http://www.spectralcalc.com/blackbody_calculator/blackbody.php
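
      For what it’s worth, a small Planck-function sketch (Python) reproduces those two numbers for a 480 C surface (per-micron spectral radiance):

        import math

        def planck_lam(lam_um, T):
            # Planck spectral radiance, W/(m2 sr um)
            h, c, k = 6.626e-34, 2.998e8, 1.381e-23
            lam = lam_um * 1e-6
            return 2*h*c**2 / lam**5 / (math.exp(h*c/(lam*k*T)) - 1) * 1e-6

        T = 480 + 273.15
        print("Wien peak: %.1f um" % (2898.0 / T))               # ~3.8 um
        print("B(15 um) = %.0f, B(4 um) = %.0f W/(m2 sr um)"
              % (planck_lam(15.0, T), planck_lam(4.0, T)))       # ~61 and ~990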

      There are no other particles to speak of for the CO2 to thermalize. When there is a collision it is with another CO2 particle. Wonder what happens to the chances of emission??

      Here is an interesting calculation of the probable temp of Venus based simply on the dry adiabatic lapse rate and the atmospheric mass.

      http://motls.blogspot.com/2010/05/hyperventilating-on-venus.html

      The other fun stuff is that there isn’t even the beginning of enough SW making it to the ground to heat it. So, exactly how does CO2 in 3 tiny bands heat the surface of Venus to the point it is outputting close to 6000 W/m2???

      • kuhnkat,

        you forgot the clouds. 100% coverage, optically thick.
        An interesting calculation might be in order. One must remember that gases like CO2 will radiate their spectrum at a characteristic temperature (their own temperature); it’s just that if their temperature is less than that of a background – the surface – then it will turn out to be a net absorption rather than a net emission. The calculation is to find just how much radiated power is coming out of the high-pressure CO2 “ocean” versus how much solar power is being absorbed by the high-pressure CO2 ocean and ground surface.

      • Well, no, I didn’t FORGET the clouds. The clouds are what block or reflect most of the SW AND LW, PREVENTING it from heating the surface or even the lower atmosphere. Since the reflection is about 75%, what penetrates the lower atmosphere is less than what reaches the surface of the Earth, and the energy available to heat the atmosphere is less.

        You aren’t going to suggest that there is large-scale downward convection of heat from areas colder than the surface??? So about 40 kilometers above the surface little heat is going down and little is going up. Now this is sounding like the fabled blanket we are told about for CO2 on Earth, but it isn’t caused primarily by CO2.

        Here is some more from Lubos Motl:

        http://motls.blogspot.com/2010/05/venus-chris-colose-vs-steve-goddard.html

        You may be interested in the emissivity of CO2 at the pressures and temperatures seen in Venus’ lower atmosphere.

        ” the calculation is to find just how much radiated power is coming out of the high pressure co2 “ocean” versus how much solar power is being absorbed by the high pressure co2 ocean and ground surface.”

        There is very little solar power making it past the clouds you so kindly REMINDED me of.

        What do you think of that Venusian adiabatic lapse rate under those clouds?? Looks to be about 10c per kilometer to me. How does all that CO2 and other stuff manage to lose heat so fast if it is such a great insulator/blanket/slower of cooling???

        Let me make one final suggestion. Try to ignore how Venus managed to get into the condition it is in. Anything based on hypotheses of what created the conditions is probably not germane to the current conditions. Try to concentrate on the actual observations of what is currently happening. Once that is understood we MAY be able to say something about how it got to this situation.

      • kuhnkat,

        I don’t like Venus as an example. It’s got an “ocean” of CO2 with pressures out the gazoo. It’s at about 3/4 of an AU, and I doubt that the albedo is really only 0.75. It also has a day that lasts closer to a year than an Earth day, which would be disastrous were it not for the cloud cover.
        I doubt there’s much convection going on. I also suspect that the amount the CO2 radiates is in excess of the incoming solar. That sorta makes the whole mess a cloud function.

      • I agree that Venus is a poor example of anything but Venus and basic physics.

    • So clouds are now positive feedback?
      And you have shown that where?

  78. Harold: IMHO, the simple 3-page article I linked debunks about everything Dr. Lacis said.

    • Unfortunately precious little of that document is correct!
      Here’s a comparison of a portion of the CO2 spectrum on both Mars and Earth at surface conditions which bears out Dr Lacis’s post:

      • Er, Phil: you gotta provide more than a couple of spectra to prove your point. WTF?

        The PROBLEM with the “alarmist theory du jour,” a la Dr. Lacis here, is that, although the little radiation cartoons and equations look “scientific,” there is ABSOLUTELY NO EMPIRICAL EVIDENCE THAT THEY ARE CORRECT (think about that if you are a scientist, please, since even Einstein didn’t get a pass until a certain eclipse occurred! No evidence, no science; science is that simple). The paper that I linked to actually has some empirical evidence (from NASA, no less – see the references). If YOU can show that the article I linked is flawed, then I might bow down to you and submit to your arm-waving and admit that I’m wrong. Otherwise, you wasted some bandwidth, as far as I can see. And… this goes for the good Dr. Lacis – it is up to him and his friends to produce some ACTUAL DATA to show that all the little radiation cartoons that he and all his ilk keep producing as “facts” have any real basis in FACT. Post haste! The world of science is waiting for some real science in “climate science!” And, since temperatures are cooling while CO2 levels are increasing exponentially, I think the alarmists have a real problem with the public. LOL. If the Republicans in the House have any sense, they will severely cut the budget for climate science research.

      • Here’s the first para. of your link.
        Climate science’s method of deriving a surface temperature from incoming radiant energy (whose intensity is measured in watts per square meter) is based on the Stefan-Boltzmann formula [1], which in turn refers to a theoretical surface known as a blackbody – something that absorbs and emits all of the radiance it’s exposed to. Since by definition a blackbody cannot emit less than 100% of what it absorbs, this fictional entity has no option of drawing heat into itself, for that would compromise its temperature response and thus its thermal emission. Its 100% thermal emission effectively means that a blackbody is a two dimensional surface with no depth.

        Several errors: ‘climate science’ regards the moon as a grey body in the solar irradiance range of wavelengths, with an albedo of ~0.15. In the IR region where it emits, it is a blackbody like the earth. They also claim that the moon is warmer than predicted; in fact the prediction yields a rather good match to the data, revealing a Lambertian profile.

        Time to bow down.

        By the way when ‘ACTUAL DATA’ was provided your response was an obscenity!

      • Phil,

        You are saying that Climate Science admits that some of the 33C is NOT due to CO2 and other Greenhouse gasses??

        Halleluyah!!

      • No I’m not saying any such thing!

  79. Things an IR Camera CAN’T Do
    1. It Can’t See Plumes of Hot Air.
    It would be great if we had a camera that could show heat flowing from place to place. Herbie’s manager commanded me to bring over the IR camera so he could see the hot air coming out of the vent of Herbie’s new shelf. I had to explain to him that air is about the only thing that is transparent to infrared, so the camera can’t show you hot air. The explanation wasn’t good enough. He had seen Robocop, too. So I did a quick demo for him using the monitor of his desktop computer. The image showed the hot surface, and some hot components inside the cooling vents, but no plume of hot air coming out. He could see his warm hand in the image hovering over the monitor grill, but no hot air. He could even feel the hot air with his hand, but couldn’t see it on the image. That was enough to finally convince him. He seemed more disappointed than he should have been. It turned out what he really wanted was to borrow the camera to look for air leaks around the window frames of his old Victorian house.
    If you think about it, if air weren’t transparent to infrared, the camera wouldn’t be of much use. All you would see in the image would be the layer of air right in front of the camera lens.
    More Hot Air, by Tony Kordyban, page 167

  80. Ken: Interesting observation. I think that typical IR cameras are specifically designed to measure IR radiation in the (transparent) 8 – 12 micron window region so they will be able to see as far as possible within the atmosphere and be able to detect any temperature contrasts of solid surfaces which emit radiation at all wavelengths of the IR spectrum.

    As you say, if the atmosphere were opaque, i.e., if the camera was measuring IR radiation at 15 microns (the highly opaque part in the middle of the CO2 band), you would only be able to see inches beyond the camera lens. They do have false color IR cameras that have broad band responses in different parts of the spectrum, and are thus able to differentiate differing amounts of water substance in their field of view.

    However, it should be possible to detect hot air plumes in practical situations with an IR camera by utilizing appropriate CO2 spectral regions where CO2 is only partially opaque. That is, after all, how they do temperature sounding of atmospheric temperature profile from satellite orbit by measuring the IR brightness in selected narrow spectral regions of the CO2 band.

    • You can convert a digital camera to work in the IR (4.25 μm I think); there’s an example:

    • Here’s my question, A Lacis. When you guys point your IR camera in the air and “see” >300W/M^2 of “back radiation” what do you think your imager is focused on?
      How effective do you think radiation is at transporting heat energy? If I heat something (like a volume of air), is 1% transported by radiation? 2%? I know back radiation exists, but it is small and does nothing measurable to our surface temperature.

      • It’s not an imager, there are no lenses, it’s a sensor which measures all the IR incident on it regardless of angle.
        Radiation seems to work rather well in a microwave oven.

      • My mistake, I thought we were counting infrared photons. You’re saying the IR camera does not “see” the 300W/m^2 emission the thermopile detects. Outstanding.

        I can’t wait for the industrial applications of your 300W/m^2 to come online…we’re barely able to get 300W/m^2 from sun-bathed solar panels. Free power, day and night, from thin air. I like it.

      • Ken,
        Those IR photons are indeed coming to the detector or any surface, but at the same time the surface is also emitting radiation, and usually more. Therefore you cannot produce energy out of that. Thus the net effect is cooling, but the radiation is still there.

        The IR sensor is also emitting more IR radiation than it receives when it is pointing up. It determines the downwelling radiation by knowing its own temperature and taking its own emission into account while processing the signal.

      • Radiation at Point A = Radiation from source B summed with radiation from source C? Yes.

        Radiation at Point A = Radiation from source B minus radiation from source C? Nope.

      • Yes, but so what?

        Net energy flow into volume A = energy flow into volume A – energy flow out of volume A.

        Both the flow into the volume and out of it may include radiative fluxes.

      • Pekka,

        is it possible for you to show us how a bottle of water could be frozen by radiating during a clear, calm evening where the temps do not go under 45F?

        Maybe this is perfectly consistent with the math and science you are trying to show us? It would seem to give a data point for computing primarily radiant fluxes, especially since AGW is supposed to show up at night?

      • I cannot tell where the limit for such cooling is, but that might be possible if the air is dry. In that case it requires isolating the bottle from radiation coming from the ground or from directions where the sky is warmer for some reason. This could be done using a deep parabolic mirror made from a material that reflects well in the infrared, as many well-polished metal surfaces do.

        The temperature that one can reach in that way depends naturally on the strength of convective warming of the bottle by the surrounding air.

        That surfaces can cool two or three degrees below the air temperature without specific measures is, however, common experience, at least here in Finland. The surface of a small pond does often freeze although the temperature of the air stays well above the freezing point at a normal measuring altitude of around 2 m.
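
        A rough sketch (Python) of the magnitude involved; the effective clear-sky radiating temperature here is an assumed illustrative number, not a measurement:

          sigma = 5.67e-8
          T_air, T_sky = 280.0, 255.0   # ~45 F air; assumed effective sky temperature, K
          eps = 0.95                    # surface emissivity, assumed

          net_loss = eps * sigma * (T_air**4 - T_sky**4)   # surface starting at air temp
          print("net radiative loss ~ %.0f W/m2" % net_loss)

        The surface then cools below the air temperature until this loss is balanced by convective and conductive gain from the warmer surrounding air, which is why the achievable dip depends so much on wind and on how the object is mounted.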

      • Pekka,

        No evaporation is allowed from the container, and any evaporation from the outside of the container would be from condensation from the air. The claimed temp differential so far is about 10 C under “normal” conditions. The following page describes the rudimentary, but effective, set-up:

        http://www.solarcooking.org/plans/funnel.htm

        toward the bottom.

        I am wondering if this can give us some kind of range of energy transmission from backradiation in an empirical form??

        In this setup conduction and convection have been minimized, so radiative effects should be primary and calculable??

        I wonder if there has been any high quality gear used to do similar experiments to find estimates??

        At least part of my confusion is that when the “sky” is measured, it would seem to be too cold for the high mass of CO2 that is concentrated at the lowest part of the troposphere. Why would the temperature of the higher layers have anything to do with what we see looking up, as their emissions should mostly be getting absorbed by this lower, denser, warmer mass of CO2, just like the ground’s IR emissions are supposedly absorbed by this same near-ground layer?

      • Kuhnkat,
        There has been some discussion about Kirchhoff’s law in this chain. This is a place where it can really be applied. Kirchhoff’s law tells us that the net radiative energy flow between two bodies is always from the hotter to the colder at every wavelength separately. At the wavelengths for which both bodies have a non-zero emissivity, the gross transfer in each direction is given by

        Emissivity1 * Emissivity2 * geometric factor * Planck law at the temperature of the radiating body

        The geometric factor tells how widely the bodies see each other, and it is the same in both directions. Thus the two directions differ only by the last factor, which grows with temperature.

        Kirchhoff’s law tells us that the two factors are both emissivities or both absorptivities, because the emissivity is always the same as the absorptivity. It is a property of the material and depends on wavelength for real materials. For some materials it is very close to one for all infrared, but for gases it is large at some lines and very small at other wavelengths.

        The second law of thermodynamics gives the same general answer, but it does not tell the details that Kirchhoff’s law, combined with Planck’s law, does.

        The sky can be considered as a body of some temperature (the temperature of the sky is, however, different at different wavelengths, because at some wavelengths we see through the air over large distances, either to the upper layers of the atmosphere or to the very cold outer space, while at some others, e.g. at the center of the 15 um line, we see only the very close air). Calculating the total cooling requires summing over all IR wavelengths. At some wavelengths the cooling is strong, at some others (e.g. 15 um) very weak.
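
        A minimal sketch (Python) of that gross-exchange expression at one wavenumber, with illustrative emissivities and a unit geometric factor; the net flow carries the sign of B(v,T1) – B(v,T2):

          import math

          def planck(nu_cm, T):
              # Planck radiance; nu in 1/cm, result in W/(m2 sr cm-1)
              h, c, k = 6.626e-34, 2.998e8, 1.381e-23
              nu = nu_cm * 100.0
              return 2*h*c**2*nu**3 / (math.exp(h*c*nu/(k*T)) - 1) * 100.0

          eps1, eps2, geom = 0.9, 0.6, 1.0          # assumed values
          nu, T1, T2 = 667.0, 288.0, 255.0
          flow_1to2 = eps1 * eps2 * geom * planck(nu, T1)
          flow_2to1 = eps1 * eps2 * geom * planck(nu, T2)
          print("net at %.0f 1/cm: %.4f, hot to cold" % (nu, flow_1to2 - flow_2to1))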

      • Pekka,

        thank you for the detailed answer.

        Unfortunately you repeated what has me confused.

        “…because we see through the air large distances at some wavelengths either to upper layers of the atmosphere or the very cold outer space, but only the very close air at some others, e.g. at the center of the 15 um line.”

        I am led to believe by a number of people that using an infrared detector centered on the 15 um line shows temps representative of the upper troposphere rather than what is close. That would not seem possible to me, but the ability to freeze water through pure radiation would make it appear that they are right. Yet we are told how much energy is in back radiation to slow the cooling of the surface.

        I must be misunderstanding something.

        Basically the water freezing would seem to show little power in the full spectrum of incoming, not just the CO2 bands, as opposed to the full spectrum outgoing.

      • kuhnkat,
        The cooling effect comes from wavelengths where the atmospheric absorption is weak. At those wavelengths the solid surfaces are losing much more energy by emitting radiation than they are absorbing from incoming radiation.

        At the center of the 15 um line the incoming and outgoing radiation are almost equal. Thus there is practically no heat loss at this wavelength.

        When looking from outer space the 15 um radiation is coming from the top of troposphere or even from stratosphere. The strength of 15 um radiation corresponds to the local temperature at all levels of troposphere as it is absorbed very close to the point where it was emitted.

      • Thank you for your patience Pekka. It seems to me that you are confirming the idea that CO2 has very little net effect on the outgoing radiation. At best it slows it slightly in its own bandwidths.

        What it apparently does is help the atmosphere change temp more quickly?

      • kuhnkat,
        The influence of CO2 is limited, but what I have stated is in full agreement with the estimates of the GHG role of CO2. The atmospheric window is wide enough for these effects after the influence of CO2 has been taken into account. Without any CO2 the cooling would be significantly stronger.

        This picture shows that the radiation from the surface can escape almost freely at most wavenumbers beyond 800 (1/cm) and to some extent also below 600 (1/cm). CO2 cancels the whole effect for wavenumbers 620–700 (1/cm) and has influence over the wider range 550–770.

        This picture is for U.S. Standard atmosphere. The details depend on the amount of water vapour in the atmosphere.

      • I’ll just add that the picture is not what is seen at the surface, but what is seen at TOA. When the radiation temperature at a certain wavenumber is maximal, we know that the radiation from the surface is escaping. In other cases this picture does not tell directly what we want to know here, but it gives good hints anyway.

  81. In the dialog occasioned by this post I have discussed the dry adiabatic lapse rate, mainly with Pekka Pirilä.
    Sure enough, radiation is not involved in the derivation of the formula.
    However, I kind of assumed it would play some minor role in the wet adiabatic lapse rate.
    Even there, though, convection and latent heat seem to be the physical cause of the observed rate.
    Why then do we have the IPCC obsession with the radiative effects?
    Try to find the “greenhouse gas” effect in this NASA account of the atmosphere.
    http://rst.gsfc.nasa.gov/Sect14/Sect14_1b.html
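
    The dry rate is indeed pure mechanics; as a one-line sketch (Python):

      g, cp = 9.81, 1004.0   # gravity, m/s2; dry-air heat capacity, J/(kg K)
      print("dry adiabatic lapse rate: %.2f K/km" % (g / cp * 1000.0))   # ~9.8 K/km

    The moist rate is smaller (roughly 5–7 K/km) because condensation releases latent heat, but it too comes out of thermodynamics, with no radiative quantity in the formula.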

  82. phil,

    No way on the digital camera. 1 um is doable, but when you hit 4 um, you’re in a different world than CCD and CMOS; many CCD cameras are rather low in sensitivity by the time you hit 1 um.

  83. Phil: Please at least take time to read the article and supporting references (most of which CAGWers will certainly agree with). The “grey body” concept (absorptivity/emissivity) is included. But even with those parameters, the equations are being used incorrectly – as the article shows, they entirely ignore the three-dimensional aspect of real substances – and consequently they ignore the heat that is absorbed (i.e., assume it is all re-emitted).

    • JAE – The article you link to displays a common misunderstanding of the absorption/emission concepts fundamental to understanding the Earth’s climate system and the role of greenhouse gases in maintaining temperatures above very frigid levels.

      A theoretical perfect black body absorbs all radiation at all frequencies. Its ability to absorb is defined by a term – absorptivity – that is assigned a value of unity, representing the principle that it can absorb all incident radiation. The same black body radiates energy as a function of temperature, and this ability – emissivity – is defined by a term assigned a value of unity as well. In fact, by Kirchoff’s Law, bodies in equilibrium have an emissivity equal to their absorptivity. This is true whether they are black bodies (emissivity and absorptivity = 1) or not. For example, an entity with absorptivity 0.5 absorbs half as much incident radiation as a black body (that is where the 0.5 comes from), and is capable of emitting half as much radiation as a black body at the same temperature. What happens to the other 0.5? It is either transmitted through the object or scattered/reflected away from the object.

      Does this mean that a black body will always emit as much radiation as it absorbs? The answer is NO, and it is this point that is a source of much confusion (including the confusion in the linked article). In isolation, where a black body can only acquire and discharge heat via radiation, the absorbed and emitted radiation will in fact be equal at equilibrium. However, if a perfect black body with energy acquired by radiation is in a circumstance where it can discharge energy by conduction and/or convection, it will emit less radiation than it absorbed. It still has an emissivity of unity, but its emission is reduced. Conversely, a black body heated by conduction without absorbing any radiation can still emit the energy by radiation – it is now emitting more radiation than it absorbed. It will emit in accordance with its temperature, regardless of how it acquired the heat.

      The basic principle is:
      Absorptivity = Emissivity
      Absorption need NOT equal emission

      The Earth’s surface (both land and oceans) is known to act much like a black body in the infrared wavelengths relevant to greenhouse gas warming, but not in wavelengths in the visible range where most solar energy resides. Substantial solar energy is scattered and reflected rather than absorbed, although most is absorbed – both in the atmosphere and by the surface (mainly the oceans).

      Radiation received by the Earth’s surface is absorbed, except for the quantity of solar radiation that is reflected or scattered. It is NOT immediately re-emitted as radiation. Rather, much of it is transferred to below the surface, mainly in the oceans, through turbulent mixing, convection, and conduction. Ultimately, at equilibrium, it will again be emitted from the surface at infrared wavelengths, with some also leaving via conduction (a small part) as well as latent heat transfer due to evaporation.

      In essence, the observation that the Earth’s surface can absorb and emit with absorptivities and emissivities close to unity in the infrared does not conflict with its ability to retain heat rather than immediately re-emit it. These principles are fundamental to the nature of radiative transfer underlying the greenhouse effect – an effect that is now well-documented through both theory and measurements at both the Earth’s surface and from space (see, for example, Dr. Curry’s thread on radiative transfer models for several relevant references).
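
      A toy numerical illustration (Python) of the emissivity-versus-emission distinction; the flux numbers are invented for the example, not taken from any real budget:

        sigma = 5.67e-8
        eps = 1.0                 # near-black surface in the IR
        absorbed_rad = 400.0      # W/m2 of radiation absorbed, assumed
        conducted_away = 50.0     # W/m2 carried off by conduction/convection, assumed

        # At steady state the surface radiates only what conduction leaves behind:
        emitted = absorbed_rad - conducted_away
        T = (emitted / (eps * sigma)) ** 0.25
        print("emits %.0f W/m2 at T = %.1f K; emissivity is still %.2f"
              % (emitted, T, eps))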

      • Fred,
        I agree completely with your message, but I have a minor comment.

        A thin surface layer of the ocean is an example of a situation where emission exceeds absorption. For infrared alone the difference is obvious, and the solar SW is not sufficient to remove this difference unless a rather thick layer is considered, as most SW is either reflected or penetrates some distance into the water.

        Thus it is not correct to say that mixing moves heat from the surface to deeper layers unless the surface is taken to be rather thick.

        The layers below the immediate surface, on the other hand, absorb SW but do not emit any radiation; they cool by convective mixing.

      • I agree. Emission is a surface phenomenon, while absorption occurs over a considerable depth.

      • What do you guys think of Dr. Pielke’s contention that the ARGO floats should have measured, assuming it’s in the ocean, the “missing heat” while it was in the 0-to-700-meter layer? It seems like you’re there. Trenberth seemed to think ARGO was not configured to accomplish that, and yet, whatever, Pielke’s contention seems to be reasonable.

      • In my view, Dr. Pielke’s position is reasonable if a long enough interval is allowed – e.g., a decade or more. Otherwise, the measurements will be confounded by ARGO technical and sampling problems as well as short term internal climate dynamic variations affecting heat distribution.

      • Fred,
        Just out of curiosity, how did you come to this view? What made 10 years the magic number for you? Why not 6 or 8 or 12?

      • Lol, I don’t much understand it, but Dr. Pielke seems to be chompin’ at the bit to take his “measured in ‘Jewels'” snapshot, like, right now, or even better yet: yesterday.

      • I would say the longer the better, but practicality dictates intervals on the order of a decade rather than two or three – at least, ENSO variations would tend to average out. Also, past bumps and dips in upper ocean measurements of OHC have tended to average out better over longer intervals.

      • More data is always better than less data. That’s a truism. The question is why do you think 10 years is the minimum? Because of the enormous size of the oceans, their ability to store heat and the fact Argo has global coverage, I think four years is more than enough time to show if a radiative imbalance exists. Convince me I’m wrong.

      • Fred, let me explain my view this way – The amount of heat stored in the oceans is so great that when El Nino comes along and burps a little heat into the atmosphere, it could set a new global surface temp record without a significant decrease in ocean heat content. If a real radiative imbalance is present it would show up year over year over year unabated.

      • Pielke agrees with me that 10 years is reasonable. Four year variations have been common in the past, but have averaged out to a clear multidecadal rising trend in OHC. Within any one year, of course, a radiative imbalance may either grow or disappear for reasons that are transient. In fact, within a year, the magnitude of TOA radiative imbalance changes seasonally. Persistent forcings (CO2, solar, etc.) require longer intervals for a trend to become evident.
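
        A toy Monte Carlo (Python) makes the record-length point: a steady trend plus ENSO-like year-to-year noise (both in arbitrary illustrative units) gives a fitted 4-year trend roughly four times noisier than a fitted 10-year trend:

          import random
          random.seed(1)
          trend, noise = 0.5, 1.0          # assumed true trend and annual scatter

          def fitted_trend(n):
              xs = range(n)
              ys = [trend*x + random.gauss(0, noise) for x in xs]
              mx, my = sum(xs)/n, sum(ys)/n
              return sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / sum((x-mx)**2 for x in xs)

          for n in (4, 10):
              est = [fitted_trend(n) for _ in range(2000)]
              m = sum(est)/len(est)
              sd = (sum((e-m)**2 for e in est)/len(est))**0.5
              print("%2d-yr record: fitted trend %.2f +/- %.2f" % (n, m, sd))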

      • That is true only when you’ve got apples and oranges.

        Emission is a surface thing because it is IR emission, and water absorbs very strongly in the IR. Incoming SW has great penetration because it is not as strongly absorbed, and at ~300 K there is no SW emission. Incoming LW is also a ‘surface thing’ because of the strong absorption of IR by H2O.

      • (see, for example, Dr. Curry’s thread on radiative transfer models for several relevant references).

        Unfortunately, the useful parts of those threads are buried under a lot of ego-gratifying noise produced by certain bombastic commentators. I remember a couple of useful links provided by Dewitt Payne:
        SpectralCalc
        look inside the book
        AERI FT-IR spectrophotometer
        details on HITRAN
        line-by-line
        pictures on Venus

      • Fred,
        I saw some problems with your post. Kirchhoff’s law is valid by wavelength, in the sense that if the incoming radiation is at the same characteristic temperature as the gas parcel itself, then it will be balanced. If the incoming radiation is not, then the incoming will be absorbed at the absorption rate by wavelength, and the emission will be that same rate times the Boltzmann-distribution emission of a BB radiator at the temperature of the gas parcel. Unfortunately here I’m working from memory and may have trouble explaining just what I’m trying to say. If the gas parcel temperature is greater than that of the incoming radiation, this will result in emission lines. If the gas parcel temperature is less than that of the incoming radiation, then there will be absorption lines. If they are the same T, then there will be neither; you’ll simply see the BB emission curve as a function of wavelength.

      • Kirchhoff’s Law requires absorptivity and emissivity to be equal at a given wavelength. It does not require absorption and emission to be equal at the specified wavelength. The Earth has fairly high absorptivity/emissivity at the wavelengths of visible light, but it absorbs much more visible light than it emits (almost none of the latter), because the visible emission spectrum predominates at the temperature of the sun’s photosphere (about 5800 K), whereas at the Earth’s temperature of about 288 K, a body with that same emissivity radiates primarily in the infrared. If the sun were at 288 K, it would do the same.

        I’m not exactly sure what point you were making, but perhaps it was the same as my statement above. If not, you should clarify.
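
        To put a number on that point: a quick Planck comparison (Python) at 0.5 um shows why a 288 K body’s visible emission is utterly negligible even with a nonzero emissivity there:

          import math

          def planck_lam(lam_um, T):
              # Planck spectral radiance, W/(m2 sr m)
              h, c, k = 6.626e-34, 2.998e8, 1.381e-23
              lam = lam_um * 1e-6
              return 2*h*c**2 / lam**5 / (math.exp(h*c/(lam*k*T)) - 1)

          print("B(0.5 um, 5800 K) = %.2e" % planck_lam(0.5, 5800.0))
          print("B(0.5 um,  288 K) = %.2e" % planck_lam(0.5, 288.0))   # ~41 orders smaller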

      • I meant that Kirchhoff’s law will be valid at each and every wavelength – given that the gas T and the BB emission T are the same.

        First off, visible light is being emitted by an object at an apparent T of 5800 K, a far cry from 288 K. If somehow one could heat up the Earth to 5800 K without changing anything associated with the visible-light albedo (still oceans and clouds present), then the Earth would have an emissivity of 0.7 and a reflectivity of 0.3. By the time one gets to the significant IR wavelengths, both the absorptivity and the emissivity approach 1.0.

        Most of what I was saying is along the lines of your last post above.
        I was also mostly talking of a parcel of gas, in the atmosphere or elsewhere, in front of a continuum source emitting a BB curve at a temperature T. Of course a gas parcel, unlike a solid, will radiate upward and radiate the same amount downward. The geometry of the situation means that the gas parcel cannot maintain the same temperature as the source of the continuum, which is below. Conservation of energy will establish a lapse rate. Going up, sooner or later you reach the point where there is optical transparency and there begins to be a loss of power. Of course, for conservation, all energy or power inputs must be included.

      • cba,
        You have misunderstood Kirchhoff’s law. You describe the physics correctly (or at least I did not notice anything wrong on a fast reading), but it is in complete agreement with Kirchhoff’s law and is actually exactly what Kirchhoff’s law requires.

      • I comment on the correct use of Kirchhoff’s law a couple of messages above (January 26, 2011 at 1:56 am).

      • Pekka,

        You’re not making much sense there. If I am correctly describing Kirchhoff’s law, then how do I have it wrong? Actually, Kirchhoff’s law predates many things and was not originally applied as it must be now, by wavelength or by frequency. It’s the tidbit about thermal equilibrium that ties the blackbody continuum radiation involved to the temperature of the body involved. The sun is not in thermal equilibrium with the Earth, because one has a ‘surface’ T of almost 6000 K while the other is at around 288 K. Hence the solar radiation (6000 K black body) is not in thermal equilibrium with the Earth’s emissions, despite the fact that there tends to be a radiative energy balance on average; but that is not Kirchhoff’s law.

      • cba,
        After looking at further literature, I have learned that there are so many different versions of Kirchhoff’s law for radiation that my statement was not justified. Apparently Kirchhoff published the law in the form: “Radiation inside a closed isothermal cavity is independent of the material of the walls, and its intensity and spectrum depend only on the temperature.” From this one can derive various corollaries that are the versions of Kirchhoff’s law now seen in the literature.

        His original law was strongly linked to discussing spectra, as Kirchhoff stated specifically that the spectrum inside the cavity is a function of the temperature only. The form of the function was given over 40 years later by Planck.

        In any case my impression is that we have no disagreement here about the physics. The difference only concerned the question of what part of physical knowledge should be called Kirchhoff’s law of radiation.

        I have not seen the original paper, but what I write above is a conclusion based on several articles that I found on the net.

      • Pekka,
        It is the isothermal cavity version of Kirchhoff’s law that I find most useful in checking the validity of the various formulations for thermal radiative transfer. Basically, the radiation emerging through a pinhole from an isothermal cavity is going to be Planck radiation – whether the cavity is empty, or contains absorbing, scattering, or reflecting material – as long as all of the material inside the cavity is in thermodynamic equilibrium at the cavity temperature.

        Thus, in viewing a reflecting surface (located inside the cavity) through the pinhole, the emission from the surface plus the reflected component must add up to precisely Planck radiation. Similarly, a slab of CO2 at some pressure P, when viewed through the pinhole in any viewing geometry, will exhibit pure Planck radiation. This means that spectral emission plus spectral transmission must add up to precisely Planck radiation, or that emissivity is equal to absorptivity.

        The same considerations apply for scattering material such as a cloud layer of any arbitrary optical thickness inside the isothermal cavity. The sum of the reflected, directly transmitted, diffusely transmitted and self-emitted components must all add up to reproduce the pure Planck spectrum for radiation that emerges from the isothermal cavity.

        The gas layer or cloud layer will exhibit the same radiative performance characteristics when it is taken out of the isothermal cavity and placed within the ambient atmosphere (under LTE conditions).
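
        The slab case is a one-line identity worth checking numerically (Python): emission B(v,T)[1 – exp(-TAU)] plus transmission B(v,T) exp(-TAU) returns exactly B(v,T) for any optical depth:

          import math

          def planck(nu_cm, T):
              h, c, k = 6.626e-34, 2.998e8, 1.381e-23
              nu = nu_cm * 100.0
              return 2*h*c**2*nu**3 / (math.exp(h*c*nu/(k*T)) - 1) * 100.0

          nu, T = 667.0, 288.0
          B = planck(nu, T)
          for tau in (0.1, 1.0, 10.0):
              total = B*math.exp(-tau) + B*(1.0 - math.exp(-tau))
              print("tau %4.1f: emitted+transmitted = %.6e vs B = %.6e" % (tau, total, B))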

      • A Lacis,

        Your last comment doesn’t sound right. If the gas layer is taken out of the isothermal cavity, it is no longer going to be in thermal equilibrium at the temperature of the BB continuum radiation. While inside, it was absorbing and emitting at the continuum T. Once outside, it will emit at its own temperature and is probably not going to be able to maintain the same temperature as the inside of the cavity. There we suffer a loss of thermal equilibrium, and hence of the equality of emitted and absorbed radiation. If we have a thin slab at the opening of the cavity, we have radiation coming from there to heat it up. We then have radiation emitted back into the cavity and radiation emitted away from the cavity, each at the characteristic temperature. Assuming no radiation coming in from the outside direction, that means 1xE in and 2xE emitted – requiring a drop in the temperature of the gas for power balance. Only with two identical cavities in thermal equilibrium on each side of the thin slab would the incoming energy permit the gas slab to also maintain thermal equilibrium with the cavities.

        The only thing LTE will buy you is that the CO2 molecules will maintain the same temperature in the gas slab as the O2 and N2.

        Am I misunderstanding something here?

      • cba,

        The atmosphere has a vertical temperature gradient that needs to be modeled. We subdivide the atmosphere into sufficiently thin layers such that each layer can be approximated as being isothermal (at its local atmospheric temperature). This reference layer will have a slightly cooler layer above it, and a slightly warmer layer below. This is what is referred to as local thermodynamic equilibrium (LTE).

        The assumption is that the radiative properties of this reference layer will be the same as the radiative properties that would be in force if this same reference layer were placed in an isothermal cavity having the same temperature (and pressure) as the local atmospheric values. This defines the thermodynamic equilibrium vibrational-rotational states that define the spectral absorption/emission properties of that gas at that pressure and temperature (which is what is done for line-by-line calculations).

        In its atmospheric context, the reference layer might or might not be in radiative equilibrium. If it is, it will be emitting twice its absorptivity (in the upward and downward direction). And, it will be absorbing slightly higher temperature radiation coming from below, and slightly lower temperature radiation from above, in proportion to its absorptivity.

        If the reference layer happens to emit more than it absorbs, it will be in a state of cooling. At the next time step, this atmospheric reference layer will have a lower temperature than it started out with. If the absorbed and emitted fluxes happen to be equal, the reference layer is in radiative equilibrium, and will maintain its original temperature – unless it is also being affected by other physical processes.
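
        A minimal time-stepping sketch (Python) of that logic, for one gray reference layer between a fixed warmer neighbor below and a fixed cooler one above; the absorptivity, heat capacity and timestep are arbitrary illustrative values:

          sigma = 5.67e-8
          a = 0.3                          # layer absorptivity = emissivity (gray)
          T_below, T_above = 290.0, 270.0  # neighbor temperatures, K (held fixed)
          T, C, dt = 284.0, 1.0e5, 3600.0  # start T, K; heat capacity, J/(m2 K); step, s

          for step in range(200):
              absorbed = a * sigma * (T_below**4 + T_above**4)   # from both sides
              emitted = 2.0 * a * sigma * T**4                   # up and down
              T += (absorbed - emitted) * dt / C                 # cools if over-emitting

          print("relaxed layer temperature: %.1f K" % T)   # ~280.5 K: 2T^4 = Tb^4 + Ta^4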

      • And how many layers are you using over what range of altitudes?

      • cba,

        For most of our climate modeling applications we use between 20 and 50 atmospheric layers. But in our GCM radiation modeling, we make use of a radiative transfer formulation that permits specifying a temperature gradient within every atmospheric layer, to adequately resolve the temperature profile. Radiative models that use isothermal layers may need several hundred layers, or more.

      • And how does a T gradient within a shell layer work? I thought the object was supposed to be to have a shell with uniform T, P, and concentrations.

  84. “I agree. Emission is a surface phenomenon, while absorption occurs over a considerable depth.”

    LOL. Then your epistle to me above is not quite correct?

    Fred, et al. It looks to me like you guys are putting your heads in the sand and are NOT dealing with ANY of the alternative explanations of why planetary surface temperatures are much higher than simplistic interpretations of the SB equations predict. Come on, please address the whole paper and concept, rather than picking some little issue you THINK is missing or wrong (aka a straw man). It turns out, if you will actually read this stuff, that we don’t need a GHE to explain the temperatures on the surface of this planet or other planets. And nobody has yet demonstrated that the GHE even exists! And the “measurements” of radiation do not prove ANYTHING, other than that IR exists. Of course it exists. I have used it to help identify chemicals!

    • JAE – I responded to your earlier comment because I don’t believe you are the only one who has been confused by the distinction between emissivity and emission, and between absorptivity and absorption. I’m hoping a larger audience will find the explanation helpful, including readers who don’t participate in the discussion.

      There was nothing incorrect in my comment – Pekka and I were clarifying the use of the word “surface”, but we agree on the physics.

  85. Fred, I appreciated the clarification which actually seemed in reasonable accord with my limited understanding. Now, can you show us how the conduction into the earth and reversing energy fluxes as the direct SW goes away each night keeping temperatures elevated at the surface are represented in the models?

    • Not being a modeler, I can’t provide the specifics, but diurnal variation is explicitly addressed in the models. As an example, they show how surface cooling is greater under cloud-free conditions, how the ocean cools less than land (because water’s high specific heat capacity and turbulent mixing augment the minor heat storage capacity achievable by conduction), and how atmospheric temperatures can be moderated under such conditions by advection of warmer air from outside the local area.

      Regarding some comments below by Bryan, JAE, and Phil.Felton: lapse rates are a critical component of basic climate change physics as well as of model outputs. As Phil.Felton indicates, lapse rates in the absence of greenhouse effects and radiative/convective adjustments have essentially no ability to determine surface temperature, which must be in equilibrium with absorbed solar radiation as described by the Stefan-Boltzmann (SB) law. Gravitation can’t change the principle that energy absorbed at the surface must equal energy emitted, and that the latter is fixed by the SB equation. Given a specified surface temperature, gravity does of course determine how atmospheric temperature varies with altitude.

      • If somebody wants to read a bit more extensively on many issues brought to this discussion, SoD provides a good starting point:

        http://scienceofdoom.com/2010/12/07/things-climate-science-has-totally-missed-convection/

      • Pekka,

        In the Moon Greenhouse article, SoD made the statement that you can see the greenhouse effect in the difference between the TOA W/m2 and surface W/m2 numbers – something like 200 and 300. As this was quite a misleading statement, I am not going to take the chance of being educated by him in things I cannot evaluate at all. He apparently has a bias that shows in his work.

      • Fred Moolten

        …..”Regarding some comments below by Bryan, JAE, and Phil.Felton, lapse rates are a critical component of climate change basic physics as well as model outputs.”…….

        Not meaning to be polemical here, just interested.
        If the dry adiabatic lapse rate is taught to climate science students, don’t they get a little alarmed when the major determinants of the temperature profile turn out to be the surface temperature and the gravitational field strength?
        As a teacher I would be lost for words if one of my students, after studying
        http://rst.gsfc.nasa.gov/Sect14/Sect14_1b.html
        said to me, “Sir, I’ve just read a very interesting article covering the main features of the troposphere and couldn’t find any reference to greenhouse gases.”
        Kevin Trenberth is famed for saying “Where’s the missing heat?”
        Who is going to be famous for saying “Where’s the missing Radiative Transfer Equations?”

      • Thank you Fred.

  86. From W. C. Gilbert’s paper:

    dT/dh = -g/Cp = -9.8 K/km

    which is a temperature profile often observed in our atmosphere on a daily basis. This static temperature lapse rate (in this model atmosphere) is identical to the dry adiabatic lapse rate theoretically derived in meteorology for a convective adiabatic air parcel. In both situations it is solely a function of the magnitude of the gravitational field and the heat capacity of the atmospheric gas, and nothing else. And this relationship aptly describes the bulk of the 33ºC so-called “Greenhouse Effect” that is the bread and butter of the climate science community.

    It is remarkable that this very simple derivation is totally ignored in the field of Climate Science simply because it refutes the radiation heat transfer model as the dominant cause of the GE. Hence, that community is relying on an inadequate model to blame CO2 and innocent citizens for global warming in order to generate funding and to gain attention.

    Full derivation given in:

    “Atmospheric Temperature Distribution in a Gravitational Field”, William C. Gilbert.
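
    The quoted number is easy to check dimensionally; a two-line sketch using standard values for g and the specific heat of dry air gives:

    ```python
    g = 9.81     # gravitational acceleration, m s^-2
    cp = 1004.0  # specific heat of dry air at constant pressure, J kg^-1 K^-1

    lapse = -g / cp * 1000.0  # K per km
    print(lapse)              # ~ -9.8 K/km
    ```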

    • That is the limiting profile reached in a vertically stable atmosphere – the lapse rate. It tells you nothing about the temperature of the atmosphere; that depends on heat transfer to the atmosphere, mostly by radiation. Left to its own devices, radiation sets up an unstable profile, which is modified by convection. Increasing the heat transferred to the atmosphere increases the temperature but maintains the same vertical gradient.
      This relationship does not describe any of the 33ºC so-called “Greenhouse Effect”, but it is an integral part of the physics of the atmosphere. Where you get the strange idea that climate science ignores the lapse rate is beyond me!

    • Yes, what Bryan says. The paper that I linked shows that the temperature at the surface of every planet that has an atmosphere is much higher than the BB equations indicate (interestingly, by 33 C for Earth). This is because of gravity and heat storage, not a GHE. The equation Bryan provided has only two parameters, g and Cp. There is no “GHE” or “radiation” variable.

    • p.s. As shown in the linked article, the BB temperature for Earth represents the temp. at 100 millibars, not at the surface.

    • Bryan,

      By proclaiming that “It is remarkable that this very simple derivation is totally ignored in the field of Climate Science simply because it refutes the radiation heat transfer model as the dominant cause of the GE.”, are you reporting on something that you have observed as the result of your study and analysis of climate science?

      Or perhaps you were simply quoting something from someone else who unfortunately happened to be woefully uninformed, or might even have been sowing deliberate misinformation as to what is actually being done in the field of climate science and on how the real world operates.

      Since atmospheric radiation operates at virtually the speed of light, radiative heating and cooling are the fastest physical processes in the atmosphere. In climate modeling, at every radiation physics time step, for that particular absorber and temperature distribution, an instantaneous radiative heating and cooling rate profile is calculated (solar radiation heating, thermal radiation cooling). If the atmosphere had negligible heat capacity, each atmospheric layer would quickly heat up, or cool down, until each layer was in radiative equilibrium where solar heating would just balance thermal cooling. Such a radiative equilibrium atmosphere (for current climate composition) would be about 66 K warmer than the black body global equilibrium temperature of 255 K (instead of the actual 33 K).

      But the atmosphere has substantial heat capacity, and before the surface temperature ever gets that hot, dry convection sets in and transports heat upward to make sure that the dry convection stable lapse rate (dT/dh = -9.8 K/km) is not exceeded. Dry convection has a fairly fast response – about like that of a hot air balloon.

      Somewhat slower is the atmospheric temperature response to the moist adiabatic lapse rate limit. This is the roughly 4 – 6 K/km lapse rate that gets established in the atmosphere as the result of heat being transported upward in the atmosphere due to water vapor evaporation/condensation. All of these “weather processes” are being explicitly calculated in climate GCM simulations.

      It is these dry and moist convective processes, together with larger-scale advective energy transports and the radiative heating and cooling, that determine the temperature structure of the atmosphere. This is where the 33 K value for the terrestrial greenhouse effect comes from: either from the radiative transfer modeling result based on the current climate temperature and atmospheric absorber distribution, or from the observational result (in principle) as the radiative flux difference between the global mean flux emitted by the ground surface (390 W/m2, 288 K equivalent) and the outgoing flux at the top of the atmosphere (240 W/m2, 255 K equivalent).
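
      The 390 vs. 240 W/m2 bookkeeping in that last sentence can be checked directly by inverting the Stefan-Boltzmann law; a minimal sketch:

      ```python
      SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def t_eff(flux):
          """Blackbody-equivalent temperature (K) for a flux in W/m^2."""
          return (flux / SIGMA) ** 0.25

      print(t_eff(390.0))                 # ~288 K, mean surface emission
      print(t_eff(240.0))                 # ~255 K, outgoing flux at TOA
      print(t_eff(390.0) - t_eff(240.0))  # ~33 K, the quoted difference
      ```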

      • A Lacis
        I find the current IPCC overemphasis on the radiative transfer of heat a bit of a puzzle.
        R. W. Wood conducted his famous experiment over a hundred years ago and concluded that the radiative effects of the atmosphere’s gases were almost insignificant at atmospheric temperatures.
        As far as I know this conclusion has not been challenged.
        On this thread I was involved in a discussion about the dry adiabatic lapse rate.
        This shows that the major determinants of the Earth’s temperature profile are the surface temperature and the gravitational field strength, which I already knew.
        However, the discussion made me look up several accounts of the atmosphere’s structure and look for the other drivers that cause a departure from the dry adiabatic LR.
        The next important driver was given as the latent heat of vaporisation of water; convection was also regarded as very significant.
        Hardly any account regarded greenhouse gases as significant, except for radiating long wavelength EM radiation to space.
        This is a fairly typical account:
        http://rst.gsfc.nasa.gov/Sect14/Sect14_1b.html
        I must say I was a bit surprised that the greenhouse gases did not seem to matter even in a small way.
        The quoted 33C greenhouse gas effect is based on a most contrived analysis and does not stand up to realistic scrutiny.

        Basically, the project was to find out whether it made any sense to add infrared absorbers to the polyethylene plastic used in agricultural greenhouses.

        Polyethylene is IR transparent, like the rock salt used in Wood’s experiment.

        The addition of IR absorbers made the plastic equivalent to “glass”.

        The results of the study (page 2) show that:

        …“IR blocking films may occasionally raise night temperatures” (by less than 1.5 C), but “the trend does not seem to be consistent over time”.

      • Part of my last post was missing.
        It described a modern experiment that seems to agree with the conclusions of the Wood experiment:
        http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf

      • I’d like to thank Pekka, cba, Andy Lacis, and others for the informative discussion in the comments above. Although I’m familiar with Kirchhoff’s law and radiative transfer, I always learn something new from these exchanges, and the contributions of a climate science professional like Andy are particularly valuable.

        As to Bryan’s comments, I have the sense that we’ve come to the end of the road, now that he is repeating claims about greenhouse effects derived from a misinterpretation of data from real greenhouses, as pointed out to him on another thread. I don’t say that to cause offense, but rather to spare all of us, including Bryan, an exercise in wasting time. It seems to me that we are not going to convince him to relinquish his opinions, and that he is inadequately qualified by knowledge of climate dynamics to convey useful information to us.

        I’m prepared to move on beyond replying to the above comments rather than continue in the same vein, barring some unexpected new source of relevant information, in which case I’ll look forward to further discussion.

      • Fred Moolten

        Who are the “us” you refer to?

        ……..”It seems to me that “we” are not going to convince him to “…..

        I had no idea that I was addressing a corporate entity.
        I certainly do not speak for anyone but myself.
        I think other readers should be told of the real structure of this website.

        As for myself, whether someone finds something that they wish to debate with me or not, is up to them.
        I do find those who post “I’m not speaking to you” rather childish, though.

      • Bryan – I’m sorry you’re offended – that wasn’t my goal. Rather, my purpose was simply to spare others from what they may end up seeing as wasted time. Others will make up their own minds about debating you, but I’ll be surprised if they disagree with my conclusion that “we” are not going to convince you to relinquish your opinions. Most of all, I was trying to alert them that, based on earlier threads, my conclusion had a sound basis in experience.

        What might be useful is to suggest to you some source material for augmenting your background in this area. I have some ideas, if you’re interested, but the other participants may have better ones.

      • Fred Moolten

        I gather you speak for yourself.
        My advice to you is to practice being a little less condescending.
        Nobody twists your arm to reply; it’s up to you.

      • Your point is well taken. The truth is that I find it hard to achieve a proper balance when discussing these topics with someone who, I’m convinced, is unwilling to give up misconceptions based on a shaky scientific understanding, but who at the same time deserves the respect each of us owes the other. My assessment of your level of understanding may be wrong, of course, but my judgment on these matters has often been reinforced by the views of other very knowledgeable participants. What I try to do is withhold judgment until I consider further exchanges most likely to lead only to a repetition of what came previously; I then prefer to cease. In this particular case, I was relying on the discussions from a previous thread involving greenhouse experiments. I was troubled that you brought the same assertions from that thread to this one, ignoring the points I and others made there as though they were totally inconsequential. I found that a reason to alert others, such as Andy, who is probably unaware of the previous thread, to the probability that the same thing would happen here.

        That’s the best I can do.

      • Fred Moolten

        …. “I was troubled that you brought the same assertions from that thread to this one”….

        Apparently sceptics are not allowed repetition: once some attempted answer is given by an IPCC proponent, that should be the end of the matter.
        However, the repetition rule does not apply to the proponents themselves.
        How many times has the 33C Greenhouse Effect been trotted out in this thread alone?
        Yet the quoted 33C greenhouse gas effect is based on a most contrived analysis and does not stand up to any realistic scrutiny.
        All my posts on Judith’s site were around 5 themes:
        1. Wood’s experiment and modern experimental support.
        2. The contribution to the climate debate of the G&T paper.
        3. The dry adiabatic lapse rate.
        4. The lack of any apparent major radiative effect in the temperature profile of the troposphere.
        5. The different definitions of Heat, Work and Energy used in climate science, which conflict with traditional physics usage.
        Any reasonable person reviewing the exchange around these topics would conclude that perhaps I was correct, as the counter arguments were far from convincing.

      • Thanks for the explanations about the lapse rate(s). From some of those, and from the mention in the article that radiative transparency depends on the quantity (mass) of gas, not on its spatial extent (the same absorption and emission from 1 kg of air, whether that kg is a 1 m thick column near the ground or a 10 m thick column high up), I think I now have a more useful simple model of the greenhouse effect in my mind, compared to the simple ground / visibly-transparent-but-IR-opaque shell “atmosphere” that is used for popularization (and simple back-of-the-envelope computations, unfortunately):

        I imagine the atmosphere above the ground with each layer referenced not by its height but by its average pressure (which is equivalent to the mass of air above it). There is “incoming” short wave energy, which is solar energy minus the directly reflected part ((1-albedo)*S). There is outgoing long wave energy, radiated at a certain average pressure level p_r (equivalent to a certain overlying mass or, indirectly, to a certain average altitude; I prefer mass or pressure because it is less variable than height, and is more directly related to optical depth, given that it is the quantity of gas that absorbs/emits, not its extent). The outgoing radiation must equal the incoming short wave radiation, so together with disk/sphere geometrical considerations this gives the temperature T_r at p_r. The ground temperature T_g is best derived from the temperature at p_r using the average lapse rate (which varies depending on the water vapor content), with T_g = T_r*(p_g/p_r)**lapse(humidity)…
        The CO2 effect is thus to reduce p_r (more CO2 increases the IR optical depth, so the average height of emission increases). The main feedback is related to H2O, meaning that when CO2 reduces p_r, the temperature increases below this height, which may increase H2O.
        Increased H2O can have multiple effects:
        1) decrease further p_r (it is an average, so even if the effect is below the average height, it can still affect average height): positive feedback
        2)increase albedo: negative feedback
        3)decrease lapse rate: negative feedback
        4)increase p_g (more atmosphere): very small??? positive feedback

        The analysis is far from perfect compared to a fully coupled analysis… but, imho, it is a simple model that is closer to what really happens than anything I have encountered yet. Feedbacks 3) and 4), for example, are not explicitly mentioned afaik… So, what do you think about this mental model, does it have some value?
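
        One way to test-drive this mental model is to plug in numbers. The sketch below assumes the dry-adiabatic exponent kappa = R/cp ≈ 0.286 in place of “lapse(humidity)”, and an illustrative emission level p_r = 500 hPa; both are assumptions for illustration, not fitted values:

        ```python
        # Toy version of the commenter's pressure-coordinate model.
        kappa = 0.286              # dry-adiabatic R/cp, an assumed stand-in
        p_g, p_r = 1000.0, 500.0   # surface and assumed emission-level pressure, hPa
        T_r = 255.0                # emission temperature fixed by (1-albedo)*S/4

        T_g = T_r * (p_g / p_r) ** kappa
        print(T_g)  # ~311 K: the dry exponent overshoots the observed 288 K,
                    # so a moist (smaller) effective exponent is needed, which
                    # is exactly the commenter's negative feedback 3)
        ```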

  87. A. Lacis,

    I have a post above to you dated “January 25, 2011 at 8:01 am” with comments and a question or two. I don’t know if you missed seeing the post or chose not to respond.

  88. Harold Seelig

    I didn’t mean to drop out of sight. I find this discussion quite refreshing in that there are a lot of rational people using scientific methodologies to explain their views.
    I’m presently trying (ugh) to concoct a double integral to model solar insolation and heat absorption/re-radiation from a surface. Heat is absorbed via a simple surface, with the heat going into a liquid and a gas. As a simple model, I’m considering the surface, the liquid and the gas to all be at equilibrium (infinite heat transfer…..aren’t assumptions wonderful?).

    Not being a pure Scientist, but rather a practical Engineer, I find it helpful to use simple math to get an ‘order of magnitude’ model before committing to head-long immersion in a venue. But, I’m realizing it has been a little while since I needed to integrate or differentiate anything.

    So, I’ve got a sketch of a ball, with an equator and a ring around the top. Inside the integral, I have sin(a) for the effect of the angle of the element from the top, and sin(b) for the angle from one horizon. I plan to integrate along one line of latitude (using N as 0 deg), then integrate from N to the equator. The southern hemisphere will mirror the northern one.
    After the “daylight heating”, I’ll have the model go through one 12 hour night of cooling.
    My problem is deciding how much water and gas (neither having a greenhouse effect…..the gas is absolutely transparent to all wavelengths longer than UV-C). I’ve chosen to use 1000 lbs of gas, and 200 lbs of water. As stated, I need a simple model, without all the convection and so on. I really want to know the absolute effect of this “thermal flywheel”, so it can be subtracted from 33 deg, to know the real “greenhouse effect”.
    Most everyone seems to agree the “atmospheric effect” plus “land/water heat absorption effects” are equal to 33 deg…..the beneficial warming we have over a planet with our albedo and/or our emissivity, at our distance from the Sun, and with a perfectly insulating surface, and without atmosphere.

    So, what mass of gas (with a Cv of 0.241 B/lb.F and density of 0.075 lbs per ft3 at 70F and 1 atm) and mass of liquid (with a density of 1 and Cv of 1 B/lb.F) should I use? And, I’m using 1000 w/m2 (317 B/hr.ft2) as the noon/equator heat input……Is this OK?
    Thanks. Harold
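
    Before wrestling the double integral by hand, it may help to check the geometry numerically. This sketch averages the instantaneous insolation S0*cos(zenith) over the whole sphere (assuming zero axial tilt, as in Harold's mirrored-hemisphere setup); the analytic answer is S0/4:

    ```python
    import numpy as np

    S0 = 1000.0  # W/m^2 at noon on the equator, as assumed above

    n = 2000
    lat = np.linspace(-np.pi / 2, np.pi / 2, n)        # latitude
    ha = np.linspace(-np.pi, np.pi, n)                 # hour angle
    LAT, HA = np.meshgrid(lat, ha)
    mu = np.clip(np.cos(LAT) * np.cos(HA), 0.0, None)  # night side -> 0
    w = np.cos(LAT)                                    # area weight of each cell
    print(S0 * (mu * w).sum() / w.sum())               # ~250 W/m^2, i.e. S0/4
    ```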

  89. A Lacis says:

    “Since atmospheric radiation operates at virtually the speed of light, radiative heating and cooling are the fastest physical processes in the atmosphere.”

    One thing I definitely agree with. So please look at this scenario:

    Pick a spot on the Great Plains and go there for the month of July. The air mass there varies widely in humidity while the soil often stays dry as toast. Take along a good thermometer, barometer and psychrometer (or some other means of measuring relative humidity). Record max T, min T, pressure, and RH for each hour of each day. Calculate absolute humidity for each hour of each day. For all days where there was no precipitation for the past 3-4 days (no evaporative variable), plot max temp., min. temp., and avg. temp. for each day against absolute humidity and report the results. If there is a greenhouse effect, ALL of the temps will be highest when the absolute humidity is highest. That would prove the GHE. Why has nobody done this?
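
    For what it is worth, the proposed test is easy to script once hourly records are in hand. The sketch below is hypothetical throughout – the file name, the column names, and the dry_spell flag are all assumed, and the Magnus formula is one common choice for converting RH to absolute humidity:

    ```python
    import numpy as np
    import pandas as pd

    def abs_humidity(temp_c, rh_pct):
        """Water vapour density (g/m^3) from temperature (C) and RH (%),
        using the Magnus approximation for saturation vapour pressure."""
        es = 611.2 * np.exp(17.67 * temp_c / (temp_c + 243.5))  # Pa
        return 1000.0 * (es * rh_pct / 100.0) / (461.5 * (temp_c + 273.15))

    # Hypothetical hourly file with columns: date, temp, rh, dry_spell
    df = pd.read_csv("station_july.csv", parse_dates=["date"])
    df["q"] = abs_humidity(df["temp"], df["rh"])
    daily = df[df["dry_spell"]].groupby(df["date"].dt.date).agg(
        t_max=("temp", "max"), t_min=("temp", "min"),
        t_mean=("temp", "mean"), q=("q", "mean"))
    # Correlate each daily temperature statistic with absolute humidity
    print(daily[["t_max", "t_min", "t_mean"]].corrwith(daily["q"]))
    ```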

  90. Jae – In case Andy Lacis doesn’t get a chance to respond, let me suggest a brief answer. What you suggest would work well if utilized on a global scale, with the results integrated to yield meaningful statistics. In fact, climate models do some of this based on observational inputs from around the globe. In a very localized area, you’re seeing the effects of weather patterns, and so drawing inferences about climate is problematic because of too many variables other than the ones you mention. These include the fact that much of the humidity and temperature you measure is imported from elsewhere via winds, and much of the effects of changing atmospheric humidity even in one region operate over thousands of square kilometers, within which temperature may vary greatly from one place to another. In addition, the ocean, which occupies about 70 percent of the Earth’s surface, behaves in a quantitatively different manner from the land.

    That is just a sampling. Without going into all the possible variations, let me just illustrate with one. It’s entirely possible that you might see a higher temperature on a more humid day, but there are good reasons you might not. One is simply that humidity tends to be associated with cloudiness and thus reduced exposure locally to the heat of the sun. Dry days are more likely to be cloud-free and therefore hotter. They are almost always cooler at night than cloudy days, however, because the water in clouds, like water vapor, has “greenhouse” (infrared absorbing) properties that retard the escape of heat to space. In fact, night-time temperatures are a better measure of greenhouse effects precisely because variations in sunlight are eliminated.

    I’m sure other participants can add further to the list of confounding variables.

  91. “I’m sure other participants can add further to the list of confounding variables.”

    LOL. And others can also add further obfuscations to hide the FACT that the GHE simply cannot be AND HAS NOT been demonstrated empirically and, therefore, has not yet made it beyond simple hypothesis to real scientific theory. No matter what I could offer as a “test,” your ilk have an excuse as to why it won’t suffice. That is pea/thimble, sir! Not science. It is really up to YOU guys to provide the “test” of the hypothesis. Are you agreeing with Trenberth that the null hypothesis has to be changed so that the “deniers” have to prove a negative?

  92. Hope you slept well, LOL. When you wake up, we can continue.

    “Dry days are more likely to be cloud-free and therefore hotter. ”

    I forgot to specify that only those measurements made on clear days are “counted.” You will STILL not see a positive correlation between abs. humid. and temp! Do it for a thousand locations/times, and I theorize that you STILL will not see a positive correlation between absolute humidity and temperature. If I am correct, there is no GHE, correcto?

    Please admit, at least, that there has been no empirical demonstration of the GHE.

    • I replied to you or someone on this subject before recently. It is not true that the GHE dominates temperature. Soil moisture has a major impact on how hot it gets in the day. Look at dry Phoenix versus moist Atlanta with the same solar radiation. Phoenix has a dry soil and all the heat goes into temperature increase, while Atlanta has a moist soil and as much as half the heat goes into evaporation, so Phoenix gets hotter.
      Now look at both of them at night. The downward longwave flux in Atlanta is much higher than Phoenix, and I would bet it cools less quickly at night as a result of the GHE.

  93. Trenberth’s suggestion about changing the null hypothesis is really creepy. I wonder if my boss will allow me to change my paycheck’s null hypothesis to “I should be getting $300K a year…now you prove I shouldn’t.”

    This talk about radiative balance is really irritating me. We don’t have to worry about radiative balance…it’s an artifact. It’s an effect…a result of the temperature of things with mass…it doesn’t cause much of anything. There is nothing humans can do…our incoming and outgoing radiation will balance.
    Think of this from an engineering point of view. Let’s find a control lever for the earth’s surface temperature. Shall we use radiation? Let’s see, not only is the radiation ‘control’ factor in the denominator of the SBL fraction, but it has an exponent…not a square, not a cube, but a quad exponent for the love of Gaia. And what can I do with radiation? I can deflect it, reflect it, focus it or diffuse it, but I can’t get useful work out of it. It is inconsequential.

    It’s completely true I don’t understand the nuances of how IR radiation works, but who cares? It can’t do anything measurable to our surface temperature. It can’t trap energy. It can’t store energy for more than a few microseconds. Forget about it. It’s useless.

    • ken,

      think cloud cover fraction. The more clouds, the more power reflected away from Earth and the less IR radiating away. It’s about a 2 to 1 ratio so the more clouds, the cooler it gets. Less clouds, less power reflected and more power absorbed, more IR emitted but in the 2/1 ratio so it gets warmer.
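
        cba's claimed ratio is easy to illustrate, though the numbers below are purely illustrative stand-ins chosen to embody the ~2:1 claim, not measurements:

        ```python
        sw_per_cloud = -100.0  # W/m^2 more reflected per unit cloud fraction (assumed)
        lw_per_cloud = 50.0    # W/m^2 less outgoing LW per unit cloud fraction (assumed)

        def net_flux_change(dfrac):
            """Net change in absorbed-minus-emitted flux for a cloud-fraction
            change dfrac, under the assumed 2:1 SW/LW ratio."""
            return (sw_per_cloud + lw_per_cloud) * dfrac

        print(net_flux_change(+0.01))  # -0.5 W/m^2: more cloud, cooling
        print(net_flux_change(-0.01))  # +0.5 W/m^2: less cloud, warming
        ```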

      • I’m happy to talk about anything that affects insolation and I believe 99% of what happens on our surface is directly related to incoming energy and anything that can affect it…clouds, aerosols, ozone, TSI, albedo, you name it. What I categorically reject is the notion that “back radiation” does anything measurable to our surface temperature. Radiation is an effect…it doesn’t cause anything measurable in our surface temperature.

  94. Pekka Pirilä | January 25, 2011 at 12:23 pm |
    Rod B,
    The second part of this statement is false:

    “So I’m just saying if the temperature increases, a larger percentage will be excited which indicates less natural relaxation.”

    With increasing temperature the rate of excitation will increase, and so will the share of molecules in the excited state. That leads to an increase in emission by the same factor as the number of molecules in the excited state has increased. The ratio of molecules in the vibrationally excited state (15 um line) to those in the vibrational ground state is proportional to exp(-E/(kT)), where E is the energy of excitation, k the Boltzmann constant, and T the temperature. This function tells how the emission rate increases with temperature.

    Just a minor quibble with this: you’ve left off a term. Excitation does increase exactly as you say, but so does collisional deactivation, and the net effect is to give the population ratio exactly as you say. However, that does not necessarily increase the emission, because the lifetime of the excited state will decrease whereas the spontaneous emission lifetime remains the same (in the case of CO2 the emission lifetime is orders of magnitude greater than the lifetime of the excited state).

    • Phil. Felton – This is an interesting point. From a qualitative perspective, my thought would be that as average molecular kinetic energy (and the proportion of molecules with high energy) increases, the ratio of excitations/deactivations will increase – both the numerator and denominator increase, but the former more than the latter. This leads to an increase in the proportion of vibrationally excited molecules, as stated. As long as this is true, the probability of a spontaneous photon emission, while small, is applied to a larger number of molecules, and so emission rates increase, even though a correction has been made for deactivation frequencies. Has this been addressed quantitatively, and if so, how well do the solutions match empirical data on the temperature/emission relationship for gases?

      • I’m sure someone has calculated it. The point I’m trying to make is that even though the population of the excited state has increased the probability of it emitting has gone down so the emission rate will be given by:

        P(T)*exp(-E/(kT)), where P(T) is the probability of emission (which will be a very small number).
        I’d expect P(T) to be related to (1 − Pcoll(T)); Pcoll would be a function of √T.

      • It seems to me that in a steady state at a given temperature and a very large population of excited molecules (albeit a small fraction of the total), the number of excited molecules (n) will be relatively constant, and so photon emission per unit time will be given by P(n), where P is an inherent property of the mean decay time. The constancy of n does not mean that warming will not increase excitations and deactivations, but rather that they have achieved a new balance. Each individual excited molecule will be deactivated more rapidly, but the balance is achieved by the increasing rate of activations. The value of n would thus be determined by the distribution of kinetic energy that allows excitations and deactivations to occur at equal rates.

        Am I missing something?

      • Phil.,
        Actually I tried to be precise in selecting words for what I wrote, but I decided to leave some details out.

        With increasing temperature, both the number of transitions to the excited state and the number of transitions by collisions from the excited state increase, but what is important is that the occupation of the excited state increases as a net effect. You seem to agree on this point.

        For the intensity of radiation the important factor is the number of molecules in the excited state at any moment. As I said, this number increases with temperature, and the emission rate is proportional to this number. The fact that any single excitation has a higher probability of losing the excitation through collision does not change the conclusion, because this effect is already taken into account in determining the occupation of this state. The effect must not be double-counted. Your statement that P(T) is dependent on temperature represents such double-counting.

        The fact that Pcoll(T) is increasing is counteracted (and more strongly) by the increase in the rate of excitation by collision.

      • Not really double counting, because the lifetime of the state gets shorter and the probability of emission will decrease; I would expect a sampling from a Poisson distribution.

      • When the lifetime gets shorter, each individual excitation has a smaller probability of leading to emission, but the shorter lifetime also reduces the number of molecules staying in the excited state; the smaller probability is a consequence of this fact, and there is no additional reduction in the probability.

        When we know that the number of molecules in the excited state increases in spite of the reduced lifetime, we know that the reduction in lifetime is not sufficient to reduce the number of emissions, just as it was not sufficient to reduce the occupation of the level.

        You try to calculate twice: first you reduce the occupation (which remains higher, but less so) and then you make an additional reduction due to lifetime. That is double counting.

      • A Lacis gives the formula that corresponds to my explanation in his message just below.

      • In the case of CO2 the lifetime of the state is entirely determined by collisions, as you have said (proportional to √T*exp(-E/(kT))); emission is so infrequent as to be immaterial for this purpose. Let’s say that the mean radiative lifetime is 100 msec and the mean lifetime of the state is 1 nsec; then the probability of emitting a photon is ~1×10^-8. Double the lifetime of the state and you’ll ~double the probability of emission.

      • The situation corresponds to the case where we have only two states. We have in addition the rotational states of both vibrational states, but they do not influence the present argument.

        Let’s consider N molecules of CO2. The occupation of the excited state is n and that of the ground state is N-n. The rate of transitions from the ground state to the excited state by collisions is c*√T*exp(-E/(kT))*(N-n), the rate of collision-induced transitions back to the ground state is c*√T*n, and the rate of transitions from the ground state to the excited state due to incoming radiation is b*(N-n). The rate of emission is n/τ. The total rate of transitions in each direction is equal in equilibrium. The radiative transitions are much less common (by a factor of 10000 or something like that). Thus the level of occupation can be calculated to be

        n = N * exp(-E/(kT)) / (1 + exp(-E/(kT)))

        writing the result in the form where the partition function appears explicitly in the denominator.

        The rate of emission is then obtained by dividing by the constant value of τ, which is the lifetime of the vibrational state in free space, where no collisions occur.

        The emission rate calculated in this way is monotonically increasing and in agreement with the formula given by A. Lacis below.
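
        Pekka's two-level result is easy to evaluate numerically for the CO2 15 um band, where E/k = c2/λ ≈ 960 K; a minimal sketch:

        ```python
        import math

        c2 = 1.4388e-2   # second radiation constant hc/k, m K
        lam = 15.0e-6    # wavelength of the CO2 bending mode, m

        def excited_fraction(T):
            """Two-level occupation n/N = x/(1+x) with x = exp(-E/(kT))."""
            x = math.exp(-c2 / (lam * T))
            return x / (1.0 + x)

        for T in (220.0, 255.0, 288.0, 310.0):
            print(T, excited_fraction(T))  # increases monotonically with T
        ```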

    • re the following discussion: First I’d like to point out that the discussion Pekka and I had, supplemented here, simply states that Planck-type radiation clearly and significantly increases with temperature, while vibrotational radiation maybe will… or not – only God knows for sure (which pretty much sums up quantum mechanics :D ). But this is now old and getting boring…

      Here I just want to comment on the following discussion. Being a new commenter here, I am quite impressed with the quality of comments and especially how they try to bore into the actual physics in all its detail. I have spent most of my learning time on molecular absorption, emission, and transfer (I am an amateur, not a climatologist and don’t work in a science field.) I think this area is critical to AGW yet has considerable uncertainties and unknowns, at least beyond the fundamental basics, that often get glossed over. I have learned a bunch reading these comments, and thank you.

      I am also impressed with Curry’s blog. I’m pretty much a denizen of RealClimate and, though I’m kinda their long-term unofficial resident skeptic (which means I get beat up now and then ;-) ), I do like their quality most of the time. Looking forward to participating here.

      Sorry for blabbing…

      • Rod B,
        My claim in that chain, and also elsewhere in this chain, is that the emission from the vibrational bands of CO2 increases as well. Both A Lacis and I have also presented the full formula telling how it increases. The contrary claims have been presented without concrete justification and are simply false. I cannot see any uncertainty in this claim.

  95. In HITRAN line-by-line calculations, the formulation that is used for the temperature dependence of line strength is listed as:

    S = So (Vo/V) (Ro/R) exp[ (hc/k) E″ (1/To – 1/T) ]

    where So is the line strength at the reference temperature To = 296 K, E″ is the energy of the lower state of the transition, Vo/V is the temperature dependence of the vibrational partition function (which is close to unity), and Ro/R is the temperature dependence of the rotational partition function, equal to (To/T)^j with j = 1.0 or 1.5 depending on the type of molecule.

    All of the above constants are tabulated as part of the HITRAN data base.
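
    As a rough illustration of how the quoted formula behaves (with the vibrational ratio taken as unity and the stimulated-emission correction omitted, as in the quoted form), the temperature scaling can be scripted directly; the 400 cm^-1 lower-state energy below is just an example value:

    ```python
    import math

    c2 = 1.4388  # hc/k in cm K, so E" can be given in cm^-1

    def strength_ratio(E_lower, T, T0=296.0, j=1.0):
        """S(T)/S(T0) per the quoted form: rotational partition ratio
        (T0/T)^j, vibrational ratio ~1, Boltzmann factor for E" (cm^-1)."""
        return (T0 / T) ** j * math.exp(c2 * E_lower * (1.0 / T0 - 1.0 / T))

    print(strength_ratio(400.0, 250.0))  # ~0.83: this line weakens when cold
    print(strength_ratio(400.0, 320.0))  # ~1.07: and strengthens when warm
    ```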

    • I could add that the vibrational partition function starts to reduce the strength of emission from a particular line when the higher-energy vibrational modes have so high an occupation that their share leads to a reduction in the occupation of the lower-energy mode. This is nowhere near the case for CO2 at atmospheric temperatures.

    • A. Lacis,
      Is that from the 1996 Appendix A?

      • The actual formula is from the old 1973 McClatchey et al AFCRL Report.

        You can get a copy via Google: search for “original 1973 HITRAN report”.

      • A. Lacis,

        I really didn’t recognize that from glancing at Appendix A. It looks somewhat simpler than what has been recommended since 1996.

        I was wondering if you had seen my post to you from “January 25, 2011 at 8:01 am” and decided not to try to answer my question, or if you had missed reading it, as this thread makes it easy to miss posts.

  96. This is all very well – there is obviously a CO2 effect in the infrared. And you see this by looking at ‘snapshots’ in the infrared between, say, 1979 and, say, 1998, as in the original Harries (2000?) methodology. There must, by a fundamental law of energy dynamics, be an impact on global temperature.

    But hell – the really interesting stuff in ISCCP-FD, ERBE and CERES data is in the bloody shortwave. Ray knows this – the IPCC know this. What the IPCC do is say that the results are not supported by surface cloud observations. Utter rubbish. They need to look at the Pacific for the observation of decadal changes.

    Ray’s analysis tells only half the story of radiative flux.

    • I seem to recall a bit more of Ray’s radiative transfer approach from a chapter of an online book he had a year or two ago. While I found a fair amount of the book somewhat interesting, I think his radiative approach used approximations that assumed either optically thick or optically very thin conditions (or something along those lines). I found it most unsatisfactory, as both conditions exist in the real atmosphere, separated by a small difference in wavelength.

  97. Actually less than half – because both ERBE and ISCCP-FD show cooling in the infrared.

    CERES shows an increase in net radiative flux between 2000 and 2009 – again in the shortwave and showing large fluctuations associated with ENSO (e.g. Dessler 2010).

    We need the full story with full disclosure and scientific integrity.

    • I agree with the need for full disclosure, and with the need for scientific integrity, for which full disclosure is important to avoid false impressions –

      Flux Data

      • Sure seems to correlate well with lack of ocean heating since 2003.

      • It’s not really possible to say, for two reasons. First, this is part of the interval that is distorted by measurement adjustments. Second, the baseline is not a zero flux anomaly, but rather the flux level of the 1985-1989 average. An upward deviation from that average might represent either an upward or downward deviation from zero.

      • “An upward deviation from that average might represent either an upward or downward deviation from zero.”

        You said the 0 is an average for a previous period. An upward movement from an average anomaly will still be positive (though possibly with a negative absolute value) and a downward movement will still be negative. Since there is no real 0 for these values, it really doesn’t matter where you set your 0, except when you are talking about absolutes. I understand it shows anomalies.

        By the way, if there are so many problems and distortions, are you saying the data is actually useless as presented?

        “The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.”

      • I probably wasn’t clear enough. If we want to know whether we are warming or cooling, we need to know whether the net downward flux is positive or negative, and that requires us to determine whether it is greater or less than zero net flux. Since we don’t know whether the 1985-1989 average was zero, we can’t tell whether being above it is the same as being above zero, and vice versa for being below it. Therefore, we can’t tell whether we are warming or cooling.
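
        The baseline point can be made with a toy series; the numbers below are invented solely to show the sign ambiguity:

        ```python
        import numpy as np

        # Invented absolute net downward flux, W/m^2 (its sign is the unknown)
        absolute = np.array([-0.6, -0.4, -0.5, -0.3, -0.2])
        baseline = absolute.mean()     # plays the role of the 1985-1989 mean
        anomaly = absolute - baseline
        print(anomaly)                 # [-0.2  0.  -0.1  0.1  0.2]
        # The last two anomalies are positive even though the absolute
        # imbalance is negative throughout: "above baseline" != "above zero".
        ```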

      • Fred,

        I do not know your assumptions, but I am not ASSUMING any particular anomaly is warming or cooling. I am looking at the fact that the OHC has been flat since about 2003, and so are these indicators of energy flow. I find that interesting, and possibly in conflict with the IPCC meme.

        The exception is the OLR, which is going less negative or more positive. Of course, you might want to pass on the fact that anomalies can be confusing when the underlying concept isn’t clear. Having the absolute data charted can help.

      • kuhnkat,
        I think that you are correct in the statement that this data is in good agreement with the idea that the warming seen so clearly in the period 1980-1998 has not continued after 2003. While this may be in conflict with some interpretations presented in IPCC reports, it is not strong evidence against the main conclusions (but is still likely to have some effect on them).

        A very reasonable interpretation is that the decadal oscillations are not only, or even predominantly, related to the transfer of heat between the oceans and the atmosphere, but to oscillations in the net energy flux related to variations in cloud cover. Such variations in cloud cover may of course be both oscillatory, returning soon to the previous phase, and more permanent, affecting the equilibrium climate sensitivity through cloud feedback. Perhaps there is a combination of oscillatory and more permanent changes.

      • Fred is correct in that the data is not yet well established and involves uncertainties, but it is the data we have. Similar comments apply to the Argo data. It is still uncertain whether the data of the last 10 years will turn out to be a combination of a minor fluctuation and somewhat erroneous data, but it is starting to appear that several indicators point in the same direction. Knowing the admitted weaknesses of the atmospheric models, the interpretation I gave in my previous message is not against any well-established understanding that I know about.

        Thus it should be accepted as one possibility and perhaps even the one in best preliminary agreement with observations. Unless I am mistaken on the overall evidence the climate scientists should take this data seriously and discuss its effects on their understanding.

        It is also a real possibility that I have got a wrong view about the overall evidence by following the net discussion with its unavoidable selection bias favoring data in conflict with main stream interpretations. I try to identify the proper balance, but cannot do it reliably.

  98. Was that the recent Dessler paper claiming that there is a net positive cloud feedback of something like 0.4 +/- 0.7, with a correlation coefficient R^2 = 0.02?

    On a recent ‘chase through the tulips’, I viewed some of the literature concerning CERES and the TOA imbalance. It seems that the CERES data is so screwed up – something like 6+ W/m^2 of averaged power discrepancy – that they resorted to using a GISS model to get their 0.85 W/m^2 heating imbalance, after two pages of discussion of CERES and possible fudging of calibrations etc. to try to bring the measurements into line with reality. These were papers from Trenberth, Hansen, and others. If someone wants the references, I’ll have to try to dig them up.

    Suffice to say, it looks like two papers, Kopp & Lean 2011 and Douglass & Knox 2010, combine to really put the hurt on the notion of excess heating going into the oceans.

    • You must be one of dem zombie sceptics – smile.

      Kopp and Lean readjusted the solar constant to 1361 W/m2 at the solar minimum in 2008 with a new calibration for the SORCE instruments. Great – but it can’t have any effect on the real world, because the solar constant didn’t actually change – only the measurement.

      Douglass and Knox used Lyman’s analysis of Argo data to suggest a negative radiative imbalance. You must be aware of the von Schuckmann paper that suggests something different? We may think what we like – but unless there is uncontested data we are still in the dark.

      But if you look closely at the Dessler paper, the feedbacks are all cloud changes due to ENSO. As ENSO has an effect on surface temperature, Dessler imagined he could use this as a surrogate for global warming. There are, however, structural differences between ENSO clouds and global-warming clouds (Zhu Ping et al 2007 – reference is here – http://www.earthandocean.robertellison.com.au/) and I agree that the conclusions are a bit silly. But it is a use of CERES to link ENSO to cloud – something that is fundamental to annual, decadal and longer energy dynamics. It is ENSO and clouds that are important, not what Dessler imagines. The SW changes are the most significant in climate change.

      Cheers
      Robert

      • Considering that there is a significant correlation of cloud cover with surface T, caused by clouds blocking incoming SW, the fact that Dessler’s paper failed to show any correlation at all while trying to prove the opposite – that temperature affects cloud formation – suggests he managed to massage out the signal while keeping the random noise.

        I did notice there were references in a table in Douglass & Knox to other analyses; most, I thought, were in agreement with Douglass.

        The thing about K&L is that it readjusts the numbers for absorbed solar. Hansen and Trenberth’s little 0.8 W/m^2 imbalance goes poof. With the Argo data from Douglass, and most of the other Argo researchers mentioned by Douglass, it seems that the supposed hidden ocean heating caused by this 0.8 W/m^2 imbalance also goes poof. This does nothing to falsify global warming, or that it has gotten hotter, but it does stick a pin in the catastrophic category, where there is supposedly additional warming still to come from the already-present GHGs. What you see is what you got.

      • cba – We’ve discussed this before, and our disagreement remains. The TSI correction significantly affects the CERES radiative imbalance, which nevertheless remains strongly positive (net warming), but not the Hansen model imbalance of 0.85 W/m^2, because the latter is based on forcing, including solar forcing (i.e., the change in solar irradiance since 1880), and not the actual TSI value. The best I can recommend is that interested readers visit the Hansen reference themselves (linked to in our earlier exchanges) to make their own judgments.

        Regarding Dessler, clouds, etc. – cloud feedback is critically dependent on cloud type and height, as much or more than total coverage. Low clouds cool (via scattering) more than they warm via greenhouse effects, but the balance is the opposite for high cirrus clouds. Dessler’s results are compatible with observed cloud data, and his conclusion that the net feedback is positive should be considered supported in the absence of identified errors in his analysis. My main problem with Dessler, or for that matter Lindzen/Choi or Spencer/Braswell is that short term feedbacks inferred from ENSO-related data originating in a warming ocean and imposed on an unwarmed atmosphere simply can’t be extrapolated to the long term effects of atmospheric warming from CO2 imposed on an unwarmed ocean. To his credit, Dessler acknowledged this possibility in his paper.

      • Fred,
        Hansen’s approach presumes a constant (or at least totally predictable) albedo for the baseline. Neither is the case. It looks like a bit of circular reasoning is going on, as his value is substantially based upon the ‘measured’ value for ocean heating. This is the ‘heating’ that is missing from all the Argo data analyses except for von Schuckmann’s, which goes down to 2000 meters. The other studies, like Douglass and Hansen ’04, go down to only 700 m. Hansen’s result uses data to 2008 and back several years before 2003, and is not Argo data. The source data for Hansen, Levitus, indicates warming during the Argo time frame of 2003 onward for the top 700 m, while several Argo studies for that depth range indicate cooling or no change in heat content. The equipment other than Argo has serious problems in the data.

        It is very curious that Hansen is using differentials to determine what is a difference in incoming versus outgoing power. In any case, he is using absorbed-power measurements that are quite questionable in light of other studies using better equipment. He is ignoring the fact that absorbed power is the difference between incoming solar power and reflected power (albedo). The differential between 1880 and now is a flat-out SWAG. TSI changes slightly, but albedo changes significantly, and the causes are not all known and are not necessarily all internal to the Earth system.

      • cba,
        One may try to determine the annual energy balance of the earth system by two independent methods.

        One method is determining the net flux of energy at TOA. Unfortunately CERES and other data sources do not have a sufficient temporal and geographic coverage for allowing a precise determination of this flux. The estimated result is essentially a warming of 6.4 W/m^2 with an uncertainty that is as large as the value. This is not good enough accuracy.

        The other method is estimating the energy stored in the earth system at various moments. Hansen’s result from this approach is 0.85 ± 0.15 W/m^2, which is much more accurate. Even a skeptic who does not trust Hansen’s results should agree that the value is well inside the range 0.85 ± 0.85 W/m^2.

        We can conclude that the second method is more accurate, and that the best estimate based on the first method “must be wrong”, as Trenberth commented. Changing the input to the first method will change its result, but will not in the near future make it accurate enough to tell what the real imbalance is. The second method remains, at least for a while, the more accurate one, and better empirical data about the warming of the oceans will make it more reliable and accurate.

      • Just because CERES is in gross error does not mean it is not possible to measure the three parameters necessary. 1. TSI – now quite well known: 1360.8 W/m^2 to excellent accuracy. 2. Albedo – fairly well known to average around 0.30, measurable by means other than CERES, and determined to vary. That leaves OLWIR, the outgoing LW IR (not including any near IR from SW) – measurable at night.

        The CERES data’s estimated error of 6.4 W/m^2 applies to the averaged numbers, not to the TSI at the TOA. 6.4/240 means the error is in excess of 2.5%.

        Hansen’s warmed-over result is really just based upon the recent ocean heat content and a bunch of SWAGs about what happened for a hundred years, without any other measurements to back them up. That it came out the same as the ocean-heat measurement claims is not unexpected.

        However, what was pointed out in my previous post was that, for the Douglass paper, the several listed Argo analyses covering the depth range used by Levitus show a most likely value that is negative just a decade after the end of Levitus’ measurements. That, and the difference in instrumentation, suggests the Levitus values are overstated. Of these four studies, only Douglass & Knox have an error range that could include the most likely value from Hansen’s paper.

        I see no reason to accept your suggestion that 0.85 ± 0.85 certainly covers the range. It might or it might not. None of those four studies fall within that range with their most likely values.

        So, you have presented a measurement with an apparent error in excess of 2.5% of the average absorbed and emitted radiation, and a simulation based upon what appears to be erroneous input data, along with nonexistent input data assumed to be constant though known to be variable (albedo). And you’ve claimed that the simulation is the better result. I find neither approach acceptable.
        A direct calculation requires knowledge of TSI, known to vary by a fraction of a W/m^2 at the TOA, and albedo, known to vary by a few percent. Both must be divided by 4 to get the Earth-averaged value. That’s a total of perhaps 5 W/m^2 prior to division by 4, or about 1.25 W/m^2 after division. What is left is outgoing LW IR.
        Are you telling me that there are no decent measurement averages for LW IR that can be combined with the average TSI minus the albedo reflection – that is, any LWIR that is not in error by over 5 W/m^2, as the CERES data is when combined into a result?
        If not, I’m sure we could take the average cloud cover fraction, the average cloud height and cloud-top temperature, calculate the average output from surface and clouds, run it through a 1-D absorption model, and come up with a more accurate answer than what you propose.
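
        For reference, the first step of the direct-calculation arithmetic described here is short; a sketch using the Kopp & Lean TSI quoted earlier:

        ```python
        TSI = 1360.8  # W/m^2, the Kopp & Lean (2011) value quoted above

        def absorbed(albedo):
            """Global-mean absorbed solar flux: (1 - albedo) * TSI / 4."""
            return (1.0 - albedo) * TSI / 4.0

        print(absorbed(0.30))                   # ~238 W/m^2
        print(absorbed(0.29) - absorbed(0.30))  # ~3.4 W/m^2 per 0.01 of albedo,
                                                # dwarfing TSI's own variability
        ```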

      • cba,
        My stated range of 0.85 ± 0.85 W/m^2 is based on the following argument:

        The lower limit states that the earth has been warming, not cooling. While the flux may oscillate to the extent that an individual year may go to the cooling side, few would doubt that the recent average over 20 years or so has been warming.

        The upper limit is similarly related to the speed of the observed warming. Making it double what Hansen et al. report would require extending the estimate of heat transfer to the deep ocean and other possible storage well beyond widely accepted bounds.

        Neither limit requires much trust in the details of other empirical observations or in various models. The limits are loose enough to be considered practically certain on the basis of rather straightforward arguments and simple calculations.

        As you can read from the papers of Trenberth et al, the analysis of the imbalance from energy fluxes done before the recent papers gave still significantly less accurate results than the present ±6.4 W/m^2. It appears indeed that the data does not yet allow for a better overall accuracy when uncertainties of all processing steps are included. As an example the CERES satellites make all their observations at four fixed times of the day. There is certainly some uncertainty in calculating the full 24 h value from data on four fixed times. Similarly averaging correctly over varying weather patterns making proper adjustments for the effects of the cloud optical depht involves error etc. There remains also significant uncertainty in the absolute calibration of the observations. The estimated uncertainties are larger for the reflected SW (albedo) than for the LW, but both have their own problems.

        As long as the uncertainties in the albedo and in the LW are as large as they are now, the improved TSI value does not make this approach accurate enough compared to what we can estimate from nothing more than the changes in the surface temperature. It is nice that the results of the two approaches are converging; this improves trust in our understanding of the composition of the energy balance, but it does not yet help much in estimating the size of the overall imbalance.
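
        As a rough illustration of this point, adding independent flux errors in quadrature shows how quickly they swamp a sub-1 W/m^2 signal. The individual sigmas below are invented for the example, not published error budgets:

        ```python
        import math

        # Illustrative (assumed) absolute uncertainties, in W/m^2:
        sigma_sw = 4.0    # reflected SW (albedo side)
        sigma_lw = 3.0    # outgoing LW
        sigma_tsi = 0.2   # TSI after division by 4 (much improved recently)

        sigma_net = math.sqrt(sigma_sw**2 + sigma_lw**2 + sigma_tsi**2)
        print(f"uncertainty on (in - out): +/- {sigma_net:.1f} W/m^2")
        # ~5 W/m^2 -- far larger than the ~0.85 W/m^2 imbalance being sought.
        ```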

      • A. Lacis,
        Thank you for the information. I will be digesting it for a while. One thing I didn’t see was your sensitivity to forcing – deg C per W/m^2 .

      • Pekka,
        oops put a thankyou in the wrong place.

        It is an assumption that even in a warming situation there will be some sort of warming of the ocean. It’s not impossible that warming could lead to a cooling ocean response. What warms the ocean below the surface is incoming SW. IR is blocked from entry, so its absorbed energy is likely simply to intensify the water vapor cycle. An increase in cloud cover due to this would actually reduce the penetrating SW, reducing ocean heating. So while there could be more power hitting the ocean surface, a shift from SW to LW could reduce rather than increase the deposit of energy into the ocean. NOTE: I’m not claiming this IS happening, rather that it is a plausible alternative to the expectation that global warming must heat the oceans.

        My own ‘expectation’ is simply that there is no ocean-heating bogeyman lurking, and that the effects of whatever warming has occurred so far have already occurred.

        Have you looked at the albedo work done by Pallé & Goode?
        Measuring albedo is not so much a measurement-accuracy problem as it is a significantly varying quantity that is not really being treated as such.

        I find it hard to believe that the CERES data error of 6.5 W/m^2 out of 240 W/m^2 is representative of the accuracy of all outgoing SW or LW radiation data.

        Change in surface temperature is strongly tied to cloud presence, which is in turn tied to albedo.

        Lean has a simple model in the Kopp & Lean 2011 paper.

      • Both cloud radiative forcing and albedo are first-order, non-trivial problems that need to be understood,
        a) for the antithermodynamic effects, and
        b) to ascertain the range of natural variation and, indeed, sensitivity.

        Lacis has introduced a range of 0.28-0.32 for model albedo; if the higher number is correct, then we are very close to the bifurcation (tipping) point, e.g. ZG10 – a troublesome property for the near future.

        Ramanathan is quite succinct on the importance of CRF and phase-space excursions:

        Cloud radiative forcing (CRF) is defined as the difference between the radiation budget (net incoming solar radiation minus the outgoing long wave) over a cloudy (mix of clear and clouds) sky and that over a clear sky. If this difference is negative clouds exert a cooling effect, while if it is positive, it denotes a heating effect. Five-year average of the cloud radiative forcing [1] is shown in Fig. 2. The global average forcing is about –15 to –20 W m-2 and thus clouds have a major cooling effect on the planet.

        The enormous cooling effect of extratropical storm track cloud systems. Extra-tropical storm track cloud systems provide about 60% of the total cooling effect of clouds [2]. The annual mean forcing from these cloud systems is in the range of –45 to –55 W m–2 and effectively these cloud systems are shielding both the northern and the southern polar regions from intense radiative heating. Their spatial extent towards the tropics moves with the jet stream, extending farthest towards the tropics (about 35 deg latitude) during winter and retreating polewards (polewards of 50 deg latitude) during summer. This phenomenon raises an important question related to past climate dynamics. During the ice age, due to the large polar cooling, the northern hemisphere jet stream extended more southwards. But have the extra tropical cloud systems also moved southward? The increase in the negative forcing would have exerted a major positive feedback on the ice age cooling. There is a curious puzzle about the existence of these cooling clouds. The basic function of the extra tropical dynamics is to export heat polewards.

        While the baroclinic systems are efficient in transporting heat, the enormous negative radiative forcing (Fig. 2) associated with these cloud systems seems to undo the poleward transport of heat by the dynamics. The radiative effect of these systems is working against the dynamical effect. Evidently, we need better understanding of the dynamic-thermodynamic coupling between these enormous cooling clouds and the equator-pole temperature gradient, and greenhouse forcing.

        refs
        1 Ramanathan V, Cess RD, Harrison EF, Minnis P, Barkstrom BR, Ahmad E and Hartmann D 1989. Cloud-radiative forcing and climate: results from the Earth radiation budget experiment. Science 243, 57–63.
        2 Weaver CP and Ramanathan V 1997. Relationships between large-scale vertical velocity, static stability, and cloud radiative forcing over Northern Hemisphere extratropical oceans. Journal of Climate 10, 2871–2887.
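
        A minimal sketch of the CRF definition quoted above. The all-sky and clear-sky fluxes are invented for illustration – roughly the right neighbourhood, but not ERBE values:

        ```python
        # CRF = net radiation (all-sky) - net radiation (clear-sky),
        # per the definition quoted above. Flux values are assumptions.
        def net_radiation(absorbed_solar, outgoing_lw):
            return absorbed_solar - outgoing_lw

        all_sky = net_radiation(absorbed_solar=240.0, outgoing_lw=235.0)
        clear_sky = net_radiation(absorbed_solar=287.0, outgoing_lw=264.0)

        crf = all_sky - clear_sky
        print(f"CRF = {crf:+.0f} W/m^2")  # negative: clouds cool the planet
        ```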

      • My biggest problem with the mess is that, given an absence of data, there should be an absence of claims and insinuations that there is in fact an imbalance when it is not known.

      • cba,
        The papers of Trenberth, Fasullo and Kiehl are actually stating that essentially all the evidence for the imbalance comes through the climate models, not directly as an empirical observation. The climate models take into account many types of empirical data, but their paper states:

        The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ± 0.15 W m−2 by Hansen et al. (2005) and is supported by estimated recent changes in ocean heat content (Willis et al. 2004; Hansen et al. 2005).

        This is not very far from what you say. It is a total misunderstanding to think that the TF&K paper is presenting evidence for the imbalance. They are not doing that, and they say so. They are using the model-based estimate as a constraint on their calculation of the various components of the energy flows.

      • Fortunate for them that they are not writing an advertisement – it would be considered false and misleading, and in violation of truth-in-advertising laws. But hey, they admit it in the fine print, so it’s OK. KT97 admit their cartoon numbers could be off by 20%. I calculated a couple to be more like 24%, but I suppose it’s possible they put some of those numbers in intentionally and actually did calculate that they were only 20% low or high.

  99. I’m sure I don’t know what you mean, Fred. The top of the atmosphere is where energy equilibrium exists in accordance with the 1st law of thermodynamics – and I linked to the global upwelling-at-TOA graphs at the ISCCP-FD site. Other data, such as tropical and surface flux or albedo, is simply not relevant and so was not linked to. There is nothing in those other graphs, or in the explanation below them, that contradicts what I said. Explain yourself fully or desist from introducing red herrings into the discussion.

    The ISCCP says this at the link you provided. “The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’
    Seriously – this is so bloody obvious that I am constantly astonished by a failure to acknowledge the implications and to follow those through.

    There is much more about the major reason for those changes here – http://www.earthandocean.robertellison.com.au/

    For engineers – a simple 1st-order differential climate change equation works. It is really only the changes in radiative flux that we are interested in, and these are known far more precisely than the absolute values.

      “I’m sure I don’t know what you mean, Fred”

      Robert – with due respect, I believe interested readers will know exactly what I mean if they first look at your selected graphs and then visit the actual site I linked to and read the entire explanation below the figures, including details about distortions in the record due to measurement adjustments.

      You should feel free to describe your conclusions as completely as you like, but if you invoke the principle of full disclosure, you should probably conform to it.

    • I agree with the general principles you enunciate, but you should acknowledge the distortions in the figures you cite, as I mentioned above. You should also correct your statement that “a flux of one Watt for one second is one Joule”.

      • You’ll have to enlighten me?

        ‘Watts are units of power, whereas joules are units of energy. Power is energy per unit time: P = E/t. So one watt of power is equal to one joule per second.

        P = E/t
        1 watt = 1 joule / 1 second’ – or, equivalently, 1 watt × 1 second = 1 joule.

        There were considerable analysis and instrumentation difficulties with the ISCCP and ERBE data – hence the 2007 ISCCP-FD reanalysis. The consistency with ERBE for TOA flux data is important, as is surface observation of decadal cloud changes in the north-eastern Pacific. It is also consistent with observations of cloud change during the 1997/1998 El Niño – and I have provided references for these things.

        If you read the ISCCP-FD site closely:

        ‘In the first row, the slow increase of global upwelling LW flux at TOA from the 1980’s to the 1990’s, which is found mostly in lower latitudes, is confirmed by the ERBE-CERES records. However, the sudden increase in upwelling LW flux in late 2001 may be exaggerated because it is associated with a spurious change of the atmospheric temperatures in the NOAA operational TOVS products that are used in the calculations.’

        ‘In the second row, the prominent peak in upwelling SW flux at TOA at around 1992 is caused by the volcanic aerosols from the Mt. Pinatubo eruption; the large values at the very beginning of the record may be due to the remnants of the El Chichon volcanic aerosol. However, the magnitude of the upwelling SW flux perturbation is exaggerated in these calculations (as shown in the comparison with ERBS) because the aerosol effect is included explicitly by using the SAGE II stratospheric aerosol record in the calculations and implicitly through the ISCCP cloud properties, which were not corrected to account for the extra aerosol. The sudden decrease of upwelling SW flux at TOA near the end of 1988 and its generally lower values until the end of 1994 (except for the Pinatubo event) may indicate a low bias of visible radiance calibration for NOAA-11; calibration of this satellite against the other polar orbiters was made difficult by the Pinatubo event. There is a brief increase of upwelling SW flux at TOA at the end of 1994 that appears to be caused by a high bias in the visible radiance calibrations of some satellites for a few months when no “afternoon” polar orbiter was available to normalize the calibrations. The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’

        I have not glossed over the important issues for TOA fluxes at all. The important point for the trend in TOA SW flux is the last sentence: some anomalous results do not seem to invalidate the overall trends. This should instead prompt attempts at confirmation by looking at other data sources and other ways of considering cloud effects.

        Cheers
        Robert

      • My point was the importance of accompanying the figures you reproduce with an explanation of the distortions – otherwise, they create a misleading impression. I believe you should have done that to start with rather than merely reproducing the figures for readers to interpret as though there were no problems with them.

        Also, as I mentioned above, reference to a 1985-1989 baseline doesn’t tell us whether a net flux anomaly is positive or negative compared with net zero flux, and so we don’t know whether we’re seeing net cooling or warming. Otherwise, I have no problems with this particular dataset among others in assessing temporal flux variations.

        One watt = one joule per second. It’s clear you’re aware of that, but your readers may not be. To state what can be interpreted as the reverse is confusing, although if you first define the watt and then, to avoid ambiguity, state, “one watt operating over the course of one second therefore delivers one joule of energy”, no-one will be misled. Without a definition of the watt to start with, at least some readers will get the concepts mixed up.

        The equation in your title is not new, but a standard description of the Earth’s energy budget.

  100. JimD:

    “Now look at both of them at night. The downward longwave flux in Atlanta is much higher than Phoenix, and I would bet it cools less quickly at night as a result of the GHE.”

    Please do look at the data. You can find it here: http://rredc.nrel.gov/solar/old_data/nsrdb/redbook/sum2/state.html

    Contrary to the commonly-accepted myth (another “consensus” perhaps?) of the “cold desert night,” Phoenix stays much warmer at night than Atlanta on a summer’s eve. I wonder if there might be another climate-related “consensus” that is a myth.

  101. I don’t want to be distracted from the main point.

    Dataset                  LW     SW
    ERBS Edition 3 Rev 1    +0.7   -2.1
    ISCCP FD                +0.5   -2.4
    (source: IPCC AR4, section 3.4.4.1; changes in W/m2)

    The big warming was in the SW in both the ERBS and ISCCP-FD data, and both show cooling in the LW. Am I alone in wondering about this? I have checked the sources, of course.

    ‘The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’ (ISCCP) This very much suggests a somewhat important role for cloud changes in climate change in the satellite era. It is a bigger story than infrared alone.

    The equation provides a way of thinking about energy fluxes. It is obvious because it is based on a fundamental physical principle – but I haven’t seen it anywhere, so I feel justified in claiming it. Writing it as a differential puts it in a dynamic form that emphasises the importance of the time element, which is potentially useful:

    Ein/s – Eout/s = d(GES)/dt

    We start at a specific global heat content, which is the sum of all energy imbalances to that point – the initial value for the global energy storage differential. What causes the planet to heat or cool after that is the changes in incoming and outgoing radiant energy flux.

    If we are in a period of global warming, such as 1900 to 2000, it is obvious that d(GES)/dt is positive and Ein/s > Eout/s. As above, the details of SW and LW flux are important. Beyond that, I think my analysis needs more work.

    I don’t want to make a big deal of it – but the zero point in the radiative flux is nominal and irrelevant, and this is a bit confusing. It is like a negative temperature anomaly – it doesn’t mean that the temperature is minus something or other. All that matters is the trend. If the flux starts at -1 W/m2 and ends at +1 W/m2, the change in radiative flux is 2 W/m2: there is 2 W/m2 more radiant energy entering the atmosphere at the end of the period than at the beginning, regardless of where the zero point is – and this of course is a warming trend.
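
    A toy numeric illustration of both points in this exchange – the trend of a flux anomaly series is independent of the baseline, but the accumulated heat in the d(GES)/dt integral is not. All numbers are invented:

    ```python
    import numpy as np

    # Invented anomaly series: -1 to +1 W/m^2 against a 1985-89 baseline.
    years = np.arange(1985, 2001)
    net = np.linspace(-1.0, 1.0, years.size)

    # The fitted trend is unchanged by any constant shift of the zero point:
    print(f"slope, stated baseline: {np.polyfit(years, net, 1)[0]:.4f} W/m^2/yr")
    print(f"slope, shifted zero:    {np.polyfit(years, net + 0.85, 1)[0]:.4f} W/m^2/yr")

    # The accumulated heat, however, depends on the absolute zero point:
    seconds_per_year = 3.156e7
    for offset in (0.0, 0.85):
        stored = (net + offset).sum() * seconds_per_year  # crude J/m^2 total
        print(f"offset {offset:4.2f} W/m^2: stored {stored:+.2e} J/m^2")
    ```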

    Finally, as an engineer and environmental scientist of some 30 years standing – I stand by my characterisation of a Joule.

    Cheers
    Robert

    • Robert

      …”Finally, as an engineer and environmental scientist of some 30 years standing – I stand by my characterisation of a Joule.”…
      You are quite correct, and it’s strange that anyone would question such a basic definition.

      • G’Day Mate

        I certainly don’t want to discourage people from thinking about the science – and being wrong is not a crime – or else we’d all be hanged.

        Keep smiling.

        Cheers
        Robert

    • Robert – We don’t disagree about general principles, which are well established, but I don’t agree with your statement “but the zero point in the radiative flux is nominal and irrelevant”. It’s useful, as you state, to determine trends, but it’s also desirable, at a given time point, to determine whether the climate is gaining or losing heat. To know that, one does need to know the level of LW and SW fluxes that cancel each other – i.e., the zero point. Merely knowing how much a flux is above or below 1985-1989 levels doesn’t provide that information.

      I think we shouldn’t dwell too much on the Joule/Watt issue, which is a small point of a semantic nature. When I read your statement that “a flux of one Watt for one second is one Joule”, my thought was that you were saying “a flux of one Watt per second is one Joule”. Only later, on reflection, did I realize you were saying something different, but the reason I realized that was because I already knew the definition of a Watt. Your explanation is presumably aimed at readers who don’t, and some of them are likely to carry away the impression I first had of what you meant.

      Finally, to clear the air, I find the ISCCP graphs and explanation interesting, but my reaction to your first comments was biased by your emphasis on the need for “full disclosure” and “integrity”, as though to imply that others discussing the same phenomena were guilty of poor integrity due to lack of full disclosure, while you accompanied your comments with two graphs that contained significant distortions you failed to disclose. I accept the fact that you intended no deception, and I think we should move on.

      • G’day Fred,

        There are 3 aspects to the energy budget of course – incoming and outgoing energy and global energy storage mostly as heat.

        I think I can see what you are getting at – if we consider incoming energy as a constant (which it isn’t) there is a point in the outgoing energy graph where the radiant imbalance is zero (Ein = Eout). We don’t know that point – what we have instead is an arbitrary zero.

        You then have to bring in the third term in the equation – heat stored in the atmosphere and oceans. If we know that the planet is warming – it is trivial that Ein > Eout. The variations and trends in SW and LW are useful then in looking at why.

        I object strongly to any notion that what I said about the trend in SW or LW at TOA was misleading in any way. I have shown this for your benefit by quoting the ISCCP verbatim.

        What I was getting at with Ray Pierrehumbert is that the infrared information is same old same old, but greenhouse gases are not the only determinant of the global energy budget. SW changes from clouds are showing up as the major determinant of climate in the satellite era – as confirmed by NASA, no less. ‘The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’

        http://isccp.giss.nasa.gov/projects/browse_fc.html

      • Robert – I agree that changes in clouds are important. This includes changes in total cloud cover, but also changes in cloud type and cloud height. Low clouds tend to cool, because their albedo effects outweigh their greenhouse role. Conversely, high cirrus clouds tend to warm – their light scattering is less, and their greenhouse effects are substantial at an altitude where they replace cloud-free air that has much lower infrared absorbing capacity.

        Clouds, however, don’t change for no reason. They are affected by internal climate variations such as ENSO, but ENSO changes are short term. Over multidecadal or centennial timescales, one needs some other factor to cause cloud cover, cloud type, or cloud height to change. In a long comment below, Andy Lacis describes how the Hansen et al 1984 climate model estimates a positive cloud feedback that would amplify the effects of CO2 forcing, both from reduced cloud cover (a mainly SW effect) and increased cloud height (mainly LW). He also mentions that newer models find a warming effect, although less than with the earlier model. An analysis of the LW contribution of cloud height changes has been described by Kuang and Hartmann 2007. These are not proofs, but they do imply that we should not see cloud changes and greenhouse gas effects as necessarily competing explanations for long-term climate changes after the shorter-term internal variations have mainly averaged out.

      • G’day Fred,

        ENSO is part of a basin-wide pattern of SST that varies temporally and spatially on interannual to decadal and millennial scales. We know that low-level stratiform clouds form over cool sea surfaces, especially in the eastern Pacific. We know that these changes have an influence on surface temperature through energy dynamics.

        So we are talking about changes in cloud that turn up as the largest factor in climate change in the satellite-era energy record – 2.4 W/m2 in ISCCP and 2.1 in ERBS – with cooling in the infrared. I was really just looking in the satellite record for confirmation of a physical oceanographic and hydrological conundrum.

        The cool Pacific decadal pattern is associated with larger and more frequent La Niña over 20 to 40 years – as we are seeing in the current super La Niña – as well as cold water rising strongly in the north-east Pacific, and yet more cloud. And ENSO is not a simple oscillation – it is non-Gaussian and non-stationary – and it is not an isolated phenomenon. That’s the mistake that everyone makes.

        ENSO also chaotically bifurcates every few decades. Independent of the underlying physical causes of Pacific climate variability, an understanding of the PDO and ENSO as behaving like a complex and dynamic system in chaos theory emerged from a 2007 study by Tsonis et al: ‘A new dynamical mechanism for major climate shifts’. They constructed a numerical network model from 4 observed ocean and climate indices – ENSO, PDO, the North Atlantic Oscillation (NAO) and the Pacific Northwest Anomaly (PNA) – thus capturing most of the major modes of climate variability in the period 1900–2000. This network synchronized around 1909, the mid 1940’s and 1976/77 – after which the climate state shifted. A later study (Swanson and Tsonis 2009) found a similar shift in 1998/2001. They found that where a ‘synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability.’

        I don’t believe in global warming at all. Imagine, just for a moment, a world that doesn’t warm for a decade or three – and what the political implications of that would be.

        There is just such a suggestion of climate uncertainty as a result of what the Royal Society, in their recent climate science summary, called internal climate variation. It is irrelevant to the separate risk from greenhouse gases but very relevant to climate change politics. This suggestion emerges from peer reviewed science and must objectively be accepted as a very real potential that is likely to create a very real quandary for carbon reduction politics.

        We need to go beyond the conventional ways of thinking about climate and adapt to uncertainty. Thinking on climate should be through the prism of chaos theory. The language should be in terms of climate change and climate risk, of internal climate variability and the implications of that, rather than a simple global warming in which the outcome is predictable with some certainty and within limits.

        Cheers
        Robert

        http://www.earthandocean.robertellison.com.au/

      • Robert – Because internal climate variation, predictability, chaos, climate shifts, and their implications have been discussed in detail on many occasions in this blog, without clear resolution, I have no expectation of resolving them here, and so I won’t try. The point I’ll make, with which you’re free to disagree, is that we know enough about the physics of CO2 and water (not only in the laboratory but also from IR spectral measurements in the atmosphere) to be confident that rising CO2 and the expected and observed rises in atmospheric water vapor (modified by changes in clouds and in lapse rates) must inevitably exert substantial warming effects. In other words, whatever else is happening, it would be inconsistent with the basic physical properties of CO2 to deny it an important role. To do that would require extraordinary new evidence that shows no sign of emerging.

        I’ll also repeat my contention from elsewhere that the internal phenomena such as ENSO, PDO, AMO, etc. can be shown by the data to have more or less averaged out over the course of a century, and that more generally, chaotic elements with short term consequences have lesser long term effects because of the averaging, so that the greenhouse-gas warming trend dominates over the long run.
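
        A toy demonstration of this averaging argument, with all numbers invented: an oscillation that dominates a 20-year window contributes only a small bias to a century-long trend fit.

        ```python
        import numpy as np

        years = np.arange(1900, 2001)
        trend = 0.007 * (years - 1900)        # assumed forced trend, 0.7 C/century
        osc = 0.15 * np.sin(2 * np.pi * (years - 1900) / 60.0)  # 60-yr mode
        temp = trend + osc

        fit_century = np.polyfit(years, temp, 1)[0] * 100
        fit_20yr = np.polyfit(years[:20], temp[:20], 1)[0] * 100
        print(f"1900-2000 fit: {fit_century:.2f} C/century (forced: 0.70)")
        print(f"1900-1919 fit: {fit_20yr:.2f} C/century (oscillation-dominated)")
        ```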

        All of this has consumed page after page here, thousands of pages in the literature, and has been addressed by data on CO2 and climate spanning more than 400 million years, and involving hundreds of independent approaches to the subject, all of which converge to the basic conclusions about anthropogenic warming. The remaining uncertainties are quantitative, but within a relatively limited range of climate sensitivity values in the large majority of assessments.

        Could this all be wrong? Sure, but it isn’t practical for us to reargue it here. For that reason, and not from an unwillingness to consider alternative viewpoints, I’ll refrain from further exchanges on a topic as general as the entire anthropogenic warming paradigm. I’ll keep my eyes and mind open, though, for new data that shed more light on the subject. In the interim, very specific topics are always worth discussing within the limited space and time we have here.

      • G’day Fred,

        I began with the idea that an effect of greenhouse gases can be seen in snapshots of IR emissions, as in the Harries 2000 paper – the IR spectral measurements you refer to.

        This is without a doubt a warming influence from greenhouse gases. But it cannot be discerned in the satellite data amidst the large background signal. It is now recognised, even by such as the Royal Society, that climate is ‘an example of a chaotic system’. So we have moved well beyond debating whether it is or not – and should move on to considering the ramifications.

        The claim that most recent warming was the result of anthropogenic greenhouse gas emissions is not supported by the ISCCP-FD, ERBS or CERES measurements. To discern the signal of AGW from the noise of internal climate variation we need to resort to the spectral IR method. It is obviously there, but there is a bigger game elsewhere – and its name is complexity.

        The big driver of decadal climate is the Pacific Decadal Pattern. The big changes in energy dynamics can be seen in ISCCP-FD, ERBS and CERES. As we are in a cool mode of the Pacific decadal pattern, the cooling influences are intensifying – and the current super La Niña is driving global cooling. These modes last 20 to 40 years in the physical record. There is some potential for global cooling over 10 to 30 years more – although this could change in a day.

        Can 400 million years of proxy records be wrong? Hell – I don’t know – why don’t we ask the NAS?

        ‘Researchers first became intrigued by abrupt climate change when they discovered striking evidence of large, abrupt, and widespread changes preserved in paleoclimatic archives. Interpretation of such proxy records of climate – for example, using tree rings to judge occurrence of droughts or gas bubbles in ice cores to study the atmosphere at the time the bubbles were trapped – is a well-established science that has grown much in recent years. This chapter summarizes techniques for studying paleoclimate and highlights research results. The chapter concludes with examples of modern climate change and techniques for observing it. Modern climate records include abrupt changes that are smaller and briefer than in paleoclimate records but show that abrupt climate change is not restricted to the distant past.’

        US National Academy of Science, Committee on Abrupt Climate Change (2002), Abrupt Climate Change: Inevitable Surprises NAP – p19

        Neither side of the climate issue has the complete picture. One side has neglected the range and intensity of natural climate variability for far too long. The other sees natural variability as an antidote to the anthropogenic changes being made in the atmosphere and the physical effects of those changes. Neither of the views is based on understanding climate as an example of a complex and dynamic system in theoretical physics. In these systems, small initial changes of whatever type accumulate until they trigger an unpredictable and abrupt change that is wildly out of proportion to the initial impetus. From this it follows that there is an irreducible climate risk from anthropogenic greenhouse gas emissions that is unpredictable but mathematically certain. No shadow of a doubt that a risk exists is possible – regardless of the extent and nature of natural variability.

        Cheers
        Robert

      • Robert, I question your mathematical claims here. If by risk you mean possibility, then risk is always there, because many things are possible – an abrupt ice age, for example.

        But you say “small initial changes of whatever type accumulate until they trigger an unpredictable and abrupt change that is wildly out of proportion to the initial impetus.” I doubt this model very much.

        If you are referring to the butterfly effect, there is no accumulation involved. Nor is any given small change likely, or even possible, to cause any large effect. That is a common misconception. If you have a different mathematical model I wonder what it is and what evidence you have that it applies to abrupt climate events?

      • Your conclusion reduces to this:
        Big “flips” happen in complex systems.
        What causes them is unknowable.
        Anything we do might be enough to make one happen.

        Therefore we should hunker down and do nothing, ever.

        Blech.
        Alternative suggestion: become as wealthy and powerful as possible so we can cope with whatever happens as well as possible.

        P.S. Since the stimuli for “flips” are unknowable, there is an equal chance that breaking the CO2 famine will stabilize the climate (which is much cooler than the previous interglacial, BTW).

  102. So – it doesn’t like tables, eh.

    The trend in SW in ERBS and ISCCP respectively is -2.1 and -2.4 W/m2. The trend in LW is 0.7 and 0.5 W/m2. Hmmmm.

    • If clouds are sensitive to spectral irradiance, as they seem to be, then treating albedo as a constant needs either validation (e.g. Ramanathan 2009) or a rearrangement of the energy distribution (e.g. Penner).

      • Albedo is not a constant but is quite variable in W/m^2 terms.
        Pallé & Goode have worked on this for some time with their Earthshine project.

      • Indeed albedo is not a constant, and treating it as one requires a) theory, and
        b) validation –

        e.g. Ramanathan:

        It is remarkable that general circulation climate models (GCMs) are able to explain the observed temperature variations during the last century solely through variations in greenhouse gases, volcanoes and solar constant. This implies that the cloud contribution to the planetary albedo due to feedbacks with natural and forced climate changes has not changed during the last 100 years by more than ±0.3%; i.e, the cloud forcing has remained constant within ±1 Wm–2. If indeed, the global cloud properties and their influence on the albedo are this stable (as asserted by GCMs), scientists need to validate this prediction and develop a theory to account for the stability

      • The problem, of course, is that it isn’t stable or constant, nor is it deterministically predictable. Very likely it isn’t even determined entirely by processes within the Earth system.

        Actually, we might find out any time now (in astronomical time – which means sometime between the time you read this and the next two hundred thousand years or so). It appears Betelgeuse, a red giant, is collapsing. At the end of this collapse lies the likelihood of a supernova blast, and perhaps even a gamma-ray burst. Since Betelgeuse is very large and at a moderate distance, we’ve got a bit of a view of the surface, which suggests we’re off the polar axis by perhaps 18 degrees. GRBs, or at least some of them, appear to have half-angles of as much as 20 degrees, although some may be as narrow as 3 degrees. If too many of these “ifs” turn out positive, we – or our descendants – could find out about atmospheric chemistry and properties rather quickly.

    • It is notoriously true that doing experiments in “Climate Science” isn’t possible. But maybe it is.
      A province in Spain has been covered with greenhouses because of the incentives offered, resulting in shiny plastic roofing over huge areas. It has now been determined that the reflected SW is resulting in a 0.3°C/decade cooling!
      http://geographyfieldwork.com/AlmeriaClimateChange.htm

  103. Despite all the discussions in the various radiative transfer and GHE threads, there is still one area of AGW theory that I can’t get my head around. Why does a temporary radiative imbalance have to result in an increase in surface temperature? Why can’t it, for example, result in an increase in convection, allowing energy to be transported up to the tropopause where it can more easily be radiated to space? If the answer is that we have measured a consistent radiative imbalance at the TOA, then my follow-on question would be, ‘measured with what?’ If it is with satellite measurements, then are these the same inaccurate tools that have been called into question in the ‘missing heat’ saga, i.e. CERES? In which case, we can’t be sure that there is a persistent radiative imbalance. Fred, please help…in simple terms if possible, and sorry in advance if this is still basic stuff. Many thanks.

    • Rob – I don’t know if this helps, but –

      Radiative heating due to imbalances tends to increase the lapse rate above the adiabatic level, creating instabilities that lead to convection that restores a stable state. In other words, convection is an integral part of the basic physics and their evaluation in models. Without convection, estimated temperature increases would be much higher than the current estimates. When one also considers the convective transport of moist air with release of latent heat at higher, colder altitudes, a further adjustment is necessary – and is accommodated by the calculations. I would add that without surface and lower atmospheric warming, there would be little reason for convection to change – it is a response to warming.

      Ultimately, theoretical and modeled estimates must be validated against real-world observations. Many aspects have been well confirmed – e.g., the radiative transfer codes used in the models that tell us how temperature responds to CO2 and water. Also, in the case of global climate models, predicted temperature trends based on hindcasts (and, for Hansen’s model, forecasts) have performed fairly well, albeit imperfectly (this is discussed upthread and elsewhere and I don’t want to rehash those discussions). Other estimates have done less well – e.g., those related to ENSO, or those subject to uncertainties about aerosol forcing. To date, energy budget calculations based on comparing incoming and outgoing radiative fluxes at the top of the atmosphere (TOA) have shown the positive (warming) imbalances predicted by theory. As you know, however, these measurements are subject to technical and sampling errors, and so the exact values are suspect.

      As to your basic question, a temporary TOA imbalance will inevitably affect temperature, simply because if more heat is added to the system, the latter must become warmer than it would be otherwise. Of course, as you imply, temporary imbalances may be masked by other effects, particularly if they are small, and also because ocean storage of excess heat does not express itself fully in a short interval. For example, a long term cooling trend might not be reversed by a temporary warming imbalance, but whether or not we can measure the effect, that imbalance will ameliorate the cooling so as to make it slower than it would otherwise have been.

      • Fred
        Many thanks for taking the time to reply. Without wishing to take up too much of your time, I have a couple of observations if I may.

        I understand that a temporary radiative imbalance will inevitably result in an initial increase in temperature. I suppose the point I was trying to make was that the energy balance can then be restored in ways apart from radiation, and I used convection to make my point. Thanks for highlighting that convection is already taken into account and that calculated temperatures would otherwise be much higher if this were not the case.

        That said, is it fair to say that it is only by modelling that the convective effect can be considered? If so, there must be many assumptions made about the lateral extent and magnitude of convection and the overall part that it plays in the climate model. It is this that I find difficult to understand. How is it possible to model something so unpredictable? In real-world terms, there must be many influencing processes and factors present when a column of moist warm air rises into the atmosphere, and I find it difficult to believe that all those processes can be quantified accurately. Additionally, across the globe at any moment, convection is occurring at different places, times and rates. This makes it a very tricky and seemingly unpredictable item to quantify. Surely there must be more to it than an assumed lapse rate? If so, how’s it done?

        Sorry if this is a bit meandering and many thanks for your consideration. Rob

      • Lapse rates can be quantified by satellite measurements of temperature at different altitudes. Model results utilizing empirical data and incorporating lapse rate variations with latitude, season, etc. yield climate responses to CO2 forcing different from those derived from models using simple assumptions about lapse rates, but the differences are not large.

      • Fred,

        “When one also considers the convective transport of moist air with release of latent heat at higher, colder altitudes, a further adjustment is necessary – and is accommodated by the calculations. I would add that without surface and lower atmospheric warming, there would be little reason for convection to change – it is a response to warming.”

        Hmm, there appears to be a measurable drop in winds over a number of years that was being blamed on AGW. Your statement sounds plausible and would seem to support a lack of warming.

        What happens if there is actual cooling?

      • Although I haven’t seen the wind data, convection refers to vertical movement of air masses triggered by changes in buoyancy (warmed air becomes more buoyant) and winds are a different phenomenon.

        Cooling would tend to increase lapse rates.

        The calculation of feedbacks on CO2-mediated warming includes a positive feedback from increased water vapor and a negative feedback from reduction in lapse rates due to latent heat release at high altitudes – a process mediated by convection. Water vapor and temperature as a function of altitude can be measured by satellites, and in general show results consistent with predictions, albeit with uncertainties related to measurement error and sampling inadequacies. They tend to show that the variability of positive water vapor feedback and of negative lapse rate feedback are both greater than the variability of the difference between the two, which yields a net positive feedback. Accordingly, water vapor/lapse rate feedback estimates tend to be combined in evaluating climate responses to increasing CO2.
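
        A toy statistical sketch of that last point, with invented coefficients: if the water-vapor and lapse-rate feedbacks share an anti-correlated driver, their sum varies far less than either does alone.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        shared = rng.normal(0.0, 1.0, n)  # common moisture/convection variability

        # Assumed feedback strengths in W/m^2/K; signs follow the text above.
        wv = 1.8 + 0.5 * shared + rng.normal(0.0, 0.1, n)   # positive feedback
        lr = -0.8 - 0.4 * shared + rng.normal(0.0, 0.1, n)  # negative feedback

        for name, x in (("WV", wv), ("LR", lr), ("WV+LR", wv + lr)):
            print(f"{name:6s} mean {x.mean():+.2f}  std {x.std():.2f}")
        # The combined term is both net positive and far less variable.
        ```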

      • Fred, isn’t there a rather large amount of convection around the equator?

        What do you attribute winds to, if not convective activity? Think about high- and low-pressure areas and how they develop.

        Obviously it isn’t a one-factor game, but convection plays its part.

      • Winds are determined by many factors, including Coriolis forces. Buoyancy is important, but winds depend on differences between one location and the next, and so increased convection by itself doesn’t necessarily mediate an increase or a reduction in wind. I haven’t seen data on overall wind changes that have occurred in response to climate change. (Hurricanes are only a small subset of that type of data, and even there, the multidecadal long term trends are a matter of controversy because of uncertainties in the earlier data). Do you have a reference for wind reductions that have been demonstrated over the course of warming during recent decades?

      • Here are a couple of sources that I believe are indicative:

        http://moneymorning.com/2009/06/19/wind-power-programs/
        Wind Speed Trends Over the Contiguous United States
        http://www.agu.org/pubs/crossref/2009/2008JD011416.shtml

        Pan Evaporation down since 1990’s
        http://www.makeitgreen.webs.com/Global_Dimming.html

        Shows several data sources
        http://themigrantmind.blogspot.com/2009/11/does-pan-evaporation-indicate-cooling.html

      • Thanks. The links are interesting except for the last one, which misunderstands the difference between changes in total solar irradiance (TSI) reaching the top of the atmosphere and the insolation reaching the Earth’s surface – the latter can be reduced by aerosols without any change in TSI, and there is substantial evidence for this during the several decades after the 1940s.

        It is plausible that a combination of reduced insolation and reduced wind speed can account for the noted reductions in pan evaporation. The wind speed reduction has been attributed to greater warming of high latitudes than the tropics, thereby reducing the pressure differences responsible for poleward air movement.

        Finally, global temperature trends are dominated by the oceans, where pan evaporation is not a measurement tool as far as I know. I’m not aware of good data on wind speed trends over the oceans – most of the discussion has focused on winds of tropical storm strength, but I’m not sure whether weaker winds have been evaluated.

      • Here is another:

        This is a more global analysis
        http://ams.confex.com/ams/pdfpapers/93753.pdf

        This was interesting as it seems to show a larger reduction with altitude.
        http://www.earthgauge.net/2010/climate-fact-wind-speed-changes

      • Talking of winds, Fred: the lateral movement of air has kinetic energy that must come from somewhere, yet you never see winds discussed in the context of climate change. Do winds feature significantly? Are the energy levels significant in the context of AGW? Do they have to be modelled? If so, how?

        PS Last question for tonight! :)

      • I think you’re right that winds tend to be neglected in climate blogs in their own right, although they come into play implicitly in discussions of climate oscillations and atmospheric currents (ENSO, PDO, AMO, NAO, the Gulf Stream, the Meridional Overturning Circulation, etc.). In the literature and in the models, winds are critically important in determining energy flow from one region to the next. The poleward transport of energy as a result of tropical heating and its effect on atmospheric and ocean circulation patterns is one of the most important factors responsible for defining our climate and its change in response to climate forcing from ghgs, aerosols, solar changes, etc.

    • robb,

      Normally, some of the power must result in a T change in order for other parameters to change, but not all of the power change has to show up as a T difference. The simplest example is convection. If there is added heating due to blocking of IR power, one expects that convection will increase; and since convection is a function of temperature differences, for convection to increase, the difference in temperature must increase somewhat. Hence T doesn’t have to increase enough for the imbalance to be made up by radiation alone – it can be made up by a combination of radiation and convection. Other factors would likewise tend to be T-related.
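
      A linearized toy balance of this point, with every coefficient assumed for illustration: if convection also scales with the temperature rise, the dT needed to offset a given forcing is smaller than radiation alone would require.

      ```python
      # Toy linearized surface balance: a forcing F is offset by a radiative
      # response plus a convective response, both proportional to dT.
      F = 3.7         # W/m^2, forcing to be balanced (assumed)
      lam_rad = 3.8   # W/m^2/K, ~4*sigma*T^3 at a 255 K emission temperature
      lam_conv = 2.0  # W/m^2/K, convective/latent response (assumed)

      print(f"radiation only:         dT = {F / lam_rad:.2f} K")
      print(f"radiation + convection: dT = {F / (lam_rad + lam_conv):.2f} K")
      ```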

  104. cba,

    In reply to your earlier question, deducing cloud feedback sensitivity from climate model calculations is not all that easy. We did a fairly detailed analysis, described in the Hansen et al. (1984) study of climate sensitivity and feedback mechanisms, which showed that cloud changes in a doubled CO2 experiment produced substantial positive feedback.

    The approach was to tabulate the detailed cloud, water vapor, surface albedo, and temperature distributions of the control and doubled CO2 runs, then evaluate the radiative flux changes associated with the changes in cloud, water vapor, surface albedo, and temperature distributions that took place. The results showed that there was a 1.7% decrease in cloud cover (mostly low clouds) with an increase in cloud top height (due to increase in cirrus clouds) which together accounted for about 0.8 C of the total 4 C temperature change that took place between the control run and the equilibrium doubled CO2 run (doubled CO2 accounted for about 1.2 C of radiative forcing, water vapor for about 1.6 C of feedback response, and snow/ice for about 0.4 C of feedback response).

    The cloud feedback appears to be split roughly equally between SW albedo decrease due to cloud cover change, and greenhouse increase due to cloud height increase. Thus the greenhouse contribution of clouds (for climate perturbation relative to current climate) relative to that of water vapor (0.4 C to 1.6 C) appears to be significantly saturated when compared to the greenhouse effect attribution for the entire atmosphere (50% water vapor, 25% cloud, 25% non-condensing GHGs).

    Moreover, recent modeling results (with improved cloud and boundary layer treatment) get a 3 C sensitivity for doubled CO2. Qualitative analysis shows diminished cloud feedback, but we have not yet performed a detailed radiative modeling analysis to quantify a more precise attribution.

    Clearly, the KT97 energy balance analysis does not lend itself to infer cloud feedback attribution. Of the numbers they show in their figure, the most reliable inferences that can be made are that: (1) the planetary albedo = 107/342 = 0.313; (2) fraction of incident solar radiation absorbed by ground = 168/342 = 0.49; (3) fraction absorbed by atmosphere = 67/342 = 0.196; (4) global mean surface albedo = 30/(168 + 30) = 0.15.

    These global numbers are the typically tabulated GCM diagnostics that are most readily available, and they can be taken as an ‘accurate and reliable’ tabulation of all the complex radiative interactions that take place in GCM operation. But, unless special diagnostics are used to tabulate information regionally, or for clear and cloudy conditions, inferring the contributing components from the scrambled egg is difficult. (The global mean surface albedo deduced to be 0.15 is made up of ocean albedo of order 0.07, vegetation-land albedo ranging from 0.10 to 0.35, and snow/ice albedo ranging from 0.3 to 0.9.)

    As a crude analysis of the KT97 figure, I would take their 67 W/m2 atmospheric absorption (mostly due to water vapor, ozone, oxygen, but also including cloud and aerosol absorption) as occurring in the top part of the atmosphere, so that there is 342 – 67 = 275 W/m2 incident flux over both the clear (0.38) and cloudy (0.62) fractions. This would imply 0.85 x 275 x 0.38 = 88.8 W/m2 clear-sky fractional absorption by the ground, and 0.15 x 275 x 0.38 = 15.7 W/m2 clear-sky fractional reflection.

    In the cloudy fraction, the incident flux is 275 W/m2; the reflected portion must be 77/0.62 = 124.2 W/m2 (to account for the 77 W/m2 reflected by clouds and aerosols), so the 150.8 W/m2 is the flux incident on the ground. The flux absorbed by the ground in the cloudy fraction is then 0.85 x 150.8 x 0.62 = 79.5 W/m2, to combine with the 88.8 W/m2 from the clear fraction to produce the stated 168 W/m2 absorbed by the ground, globally.

    The flux reflected by the ground in the cloudy fraction is 0.15 x 150.8 x 0.62 = 14 W/m2, to combine with the 15.7 W/m2 from the clear fraction to produce the stated 30 W/m2 reflected by the ground, globally.

    The above numbers are representative, not definitive. They are qualitative, and not quantitative. And certainly not sufficient to infer cloud feedback information. Absorption by atmospheric gases, clouds and aerosols, is intermingled by scattering from clouds, aerosols, and atmospheric Rayleigh scattering, along with reflection from different types of surface types. We have not yet performed a detailed attribution of what contributes to the nominal 0.30 global albedo of Earth comparable to the attribution that we did for the thermal greenhouse effect. Solar radiation is a bit more complicated and messier to deal with than thermal radiation.

    KT97 get a planetary albedo of 0.313. The likely value is probably somewhere between 0.28 and 0.32. Also, KT97 use a solar constant of 1368 W/m2, although the newer calibration results suggest it might be as low as 1360 W/m2. Since climate change results are inferred by differencing experiment and control runs, the absolute calibration, while important, is not a critical factor in deducing climate sensitivity.
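
    For readers who want to check the arithmetic, here is a short sketch reproducing the crude clear/cloudy decomposition above. All numbers come from the comment itself and are, as stated, representative rather than definitive:

    ```python
    # Reproduce the clear/cloudy split of the KT97 budget described above.
    S = 342.0           # W/m^2, global-mean incident solar (KT97)
    atm_abs = 67.0      # absorbed in the top part of the atmosphere (KT97)
    cloud_refl = 77.0   # reflected by clouds and aerosols (KT97)
    f_clear, f_cloudy = 0.38, 0.62
    surf_alb = 0.15     # global mean surface albedo

    incident = S - atm_abs                               # 275 W/m^2 below absorber

    # Clear fraction: the full 275 W/m^2 reaches the ground.
    abs_clear = (1 - surf_alb) * incident * f_clear      # ~88.8
    refl_clear = surf_alb * incident * f_clear           # ~15.7

    # Cloudy fraction: remove cloud reflection per unit cloudy area first.
    incident_cloudy = incident - cloud_refl / f_cloudy   # ~150.8
    abs_cloudy = (1 - surf_alb) * incident_cloudy * f_cloudy   # ~79.5
    refl_cloudy = surf_alb * incident_cloudy * f_cloudy        # ~14.0

    print(f"ground absorption: {abs_clear + abs_cloudy:.0f} W/m^2 (KT97: 168)")
    print(f"ground reflection: {refl_clear + refl_cloudy:.0f} W/m^2 (KT97: 30)")
    ```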

    • A. Lacis,
      Thank you for the information. I will be digesting it for a while. One thing I didn’t see was your sensitivity to forcing – deg C per W/m^2 .

      BTW, I do have some problems with a number of things, including the cloud discussions in Hansen 2004.

  105. I have a real problem with static conceptions of climate. It runs consistently through this thread with quoted values of solar irradiance, albedo and radiant imbalance. All of these are dynamic of course and interrelated.

    Solar irradiance changes in the solar cycle and perhaps a little more over the longer term. Lockwood et al (2010) suggest a longer term drift in solar UV with implications for ozone warming and ‘top down’ forcing effects on the troposphere. Judith Lean (2008) commented that ‘ongoing studies are beginning to decipher the empirical Sun-climate connections as a combination of responses to direct solar heating of the surface and lower atmosphere, and indirect heating via solar UV irradiance impacts on the ozone layer and middle atmospheric, with subsequent communication to the surface and climate. The associated physical pathways appear to involve the modulation of existing dynamical and circulation atmosphere-ocean couplings, including the ENSO and the Quasi-Biennial Oscillation. Comparisons of the empirical results with model simulations suggest that models are deficient in accounting for these pathways.’

    Zhu et al (2007) found that cloud formation for ENSO and for global warming have different characteristics and are the result of different physical mechanisms. The change in low cloud cover in the 1997-1998 El Niño came mainly as a decrease in optically thick stratocumulus and stratus cloud. The decrease is negatively correlated to local SST anomalies, especially in the eastern tropical Pacific, and is associated with a change in convective activity. ‘During the 1997–1998 El Niño, observations indicate that the SST increase in the eastern tropical Pacific enhances the atmospheric convection, which shifts the upward motion to further south and breaks down low stratiform clouds, leading to a decrease in low cloud amount in this region. Taking into account the obscuring effects of high cloud, it was found that thick low clouds decreased by more than 20% in the eastern tropical Pacific. ‘In contrast, most increase in low cloud amount due to doubled CO2 simulated by the NCAR and GFDL models occurs in the subtropical subsidence regimes associated with a strong atmospheric stability.’

    ENSO varies on decadal timescales over the same period as the PDO – known as the Pacific Decadal Variation. The cool mode is associated with larger and more frequent La Niña and more low-level stratiform cloud forming over cool sea surfaces, in the eastern Pacific in particular. Conversely, a warm mode is associated with more frequent and intense El Niño and less cloud (e.g. Verdon and Franks 2006). Amy Clement and colleagues reconstructed cloud cover observations in the north-eastern Pacific and showed positive cloud feedback to sea surface temperature associated with PDO modes. ‘The overall slow decrease of upwelling SW flux (in the ISCCP-FD record) from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’ (http://isccp.giss.nasa.gov/projects/browse_fc.html) It should go without saying that the relative changes in radiative fluxes are known with greater precision than the absolute values.

    ISCCP-FD, ERBS and CERES show large changes in radiative flux associated with the Pacific Ocean especially – although there would appear to be a global system that is dynamically linked. Tsonis et al (2007) constructed a numerical network model from 4 observed ocean and climate indices – the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the Pacific Northwest Anomaly – thus capturing most of the major modes of climate variability in the period 1900–2000. This network synchronized around 1909, the mid 1940’s and 1976/77 – after which the climate state shifted. A later study (Swanson and Tsonis 2009) found a similar shift in 1998/2001. They found that where a ‘synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability.’ Thus climate behaves, not surprisingly, as a complex and dynamic system in theoretical physics.

    There is much bigger story here than just the infrared trapping. The planet warms and cools dynamically from year to year and from decade to decade. By definition this means that the radiant imbalance changes dynamically from negative to positive and vice versa as the planet warms and cools.

    We are in a cool Pacific mode and these tend to last for 20 to 40 years. I keep saying that neglecting the dynamic changes will come back to haunt the politics of global warming.

    references can be found here
    http://www.earthandocean.robertellison.com.au/

    • Interesting presentation.

      Have you seen Erl Happ’s work??

      http://climatechange1.wordpress.com/

      • I can’t understand Erl’s work. In general I think much of it might be unprovable – though speculation is fun.

        Cheers
        Robert

      • Darn. I don’t think he is the last word on climate, but I thought he had some valid points in his work. Oh well.

      • One of his more interesting speculations/proposals is that the planet’s flora eat the atmosphere’s CO2 down to their starvation levels, and hunker down and await the next big injection (basalt flood, etc.) to add a few thousand ppm so they can do it over again.

        :)

        Makes sense, actually.

    • “I keep saying that neglecting the dynamic changes will come back to haunt the politics of global warming.”

      Your admonition is unarguable, Robert. However, I don’t think climate science is ignoring the dynamics of internal climate variability, even though they are incompletely understood. They do appear in climate models allowed to run in the absence of forced variability from greenhouse gases, solar contributions, or aerosols.

      The relative quantitative role of the various climate components – internal and forced – can’t be neglected either. On less than centennial timescales, ENSO, AMO, and PDO clearly affect global temperature trends, but they tend to average out over the century, leaving the observed warming trend.

      Over shorter intervals, the index best correlated with multidecadal modulation of the long-term trend is the PDO. The AMO is poorly correlated and at times anticorrelated. However, when the warm phases of the PDO transition to cool, temperature increases decline in slope or become flat, but the temperature remains higher than before, reflecting the progression of the CO2-forced trend. No multidecadal temperature declines have been observed.
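
      The flattening-without-cooling pattern described above is easy to illustrate with a toy series – all numbers assumed for illustration, not fitted to any dataset. A steady forced trend plus a multidecadal oscillation yields alternating steep and near-flat segments, with each near-flat segment ending warmer than the last:

```python
import numpy as np

years = np.arange(1900, 2001)
forced = 0.007 * (years - 1900)                               # assumed 0.7 C/century trend
oscillation = 0.10 * np.sin(2 * np.pi * (years - 1925) / 60)  # assumed 60-yr, 0.1 C amplitude
temp = forced + oscillation

# Linear trend (C per decade) within each half-phase of the oscillation
for start, stop in [(1910, 1940), (1940, 1970), (1970, 2000)]:
    m = (years >= start) & (years <= stop)
    slope = 10 * np.polyfit(years[m], temp[m], 1)[0]
    print(f"{start}-{stop}: {slope:+.2f} C/decade, ends at {temp[m][-1]:+.2f} C")
```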

      The situation is further complicated by evidence that the PDO itself is not totally independent of external greenhouse-gas forcing and/or forcing from aerosol changes (particularly in the mid-century record, when negative aerosol forcing contributed to temperature flattening). Rather, it probably represents a combination of inherent variability and variability in response to temperature changes imposed by external forcing. The evidence is not yet conclusive and the quantitation is uncertain, but the results suggest that to some extent, the PDO has been a result rather than a cause of climate change over the past 100 years. A relevant reference is Inherent and Forced Variability. It’s behind a paywall, but a recorded presentation of the same material can be found at Meehl – Pacific Climate Shift.

  106. Fred,

    The sulphate influence is one that is commonly raised – and it certainly can’t be discounted entirely. Chylek et al (2009) make an interesting observation: the temperature decline after the mid-1940s was most pronounced in the Arctic, and there is no reason why the aerosol effects should be amplified there by a factor of 10 or more.

    You continue to have no evidence but only rationalisations, and I have worn out my polite patience with you. Even your one reference, Meehl, relies only on some computer model or other. I’ve seen it before and took it with a grain of salt then. This is not evidence but numerical speculation. The physical evidence, as opposed to unfalsifiable models, is the important thing. There is a modern disease of thinking that anything that can be rationalised must be true – discarding any element of scientific scepticism or enlightened self-doubt.

    It is just nonsense to claim that something we don’t fully understand can be put in a model. This, for instance:

    One potential cause of Pacific Ocean variability is shown by Lockwood et al (2010). ‘During the descent into the recent exceptionally low solar minimum, observations have revealed a larger change in solar UV emissions than seen at the same phase of previous solar cycles. This is particularly true at wavelengths responsible for stratospheric ozone production and heating. This implies that ‘top-down’ solar modulation could be a larger factor in long-term tropospheric change than previously believed, many climate models allowing only for the ‘bottom-up’ effect of the less-variable visible and infrared solar emissions. We present evidence for long-term drift in solar UV irradiance, which is not found in its commonly used proxies.’

    Judith Lean (2008) commented that ‘ongoing studies are beginning to decipher the empirical Sun-climate connections as a combination of responses to direct solar heating of the surface and lower atmosphere, and indirect heating via solar UV irradiance impacts on the ozone layer and middle atmosphere, with subsequent communication to the surface and climate. The associated physical pathways appear to involve the modulation of existing dynamical and circulation atmosphere-ocean couplings, including the ENSO and the Quasi-Biennial Oscillation. Comparisons of the empirical results with model simulations suggest that models are deficient in accounting for these pathways.’

    I thought you were done talking to me? What a disappointment.

    Cheers
    Robert

    • The evidence that negative aerosol forcing played a substantial role in mid-century temperature flattening is based on observational data demonstrating that night-time temperatures rose while daytime temperatures fell – Wild, GRL. This strongly suggests a reduced solar exposure at the surface despite an underlying atmospheric warming and a relatively stable TSI. It would be nice to have more direct data, but we can’t go back 50 years and directly measure aerosol optical depth.

      Regarding the role of CO2 or other forcings as a component of the PDO (and possibly ENSO as well), the evidence is less conclusive but still plausible. Rather than continue disagreeing on this point, I think it might be worthwhile for interested readers to visit the two references I linked in my previous comment and make their own judgments.

      The temperature record of the past century contains bumps and dips, but no long term cooling trend at any point, despite mid-century PDO negativity and the increasing role of aerosols. The future is likely to be one where pollution controls will constrain any rise in aerosols while CO2 continues to rise. Based on the past, it is reasonable to predict that temperatures, rather than remaining flat, will rise significantly despite short term up and down variations. There may be surprises ahead that change that expectation, but it would be prudent not to depend on them.

  107. NAS – Abrupt Climate Change: Inevitable Surprises – http://www.nap.edu/openbook.php?isbn=0309074347

    I keep referring to this post at realclimate – but by all means read both the 2007 and 2009 papers.

    http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

    Even 10 more years of no warming would have huge political implications. Ignoring the potential is political suicide for carbon reduction.

  108. Fred Moolten | January 27, 2011 at 9:39 pm:

    Been out of town, but now saw this statement:

    “I can’t confirm your statement. Jim D appears to be right in stating that Phoenix cools more from day to night than Atlanta.”

    Well! Does this surprise you, since the rate of heat loss involves the fourth power of temperature?

    q = ε σ (T_h^4 – T_c^4) A_c

    I don’t think your response was relevant. What WOULD be relevant is some actual empirical data that demonstrates the “atmospheric greenhouse effect.” Radiation cartoons are not empirical evidence. You still need to explain why the surfaces of all planets with significant atmospheres are much warmer than they “should be,” based on the “atmospheric greenhouse effect” and the SB equations.
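
    For a sense of what the fourth power implies quantitatively, here is a minimal sketch of the equation above, with assumed, purely illustrative effective sky temperatures – a drier atmosphere radiates less back down and so presents a colder effective sky:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_loss(t_surface_k, t_sky_k, emissivity=0.95, area=1.0):
    """Net radiative loss q = eps * sigma * (T_h^4 - T_c^4) * A, in watts."""
    return emissivity * SIGMA * (t_surface_k**4 - t_sky_k**4) * area

# Assumed, illustrative effective sky temperatures (not measured values):
dry_night   = net_radiative_loss(305.0, 260.0)  # Phoenix-like: dry, cold sky
humid_night = net_radiative_loss(305.0, 285.0)  # Atlanta-like: humid, warm sky
print(f"dry sky:   {dry_night:5.0f} W/m^2")     # ~220 W/m^2
print(f"humid sky: {humid_night:5.0f} W/m^2")   # ~111 W/m^2
```

    On these assumed numbers the humid sky roughly halves the net radiative loss, which is the moderating effect discussed in the replies below.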

    • Jae – No, it doesn’t surprise me, because it is what Jim D correctly predicted.

      I believe your equation can’t legitimately be applied to total cooling achieved, but only to initial rates. Even so, it still fails to account for the lower cooling rate in Atlanta (try July numbers as an example). Jim D turns out to be right; the more powerful greenhouse effect in Atlanta moderates the temperature change. This is true even though the relative humidity in Phoenix, while lower than in Atlanta, is not extremely low.

    • The data show a typical temperature range in Phoenix is 15 degrees C, while in Atlanta it is 11 degrees C. There are months where Phoenix has both a higher maximum and a lower minimum than Atlanta. The Atlanta night-time cooling is moderated by the GHE of having more water vapor in the column.

      • I would add that this is not quite fair because we really need to look at clear nights only in both locations, and the data pointed to does not provide that information.

      • It’s true that water vapor exerts a greenhouse effect that moderates temperature, but the moderating effect of clouds is also due to the greenhouse effect of cloud water. Interestingly, perhaps a better comparison for Atlanta than Phoenix is Reno, Nevada, which has about the same maximum temperature as Atlanta but much lower minimums, consistent with its lower humidity.

  109. Fred:

    “…perhaps a better comparison for Atlanta than Phoenix is Reno, Nevada, which has about the same maximum temperature as Atlanta but much lower minimums, consistent with its lower humidity.”

    Nope. Too big a diff. in elevation (also latitude). It is these HIGH deserts that have led to the “cold desert night” myth.

    • Just for fun I checked Death Valley. Below sea level but about the same latitude. Humidity appears to be reasonably close between the two. It should have less anthro influence also. Night-time and daytime temps are a few degrees F above Vegas. I imagine if you could find a temp record in the open desert, Vegas would show even cooler, but Phoenix has anthro also.

      • Check out Daggett, CA, if you want to get away from anthro. It’s a little higher than Phoenix, but the latitude is close.

      • Las Vegas is already higher than Phoenix. Going higher just makes it worse!! 8>)

        Daggett gets about the same rain as LV but twice Death Valley, maybe 4″/yr!! Yup, 3 locations with temps seemingly tied to elevation, as they get about the same insolation and lack of humidity!!

  111. Hi Judith, you said in the header of your “Slaying a greenhouse dragon” thread (https://judithcurry.com/2011/01/31/slaying-a-greenhouse-dragon) that “Several of these individuals on John O’Sullivan’s email list actually agree with my assessment, even though they regard themselves as staunch AGW skeptics”. You may be including me in that group, so let me make it clear that I am not a “staunch AGW sceptic”, and neither are many other sceptics that I exchange opinions with. We are sceptical about the claimed significance of the small amount of warming that might arise from our use of fossil fuels. I repeat what I said to you in one of my e-mails yesterday (3rd): “We sceptics also reject the claims of the UN’s IPCC regarding the extent of radiative forcing in favour of the analyses of John Nicol (www.middlebury.net/nicol-08.doc), supported more recently in Roger Taguchi’s ‘Net Feedback Analysis’ (attached) showing approx 1C for a doubling of CO2”. We see no convincing evidence that this small amount of global warming will lead to significant changes to the different global climates.

    On that thread several references have been made to “lurkers” (I assume that, unlike “deniers”, that term includes those on either side of the debate) and yesterday you talked about teasing them out. I’m sure that there are a lot of lay people like myself who would love to be “lurking” in the background learning more about this vexed (and poorly understood) matter of CACC but are unaware of the very worthwhile exchanges taking place here. What I see happening here is (at long last) relatively open peer review of sceptical interpretations of relevant scientific (mis)understanding, but I only came across it by chance when Googling – “Judith Curry” “Claes Johnson”.

    I’ve let both John Nicol and Roger Taguchi know about this thread and hope that they’ll make time to contribute.

    It is not only the opinions expressed by the authors of “Slaying the Sky Dragon” (who appear to reject the “greenhouse effect” out of hand) that require such open review. Those of people like John (Climate Change (A Fundamental Analysis of the Greenhouse Effect) – http://www.middlebury.net/nicol-08.doc) and Roger (Net Feedback Analysis 2009_11_29 – copy passed to you) need similar review, and I think that your blog is one of the best places to do it because the exchanges that I have seen on it are far more civilised than on places like Real Climate, Brave New Climate, Sceptical Science, Stoat, Climate Progress, Rabett Run, etc.

    Best regards, Pete Ridley

  112. Over on the Climate Clash thread, our friend Al Tekhasski says in the final comment:

    RP: “radiation in the portion of the spectrum affected by CO2 escapes to space from the cold, dry upper portions of the atmosphere, not from the warm, moist lower portions. Also, as displayed in the inset to figure 2, the individual water-vapor and CO2 spectral lines interleave but do not totally overlap. That structure limits the competition between CO2 and water vapor.”

    AT response: good professor forgets here that the clouds are made of liquid water and absorb-emit with continuous spectrum. So they do compete with CO2 wings, and overwhelmingly. More, radiation in “CO2-affected” regions does escape from dry cold areas, but 95% of this region emits from really-really “upper portion”, which is stratosphere, where higher==warmer, so the effect of CO2 increase is opposite to “global warming”.

    In conclusion, the entire article is a good sales pitch to unsuspecting climatards.

    Are any of our resident physicists going to take this one on?
    Looks like a bit of a clincher if Al T is correct.

  113. We are cooling, folks; how much it’s got to do with CO2 even kim doesn’t know.
    ====================

    • A contributor to my blog has performed an empirical experiment to determine the degree to which back radiation slows the rate of cooling of the ocean surface.

      http://tallbloke.wordpress.com/2011/08/25/konrad-empirical-test-of-ocean-cooling-and-back-radiation-theory

      • Tallbloke,

        The experiment is reasonable, but the conclusion is wrong. The situation on real oceans corresponds much more closely to the second experiment, with the cover, than to the one where evaporation is allowed.

        How can I justify this claim?

        The reason is that in the case of the real ocean surface the comparison may be defined in either of two ways.

        1) The most straightforward case is the one where temperatures are kept constant and we look only at the net energy balance. In this case everything else is fixed, and the only thing that changes is the DLR.

        2) The second alternative is to allow all temperatures, as well as the moisture of the atmosphere, to settle to a new stationary state, where the incoming and outgoing energy fluxes are equal. This can be reached only by letting the surface and the lower atmosphere warm. Most importantly, the absolute humidity of the air near the surface is higher in this case.

        The experiment performed doesn’t correspond to either one. The room air and its moisture are practically the same with and without the IR warming, and probably relatively low, because the room is big enough to maintain a constant state. In that case the cooling by evaporation is rather strong and remains so throughout the experiment, as the room moisture doesn’t rise significantly due to increased evaporation. That alone is enough to invalidate the conclusion that case A represents the real ocean situation.

        Case B corresponds reasonably well to the first alternative way of looking at the real case.

        Developing case B further to allow a slow escape of moisture would perhaps be an even better fit, but that would require a much more complex experiment, in which the moisture near the surface is measured continuously and the ventilation is regulated to keep the relative humidity constant at a high value, as it is in the immediate proximity of the ocean surface.

        A better experiment could be built by adding heating at the bottom of the container and controlling it so that the temperatures remain constant in both cases. The difference would then appear in the power needed for heating with and without IR warming of the top. This experiment would correspond well to case 1 mentioned at the top of this comment. In the real ocean, solar SW does the heating from below.
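
        Pekka’s proposal amounts to a fixed-temperature energy budget; here is a minimal sketch of the bookkeeping it would expose, with all fluxes assumed and purely illustrative:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def heater_power(t_water_k, dlr, evap_flux, emissivity=0.96):
    """Power per unit area the bottom heater must supply to hold the
    water surface at t_water_k, given downwelling IR (dlr) at the top
    and a fixed evaporative loss. This is case 1 above: temperatures
    held constant, only the net balance compared."""
    upward_ir = emissivity * SIGMA * t_water_k**4  # surface emission
    absorbed = emissivity * dlr                    # fraction of DLR absorbed
    return upward_ir + evap_flux - absorbed        # W/m^2 from the heater

# Assumed, illustrative numbers: 15 C water, 100 W/m^2 evaporative loss
low_ir  = heater_power(288.0, dlr=300.0, evap_flux=100.0)
high_ir = heater_power(288.0, dlr=340.0, evap_flux=100.0)
print(f"heater power, low DLR:  {low_ir:.0f} W/m^2")
print(f"heater power, high DLR: {high_ir:.0f} W/m^2")
print(f"difference: {low_ir - high_ir:.0f} W/m^2")  # = absorbed extra DLR
```

        The heater-power difference is just the absorbed extra DLR; holding temperatures fixed is what isolates it from the evaporation term.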

      • Pekka, many thanks for your time in commenting on this experiment. I will reproduce what you’ve said on the thread. You are of course welcome to join the discussion if you can find the time.

      • Pekka,
        As you said in your reply, the humidity near the surface of the ocean is very high. That means there is a high density of water vapour molecules just above the ocean surface. I think this means that when the upward radiation from the sea surface is measured, a lot of the 390 W/m^2 that is thought to be radiating from the sea surface is actually radiating from the water vapour, not the contiguous ocean surface.

        Would you agree?

        The situation that you describe means only that the concepts of downward and upward must be defined close enough to the surface that the upward radiation really originates from the surface and the downward radiation is the same as what the surface receives when the measuring device is removed. Something like 10 cm would be good enough for that, but 1 m is perhaps too much.

        On the other hand, the result of the measurement is not expected to change if the surface temperature of the water and the temperature of the air between the surface and the measuring altitude are practically equal. If this is true, the device may detect photons radiated from the atmosphere below it, but the intensity of such photons just compensates for the absorption of photons from the surface. This compensation is precise for identical temperatures even when the absorption is very strong. (This is the essence of Kirchhoff’s law.)
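
        That compensation can be checked with a one-layer grey-gas sketch – an idealization with assumed absorptance and temperatures, not a model of any real instrument. The sensor sees transmitted surface emission plus the layer’s own emission, and when the layer is at the surface temperature the absorptance cancels out:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def measured_upward_flux(t_surface_k, t_air_k, absorptance):
    """Upward IR reaching a sensor above a grey absorbing air layer:
    transmitted surface emission plus the layer's own emission
    (emissivity = absorptance, by Kirchhoff's law)."""
    surface_emission = SIGMA * t_surface_k**4
    layer_emission = absorptance * SIGMA * t_air_k**4
    return (1.0 - absorptance) * surface_emission + layer_emission

# Equal temperatures: the absorptance drops out, whatever its value
for a in (0.0, 0.5, 0.9):
    print(a, round(measured_upward_flux(290.0, 290.0, a), 1))  # all ~401.0
# A cooler intervening layer reduces the measured flux
print(round(measured_upward_flux(290.0, 285.0, 0.5), 1))       # ~387.6
```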

      • Doesn’t that assumption neglect the energy of the latent heat of evaporation?

        The latent heat of evaporation is one component in the total energy flux from the surface to the atmosphere. Thus it affects the surface temperature similarly to the other flux components (DLR and upward IR, convection, and the heat transfer inside the ocean that brings almost all the energy of solar SW to the surface). Both convection and IR interact with the lowest atmosphere, keeping its temperature close to the ocean skin temperature. The latent heat is a cooling component, but that cooling is rapidly distributed to both the ocean below the skin and the lowest atmosphere.

        The more latent heat transfer there is, the less convection there is, because convection is the component that adjusts most easily to fill the gap left by other, less flexible mechanisms of energy transfer. Temperature differences are simultaneously adjusted to correspond to the strength of convection and to the rate of heat transfer on the ocean side of the skin.

        Latent heat from evaporation passes mainly through the lowest atmosphere and has less direct influence on its temperature. (The continuous exchange of water molecules from sea to air and back contributes, however, to the heat transfer very close to the skin, where conduction is also important, as it almost always is at very small distances.)
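
        The ‘convection fills the gap’ point is surface-budget bookkeeping; a minimal sketch with assumed, illustrative fluxes:

```python
def sensible_heat(absorbed_sw, net_ir_loss, latent):
    """With absorbed solar and net IR loss held fixed, sensible heat
    (convection) is the residual that closes the surface energy budget."""
    return absorbed_sw - net_ir_loss - latent

# Assumed values for a tropical ocean surface, W/m^2
for latent in (80.0, 100.0, 120.0):
    sh = sensible_heat(absorbed_sw=170.0, net_ir_loss=50.0, latent=latent)
    print(f"latent {latent:.0f} W/m^2 -> sensible {sh:.0f} W/m^2")
```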

  114. Thanks again Pekka, you’ve given me plenty to think about there. Now, what do you think of Al Tekhasski’s argument above?
    https://judithcurry.com/2011/01/19/pierrehumbert-on-infrared-radiation-and-planetary-temperatures/#comment-104918

    • Clouds are certainly important, but low-lying clouds are not very different from the surface from the point of view of the energy balance. When their net heat loss is reduced, that affects the fluxes between the surface and the clouds in a way that ultimately has a warming effect very similar to a change in the radiative balance of the surface. High-lying clouds and CO2 do not combine to quite so similar an outcome.

      Most of these arguments apply more to the simple explanations of the effects than to the full calculations that can be done as a first step in estimating the consequences of additional CO2 using the climate models. When this first step is restricted to the calculation of radiative forcing, none of the main problems of climate models enters the calculation. The model is used only to provide input data on the state of the present atmosphere, i.e. data from observations, not simulations. When this is done, all the issues brought up by Al Tekhasski are taken care of in a way based on empirical data on the present atmosphere.

      I have found every now and then that Pierrehumbert is not always very careful in his formulation, which may lead to statements that are not strictly correct. There’s perhaps a little of that also in the quote you present.

      • PP;
        that 1st paragraph is in almost incomprehensible English. Get a native speaker friend to help rephrase it, please. I think it might be important, but I can’t tell.
