CO2 residence time discussion thread

by Judith Curry

There was some discussion of this topic in the context of Murry Salby’s talk, but it has been suggested that the topic deserves its own thread.


This post is motivated by the following email from Hal Doiron:

Hello Dr. Curry,
In my review of climate change literature related to atmospheric CO2 sources and sinks, I have run into a wide range of opinions and peer-reviewed research conclusions regarding the following specific question, which I think is central to the CAGW debate.
How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes?
Sources of confusion in answering this question:
1.  In responses to one of my comments at Climate, Etc., Fred Moolten has claimed the answer to this question is about 100 years.  http://judithcurry.com/2011/08/18/should-we-assess-climate-model-predictions-in-light-of-severe-tests/#comment-101642     “To focus on the most relevant element in this situation, CO2, the salient feature is the exceedingly long lifetime of any atmospheric excess we generate from anthropogenic emissions – there is no single decay curve, but the trajectory of decline toward equilibrium concentrations can be expressed as a rough average in the range of about 100 years, with a long tail lasting hundreds of millennia. In other words, the CO2 we emit tomorrow, or refrain from emitting, is not something we can take back if we later decide we shouldn’t have put it up there. It will warm us for centuries.”
2.   From an ESRL NOAA website http://www.esrl.noaa.gov/gmd/education/faq_cat-1.html#17 , I found:
·   What will happen to Earth’s climate if emissions of these greenhouse gases continue to rise?

Because human emissions of CO2 and other greenhouse gases continue to climb, and because they remain in the atmosphere for decades to centuries (depending on the gas), we’re committing ourselves to a warmer climate in the future. The IPCC projects an average global temperature increase of 2-6°F by 2100, and greater warming thereafter. Temperatures in some parts of the globe (e.g., the polar regions) are expected to rise even faster. Even the low end of the IPCC’s projected range represents a rate of climate change unprecedented in the past 10,000 years.
3.   I believe in listening to Dr. Murry Salby’s audio lecture at Climate, Etc., his research led him to the conclusion that the atmospheric residence time of CO2 from fossil fuel burning emissions was only a few years.  This was related to investigation of the trends of the ratio of Carbon 12 to Carbon 13 isotopes in the atmosphere.
4.   There is previous published literature, also based on the ratio of Carbon 12 and Carbon 13 isotopes that Dr. Salby discussed, that concludes the atmospheric residence time of CO2 from fossil fuel burning is about 5 years.  This literature is reviewed and cited by my former NASA colleague, Apollo 17 astronaut, and former US Senator, Dr. Harrison “Jack” Schmitt, in his essay on CO2 at: http://americasuncommonsense.com/blog/category/science-engineering/climate-change/4-carbon-dioxide/#r4_14


As I monitor the debate on CAGW, it seems to me that if this particular recommended thread topic could be settled with high confidence, then much of the CAGW alarm could be moderated and refocused on a broader range of climate change issues.  I suggest it should also be a key research topic for further investigation in an attempt to answer the posed question with high confidence.

Sincerely,
Hal Doiron
JC comment:  I don’t have a good answer to the question Hal raises.  Below are some online references that I’ve spotted, from across the spectrum.
And finally an exchange between Freeman Dyson and Robert May in the NY Review of Books:
I don’t have time to dig into this issue right now, so I’m throwing the topic open for discussion, hoping for some enlightenment (or at least confusion) from the Denizens.

1,192 responses to “CO2 residence time discussion thread”

  1. John Carpenter

    Judith,

    The Freeman Dyson/Robert May links are not working.

  2. Only a few years. For the most part it’s even shorter: most of it is removed locally and immediately. Atmospheric CO2 is driven/determined by global climatic factors, whatever that means (SST, sea ice extent…). Think H2O.

    • Edim – I don’t see the relevance of that article to CO2 residence time. I read the full article, not just the abstract, and found it interesting in its analysis of ion-mediated nucleation rates involving sulfuric acid. It did not address the question of how many nuclei could be induced to grow to a size of climate significance for low cloud formation, although earlier data have shown this probably to be relatively small, and the results reported here are consistent with that possibility.

  3. Time to read up on cosmic rays and climate: see the latest edition of Nature. The game is up.

  4. Hal Doiron has written:

    “As I monitor the debate on CAGW, it seems to me that if this particular recommended thread topic could be settled with high confidence, then much of the CAGW alarm could be moderated and refocused on a broader range of climate change issues.”

    I find the abbreviation CAGW to be somewhat objectionable in its misrepresentation of mainstream views, but that is a minor quibble and peripheral to the main topic here. Rather, I would like to ask Hal Doiron a question, because I’m not sure how to interpret his statement.

    Hal – At what duration for the residence time of excess CO2 would you perceive that interval to be long enough to be of concern? How many years specifically for an interval representing an average residence time for the excess? For clarity, I’m referring to the time necessary for an excess over an equilibrium concentration to return to baseline, where this can be approximated as a “half-life” although that would not be accurate in the formal sense because there is no single exponential decay curve.

    If it takes X years for the excess concentration to decline halfway, what value of X would be worrisome for you?

    • Harold H Doiron

      Fred,

      I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet. I don’t know what a harmful level of CO2 in the atmosphere would be. I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about health related effects (more than 20 times current levels). I don’t know what the optimum level of CO2 in the atmosphere would be, all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.

      If the human-activity-related CO2 atmospheric residence time is closer to 5 years, as some scientists claim, and not the 100 or so years that you and NOAA apparently believe, then the growth rate of CO2 in the atmosphere from human-related causes (compared to natural CO2 sources and sinks that I don’t know how to control) isn’t so high that I should need to take immediate, potentially harmful action with unknown and unintended consequences to restrict human-related CO2 emissions (your medical triage example suggested in a previous thread on decision making with limited data), as many climate scientists have called for. That is, if we can answer the question of the current thread closer to the 5-year residence time mark, then we can all agree we are not in a triage situation regarding control of CO2 emissions, and that we have more time to work the problem of climate change with perhaps different decisions and action plans. After suggesting some “first steps” to take in response to your suggestion for defining such “first steps” in a previous thread http://judithcurry.com/2011/08/18/should-we-assess-climate-model-predictions-in-light-of-severe-tests/#comment-101642 , it occurred to me that if we could answer the question of this present thread, it would change the crisis atmosphere that many climate scientists believe they are working in, which makes them worry so much about inaction in the face of the dire but uncertain predictions of their unvalidated models.

      • “If the human activity related CO2 atmospheric residence time is closer to 5 years as some scientists claim, and not the 100 or so years that you and NOAA apparently believe”
        Again this is the elementary misunderstanding that Judith seems to make no effort to dispel. Both figures are correct. Individual molecules are exchanged on a timescale of five years or so. And the increase in total CO2 takes a century or more to go away.

        It’s a bit like worrying about the Government printing money (OK, in pre-electronic days). In fact a huge number of notes are printed and destroyed each year. Any excess printed is small compared to circulation.

        But circulation, like exchange of molecules, is just that. What counts is change in the aggregate.
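        The money analogy maps onto a toy one-box model. The sketch below uses round, purely illustrative numbers (the stock, gross flux, and net-uptake rate are assumptions, not measured values) just to show how a ~5-year molecular turnover time and a ~100-year adjustment time can coexist in the same system:

```python
# Toy one-box atmosphere: illustrative numbers only, chosen to show how
# two very different "residence times" can both be correct at once.
atmosphere = 800.0      # GtC held in the atmosphere (round figure, assumed)
gross_exchange = 160.0  # GtC/yr swapped both ways with ocean/biosphere (assumed)
net_uptake = 0.01       # fraction of any EXCESS removed per year (assumed)

# Residence without replacement: how long an individual molecule stays
# before being absorbed somewhere, regardless of what replaces it.
turnover_time = atmosphere / gross_exchange   # 800/160 -> 5 years

# Adjustment time: e-folding decay of an excess above equilibrium,
# governed by the small NET flux, not the large gross exchange.
adjustment_time = 1.0 / net_uptake            # 1/0.01 -> 100 years

print(turnover_time, adjustment_time)
```

        With these assumed numbers the molecule-turnover figure lands near the few-year isotope estimates while the excess decays on a century scale, with no contradiction between the two.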

      • Marlowe Johnson

        I share your surprise Nick as this is a fairly common and relatively easy-to-dispel source of confusion. Aren’t you a climate scientist Judith? If you can’t take the time to answer simple questions like this, then what is the point of your blog?

      • Like I said in my main post, I suspect that this whole issue is much more complex than you are making it out to be. Your question, while simple, does not have a simple answer, IMO.

      • Marlowe Johnson

        While the particulars of estimating equilibrium response times are governed by multiple processes (as noted by Fred and Chris Colose downthread), it’s nevertheless trivial to point out that the atmospheric lifetime of an individual molecule isn’t the relevant issue…something which you bizarrely failed to do. Is your goal with this blog to sow confusion or dispel it?

      • The goal of the blog is to discuss scientifically relevant issues, which we are doing.

      • There’s again a mixture of simple things and less simple things.

        That we have two different residence times belongs to the simple things. Estimates good enough to conclude that the increase in atmospheric CO2 over the last 50 years (and also over the last 100 years) is predominantly of human origin also belong to the rather simple things.

        More precise estimates of the uptake of carbon from the atmosphere by the other reservoirs are no longer simple, because none of the subprocesses is accurately known. The balance between the surface ocean and the atmosphere is the best understood of all, but even that is strongly influenced by the details of pH buffering in the oceans.

        One question on which I have not found data at the level I have been looking for is the role of the deep oceans as a reservoir. The total amount of carbon in the deep oceans is typically given as of the order of 40,000 GtC, which is 50 times the amount in the atmosphere. Its role in uptake over long periods has not been discussed in the papers I have found. Archer skips the discussion in his papers, stating only the overall conclusion that 20-35% of CO2 remains until removed by sedimentation and weathering, and presents the 1990 model analysis of Maier-Reimer as the reference, which is not convincing. A Revelle factor of 10 would lead to a value of 17% once balance with the ocean has been reached, but is the Revelle factor for the deep ocean really 10? The value depends on the present pH and on the nature of the buffering. Reaching full balance with the deep oceans also takes a lot of time, but Archer seems to include that among the faster processes.

        On the other hand, I don’t see the importance of the level of the very long tail either. I would rather think that what happens after the excess CO2 concentration has dropped to one half of its peak value is not likely to be a problem for the further future of the Earth and the people of those periods. They may very well regard the then-decreasing trend as a new problem, and a lesser one the lower it is.
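        The 17% figure quoted for a Revelle factor of 10 can be checked with a one-line partition estimate. The reservoir sizes below are the round numbers from the comment above (atmosphere ~800 GtC, deep ocean ~50 times that), and the formula assumes a simple equilibrium partition of an emission pulse:

```python
# Equilibrium partition of an emission pulse between atmosphere and ocean.
# A Revelle factor R means the ocean's effective capacity for EXCESS carbon
# is only about (ocean carbon)/R, not the full ocean inventory.
atmosphere = 800.0     # GtC (deep ocean ~50x this, per the comment above)
deep_ocean = 40000.0   # GtC
revelle = 10.0         # assumed Revelle factor for the deep ocean

airborne_fraction = atmosphere / (atmosphere + deep_ocean / revelle)
print(round(airborne_fraction, 2))   # ~0.17, matching the figure above
```

        This is only a back-of-the-envelope balance; it says nothing about how long the approach to that balance takes, which is the harder part of the question.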

      • Search under the term “biological pump” and “carbon cycle”

        It is really interesting stuff.

      • It is indeed. The partitions between the pumps (solubility and biological) are asymmetrically distributed, around 1/3 for the former and 2/3 for the latter.

        The effects in perturbation experiments are interesting: if we shut the biological pump off completely, we observe an increase from around 280 ppm preindustrial to 450 ppm over similar timescales, e.g. Sarmiento et al. 2011.

      • A study of decay of bomb created radioisotopes
        http://nzic.org.nz/CiNZ/articles/Currie_70_1.pdf
        suggests a primary decay half life of 13 years.
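        For comparison with the e-folding time constants quoted elsewhere in the thread, a 13-year half-life converts as follows (simple arithmetic, no claim about which definition of residence time the bomb data measure):

```python
import math

# Convert the quoted primary-decay half-life to an e-folding time constant.
t_half = 13.0               # years, the half-life quoted above
tau = t_half / math.log(2)  # equivalent 1/e decay time constant
print(round(tau, 1))        # ~18.8 years
```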

      • A. E. Ames, It was only about four months before the background radiation levels moved to a new, sustained level (see: gamma energy range 10)

        http://epa.gov/radnet/radnet-data/radnet-billings-bg.html

        And does ‘whose’ or ‘what’s’ make any difference when considering ‘half-life’?

      • Tom, I don’t know for sure, but I expect that the “non replacement removal time” for all CO2 isotopes should be the same within the accuracy of any experiment to measure it. (Molecular weight can matter in packing effects and substrate interactions.) Thanks for the interesting radnet site.

      • Interesting answer as Marlowe’s questions were

        1. Aren’t you a climate scientist Judith?

        Obviously a more complex question than Marlowe thought and

        2. If you can’t take the time to answer simple questions like this, then what is the point of your blog?

        We HAVE been wondering about that.

      • Eli, you and Marlowe seem to be confusing a climate scientist with someone who parrots the IPCC consensus.

      • Eli was just pointing out that you are avoiding the questions. This appears to be a sensitive point.

      • I get 500 comments here per day. I try to do a post per day. Not to mention my two day jobs. I only answer comments/questions that I can respond to within 60 seconds, and I have to be selective in which ones I answer. If someone raises something really interesting, I might do an entire post on it. People that are trying to play “gotcha” with me and want to tell me how I should be doing my job (here or more generally) typically get ignored by me.

      • Judith,
        The point at issue has nothing to do with any IPCC consensus. It is a matter of elementary physical chemistry. Freeman Dyson, hardly an IPCC parroter, set it out simply and explicitly in your second link:
        “He says that the residence time of a molecule of carbon dioxide in the atmosphere is about a century, and I say it is about twelve years.

        This discrepancy is easy to resolve. We are talking about different meanings of residence time. I am talking about residence without replacement. My residence time is the time that an average carbon dioxide molecule stays in the atmosphere before being absorbed by a plant. He is talking about residence with replacement. His residence time is the average time that a carbon dioxide molecule and its replacements stay in the atmosphere when, as usually happens, a molecule that is absorbed is replaced by another molecule emitted from another plant. “

        If FD finds it easy to resolve (and it is) why is it so hard here?

      • Nick, atmospheric residence time is NOT a simple issue of physical chemistry! I didn’t see any physical chemistry at all in your argument.

      • Nick, if you agree with Freeman Dyson’s skepticism and it’s already resolved by him, then why do you keep asking?

      • Judith,
        The elementary physical-chemistry concept involved is dynamic equilibrium: the net result of a forward and a back reaction. The time period derived from isotopes reflects the one-way process of CO2 molecules being absorbed by plants or whatever. As Dyson points out, there is a back reaction: that CO2 gets back into the atmosphere. If you want to know how total CO2 will diminish, you have to consider the result of both forward and back. That, as Dyson spells out, is the simple basis for the two different figures.

      • Nick, you are forgetting about the dynamics of the system, that is where the complexity lies

      • couldn’t the same be said about the greenhouse effect re complexity of the system

      • Nick, you are thinking of it as a black box system. Some want to consider what is happening inside it, some may know it as a clear box. Why is this a hard concept for you?

      • Kermit,
        I am not talking about any sorts of boxes. The issue is clear. Two different figures have been quoted for CO2 residence time. Hal and others ask whether the IPCC is wrong. The answer, as Dyson explained, is simple: it has nothing to do with dynamics, boxes or anything else. They are talking about different definitions. If someone wants to make an issue of it, they need to explain how the figures are comparable.

      • Which “dynamics” would those be, exactly?

        Nick, you are forgetting about the dynamics of the system, that is where the complexity lies

        If you’re talking about the carbon cycle, those would be dynamics about which, in your own words, you do not “have any expertise!”
        Judith Curry: “The Earth’s carbon cycle is not a topic on which I have any expertise.”
        Or has Murry taught you all about the carbon cycle, within the month of August?

      • Yes Nick, we understand there are two definitions being used for residence time. One is large and one is small; both are valid. Which one sounds better to the IPCC, the larger, scarier one or the smaller one? This is exactly why you are going on and on about it. It’s just another shell-game “trick” environmentalists use.

      • 1. You expect people to answer rhetorical questions?

        2. If this blog has no point, then why are there so many commenters?

      • Not that your opinions about the carbon cycle count for anything.

        Judith Curry: “Your question, while simple, does not have a simple answer, IMO.”

        With all due respect, you have already begged ignorance on this question and should “remain silent and thought a fool” rather than assert things you just don’t know, “and remove all doubt.” Really. Because by your own admission, you’re not qualified to make that judgment, as your opinion is not expert.

        Judith Curry: “The Earth’s carbon cycle is not a topic on which I have any expertise.”
        (emphasis added due to your repeated refusal, despite numerous requests, to offer any scientific justification of your assertion that “it is sufficiently important that we should start talking about these issues.“)

        All that you can truthfully say now about the carbon cycle is that you do not understand it. Or, you can revise your previous statement that you do not “have any expertise.” That of course is up to you. But what you cannot do, at least not consistently (in case you care about that), is claim 100% ignorance while you’re promoting Salby, but then within the time of a month (hardly enough time to have become expert if you weren’t already!), declare with any authority that anybody else’s analysis of same is wrong.

        You didn’t have the expertise to articulate any reason then that Salby’s analysis is right or even likely right, therefore you also don’t have the expertise now to say whether anybody else’s is wrong.

        Not having “any expertise” on this question, it’s just the analysis you can provide right here, right now, versus theirs. To prove Eli, Marlowe and Nick wrong, you must show that at least one term they neglect, or the sum of terms they neglect in their analysis, have magnitude equal to (at a minimum) or greater than the terms that they include in their analysis. This is not a matter of opinion. As a scientist (former?), you know that “IMO” doesn’t cut it. Your opinion counts for nothing. Either you can show that other terms invalidate their analysis, or you cannot and in that case you have no basis to fault their analysis.

      • All humanity is divided into two classes; those who don’t understand the carbon cycle and those who don’t understand that they don’t understand the carbon cycle.
        =============

      • No. It isn’t complex. Let me list a quick series of proofs:

        1) The ice core record clearly shows that the recent increase in CO2 concentrations is, to use an overused word, unprecedented in the Holocene period, and, indeed, in the last 800,000 years, and non-ice core approaches show that the current CO2 may exceed levels seen for the last 20 million years (Tripati, Science, 2009).

        But, maybe you choose to throw out paleoclimate records because… well, because they don’t fit your preconceived notions that human emissions are too small to be meaningful. So…

        2) The Revelle factor: straightforward bicarbonate buffer chemistry known since 1957-1958 (Revelle & Suess 1957, Bolin and Eriksson 1958) shows that “a 10% increase in the CO2-content of the atmosphere need merely be balanced by an increase of about 1% of the total CO2 content in sea water” (Sabine et al., Science, 2004 estimate that the uncertainty of this factor ranges from 8 to 16). This is the key element that means that the ocean CANNOT absorb all the CO2 proportionally the way that Henry’s law might suggest, and that therefore a decent percent of any CO2 emission will stay in the atmosphere for thousands of years until sedimentation processes have sufficient time to react (See Archer et al., Annu. Rev. Earth Planet. Sci. 2009, for a review of millenial processes). This is a key concept that many short residence timers have never figured out – they just look at the total gigatons of carbon in the ocean and figure that it can easily soak up any increase in the atmosphere, but they’re wrong.

        3) Carbon cycle models have done a pretty good job of explaining the difference between “residence time” and “adjustment time”, dating back at least as far as Rodhe and Björkström in 1979 (turns out that this decoupling is a result of this bicarbonate buffering). This (among other things) is what trips up people like Essenhigh and Segalstad.

        4) Carbon cycle models also do a decent job of explaining trends in isotope levels in the atmosphere. Eg, Stuiver et al. Earth and Planetary Science Letters 1981, or Stuiver et al. GRL, 1998. See also, “Suess effect”. People like Salby and Spencer do cute little regressions, but they don’t have real physical models that actually show how and why concentrations and isotope levels have changed the way they have: nor do they test their regressions against existing carbon cycle models – if they did, they’d see that the existing models produce the right signatures. Essenhigh has a model, but first he handwaves the increase in concentrations based on a simplistic understanding of the CO2 solubility-temperature relationship (Revelle and Suess realized that the magnitude of this relationship was insufficient to explain observed CO2 changes back in 1957), and Essenhigh derives lifetimes for 12C and 14C that are different by a factor of more than THREE!!! I still can’t believe that any competent reviewer with a basic knowledge of chemistry could have let that pass – isotope separation is HARD (see Uranium separation): kinetic isotope separation for seawater is on the order of a couple tenths of a percent, and for plant matter is maybe a couple percent at best, so where does a factor of 300% come from?! (also, the carbon cycle field realized that the ocean wasn’t perfectly stirred at least 35 years ago: see Oeschger et al, Tellus, 1975 for an early example of switching to a more realistic diffusive model).

        5) These things have been debunked before. I recommend O’Neill B, Measuring Time in the Greenhouse, Climatic Change, 1997.

        Does this mean that we understand the carbon cycle, or that our models are perfect? Not hardly. I recommend Doman AJ; van der Werf GR; Ganssen G; Erisman, JW; Strengers B, A Carbon Cycle Science Update Since IPCC AR-4, A Journal of the Human Environment 39(5-6):402-412, 2010 as a review of what the actual interesting questions in carbon cycle science these days are.

        So, to review: your suspicions are totally off-base. If this was a biology blog, this kind of post would be the equivalent of wondering whether the “missing link” disproves evolution. If it was an astronomy blog, it would be the equivalent of giving press-space to the guys who claim that the double shadow from a flag proves that the Moon landing was faked. By sticking to the position that this is a scientifically relevant topic of discussion, your status as a “meta-expert” is cast in doubt, since you are apparently not “able to distinguish a genuine expert from a pretender or a charlatan.” I grant you that you at least have figured out the Sky Dragons are charlatans – but, to go back to my astronomy blog analogy, that’s like figuring out that the guys who claim the Moon is made of green cheese are charlatans. Given your impressive publication record, you should be able to do a lot better than this. And maybe this should make you wonder whether you are a little too hasty to give credence to the “not-IPCC” crowd. And yes, the IPCC is certainly not perfect, but that doesn’t mean that if the IPCC says a clear mid-day sky looks blue, you should go around believing people who claim it is actually more like purple.

        -M

        (and yes, I do get frustrated at having to debunk stupid myths over and over and over again. There are plenty of real, interesting uncertainties regarding an issue as complex as climate change, and especially with regards to appropriate mitigation or adaptation measures, so it is really frustrating that the debate keeps getting bogged down in questions that were solved 50 years ago)

      • M,

        The Appeal to Evolution is always a curiosity, but it’s irrelevant to questions of climate, even the 1,000,000th time it’s tried.

        Andrew

      • It may be irrelevant to questions of climate, but it is very relevant to questions of self-delusion. I note that you did not address a single one of my technical arguments. I could also keep going:

        6) The northern-hemisphere/southern-hemisphere gradient, and the relationship of that gradient to increasing CO2 emissions (see graph in IPCC AR4 Chapter 7).

        7) The mass argument that many here have used before.

        8) Surface ocean pH is increasing faster than pH at depth (indication of diffusion, and of which direction the CO2 is going).

        I’d also point out Engelbeen’s webpage.

        Anyway, have fun with your delusions,

        -M

      • M,

        would you like to point out the empirical data that were used to show the Revelle Factor??

        OK, how about the experiments that were done??

        Well, what have you got other than assertions??

        Please copy from the papers copiously as I am very ignorant.

        (and yes, I do get tired of having to debunk the same old tired Junk Science over and over again every time someone doesn’t actually read and COMPREHEND the papers they link.)

      • Good job, kuhnkat – recognizing your own ignorance is the first step to wisdom!

        If you were to read Sabine et al., you’d find out that they used data from the World Ocean Circulation Experiment (WOCE) and the Joint Global Ocean Flux Study (JGOFS) to measure inorganic carbon. The Revelle factor can be calculated as proportional to the ratio between DIC and alkalinity. Therefore, Sabine et al. were able to produce a global map of the Revelle factor, ranging from 16 in cold Antarctic waters to 8 in warm tropical basins.

      • M,

        you just made the mistake of using bald assertions again. I told you, copy a lot. I do not expect people to remember stuff I wouldn’t if I were on the other side. I DO expect them to actually support their assertions with more than more arm waving. You simply state they say you can do it.

        SHOW ME!!! Show me why THEIR assertions are meaningful!!

      • Sigh. I’m not a carbon-cycle expert myself, but I have read enough of the literature to be able to give you somewhat of a tutorial.

        First: in pure H2O, CO2 is not very soluble, but it would follow Henry’s law. But seawater isn’t pure, there’s a lot of buffer in there, and that buffer allows seawater to dissolve a _lot_ more carbon than pure H2O would be able to.

        Second: The total carbon in seawater is equal to the carbon in the dissolved CO2/H2CO3 plus the HCO3- plus the CO3=.

        Third: Adding CO2 to the solution increases the acidity, driving the equilibrium towards CO2/H2CO3.

        Fourth: Therefore, the ratio of CO2/H2CO3 to (HCO3- plus CO3=) increases with added CO2.

        Fifth: The CO2 in the atmosphere is in Henry’s law equilibrium only with the CO2/H2CO3 in the solution. So, if we were to add a strong acid to the solution, increasing the ratio in part 4, that would decrease the total carbon in the ocean. Of course, when adding CO2 we’re adding both an acid and CO2 at the same time, and so the increased acidity merely means that the increase in total oceanic carbon is _less_ than the 1:1 you’d expect by pure Henry’s law rather than leading to a net loss of carbon.

        Okay: so that’s the theory. We can measure DIC (that’s Dissolved Inorganic Carbon, see part 2) experimentally. Bolin and Eriksson found that [CO2] = 0.0133 mmol, [HCO3-] = 1.9 mmol, and [CO3=] = 0.235 mmol. We can also measure alkalinity experimentally, which is important because alkalinity = [HCO3-] + 2[CO3=], and Bolin and Eriksson found alkalinity equal to 2.37 mval. You can combine that with the dissociation constants of H2CO3 and the solubility of calcium carbonate – k1/[H+] = 143 and k1k2/[H+]^2 = 18, the calcium concentration of seawater [Ca++] = 10 mmol – and with Henry’s law you can solve for the Revelle factor and get about 12.5.

        And now I’m tired of typing stuff in, but I’ve found a non-paywalled reference for you: http://ocean.mit.edu/~mick/Papers/Omta-Goodwin-Follows-GBC-2010.pdf. I will note one caveat to my above data, which is that Egleston (2010), referenced in Omta (2010), shows that inclusion of borate in the alkalinity equation changes the answer somewhat, especially as the pH of the solution approaches 7.5. If you want more experimental data, Egleston uses alkalinity and DIC from the GLODAP project and temperature and salinity from the World Ocean Atlas (because the dissociation constants are functions of temperature and salinity).
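        The arithmetic above can be sketched in a few lines. This uses the Bolin & Eriksson speciation quoted above and the carbonate-system buffer algebra of Egleston et al. (2010), with the borate and water contributions neglected (an assumption, per the caveat about Egleston); it is a sketch of the calculation, not a full carbonate-chemistry solver:

```python
# Revelle factor from the Bolin & Eriksson (1958) surface-water speciation
# quoted above, via the buffer algebra of Egleston et al. (2010), with the
# borate and water terms neglected (assumption).
co2  = 0.0133   # [CO2] + [H2CO3], mmol/kg
hco3 = 1.9      # [HCO3-], mmol/kg
co3  = 0.235    # [CO3=], mmol/kg

dic   = co2 + hco3 + co3     # dissolved inorganic carbon
alk_c = hco3 + 2 * co3       # carbonate alkalinity (2.37, as quoted)
s     = hco3 + 4 * co3       # buffer denominator, borate/water dropped

gamma = dic - alk_c**2 / s   # dDIC/dln[CO2] at constant alkalinity
revelle = dic / gamma
print(round(revelle, 1))     # ~12.6, close to the ~12.5 derived above
```

        The large ratio of DIC to the buffer term gamma is exactly why the ocean takes up excess CO2 far less readily than a naive Henry’s-law estimate would suggest.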

        Sadly, I suspect that you are a troll, and that my work here is therefore meaningless, but perhaps others can learn something…

      • Thank you, M!
        It’s good stuff, what we are calling multi-physics these days.

        “The quantity O has turned out to be particularly useful for a number of reasons. First of all, it is much more constant than the Revelle buffer factor that has been extensively applied in theories of the ocean-atmosphere carbon partitioning.”

        So you think I should accept this paper, which finds the Revelle buffer factor to be less useful, as proving the Revelle buffer factor??

      • Ah, yes, the subtle* difference between “this approach is better” and “the old approach is not useful”.

        Omta et al: “Unfortunately, R is not constant: it varies between approximately 8 and 15 at the ocean surface [Watson and Liss, 1998]. Furthermore, the globally averaged Revelle buffer factor depends strongly on the total amount of carbon in the ocean‐atmosphere system [e.g., Goodwin et al., 2007]. We now derive an alternative index that is more constant than R.”

        Translation for the purposes of my original argument: The Revelle factor is large (in this case, greater than 8). That’s all that’s needed for the use of the Revelle factor in support of the fact that CO2 has a long residence time in the atmosphere. You’ll note that in my original post I cited Sabine et al who also discussed this range of 8 to 16. Omta et al. also point out that the Revelle factor will increase with increasing emissions, up to a factor of 19 at 1000 ppm CO2, which just increases the proportion of CO2 which stays in the atmosphere. Nowhere does Omta say that the Revelle factor is wrong, merely that they prefer their new “O” factor because its behavior is less sensitive to changes in emissions and other factors (and that behavior is monotonic, even past 1000 ppm).

        So yes, this paper supports my point quite well. It also demonstrates the difference between real science (“let’s figure out how the Revelle factor changes over space and different scenarios, and whether there might be other factors that could be useful descriptions of the system”) and junk science (“the Revelle factor doesn’t exist, and the historic CO2 concentration increase might be due to magic fairies rather than human CO2 emissions, despite the fact that several completely independent methods all demonstrate otherwise”).

        -M

        *This is meant to be sarcastic, by the way. I realize you might not do well with subtle.

      • M,

        sorry, your sarcasm doesn’t work very well.

        Let’s start with the basics. Where is the definition of the Revelle Factor? That is, what are the components, the constant(s) if any, the equations, relationships?

        Next, what are the observations or experiments that these are based on?

        Sadly, you are typical of the knowledgeable types who deal with Climate Science regularly and never question the basics. The basics do not exist in your explanations and do not exist in that paper. Whether they actually exist in the papers referred to by you and the paper you linked I do not know.

        Here is a discussion about the same issue by Jeff Glassman and Pekka Pirilla:
        http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99490

        http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99650

        Again, there are no basics establishing that the Revelle Factor was EVER fundamentally established as a part of science. Pekka, like you, points to many issues that very well could have been a part of a Revelle Factor IF IT HAD EVER BEEN ESTABLISHED!!!

        It wasn’t. It is one of several myths of Climate Science that modern scientists are working around and filling in. Notice that Pekka states that the Revelle Factor could actually range to 1. So we have a mythological factor that can range from 1 to over 16 that is a kind of buffer effect for CO2 and water. Gee whiz. Color me impressed by all the hard science going on!! Any idea what the temperature curves are like? How about what actually causes the buffering and its curve or linear relationship? Yup, I am just overwhelmed by all the data you have inundated me with about the Revelle Factor.

        This is sarcasm in case you didn’t notice.

      • Let’s try a little reading comprehension here, Kuhnkat:

        You say: “Notice that Pekka states that the Revelle Factor could actually range to 1.”

        Pekka states: “Thus Henry’s law remains valid for fixed pH. The Revell factor is 1.0 in that case.”

        Um. Note the conditional in that sentence (you do understand conditionals, right? I don’t want to overtax your small brain, as you have previously admitted that you are “very ignorant”). If we keep pH fixed, then the Revelle factor is 1.0. But, in the real world, pH is not constant, and adding CO2 will make a solution more acidic. If you want to test that, extract some red cabbage juice (which makes a good pH indicator), and you can show that adding dry ice (or even just waving a juice-soaked towel in the air to absorb CO2) will increase the acidity. If you read my post starting at “First:” you’d see that the chemical theory is clear. Yes, the Revelle factor depends on the buffering, dissolved carbon, and temperature of the solution. That doesn’t make it “mythological”, it just makes it dependent on conditions. Is gravity mythological because it is 9.8 m/s^2 here, but 0 m/s^2 in interstellar space far from any massive bodies? I think not. I actually gave you every single constant and equation you needed (assuming you know what a dissociation constant is, but then, I’m not here to teach you high school chemistry, though you could clearly use a refresher). So the Revelle factor can be experimentally measured, and it has been, with the answer being “between 8 and 16 depending on where in the ocean you look”.

        Heck, you could dissolve some sodium bicarb in solution and measure Revelle factors yourself.

        You, my dear troll, are “very ignorant”. And this is why Curry’s attachment to her “e-salon” is of little value. It is saturated with people like you. Which is why the best blogs use moderation to keep the signal to noise ratio at some reasonable level.

      • You, my dear troll, are “very ignorant”. And this is why Curry’s attachment to her “e-salon” is of little value. It is saturated with people like you. Which is why the best blogs use moderation to keep the signal to noise ratio at some reasonable level.

        Curry’s E-salon is of high value because people here are very willing to learn from experts who drop by and impart knowledge, and occasionally the experts acknowledge something they pick up here which they were not previously aware of.

        Of course, the ‘experts’ running blogs which censor anything which threatens their apparent intellectual superiority won’t gain in this way, because experts who behave like arrogant pricks are generally unlikely to create an ambience in which they can teach well, or indeed learn.

      • M,

        how many other laws do climate scientists deal with that say they are only applicable at a fixed temp or at equilibrium, say Stefan-Boltzmann, yet they are used anyway KNOWING that the temp and pressure and flux are continuously changing?? Sorry, don’t want to tax your brain too much. You obviously have much more important things to do than try and educate this ignorant person.

        Now, do you think that every point in the ocean is changing quickly enough 24/7 that there is NEVER a time that the pH actually stays constant for a measurable length of time? Even if it doesn’t, wouldn’t the 1 be a LIMIT of the range??

        It was a nice try though, distracting from the issue that there is no support for an actual Revelle Buffer Factor presented here or in the papers linked, other than many scientists using or referring to it without definition or derivation. I will accept this as your acquiescence that YOU and Pekka do not have this to hand. Maybe you can research it and provide the information?

      • M

        I am humbled.

        Thank you.

      • M wrote “and yes, I do get frustrated at having to debunk stupid myths over and over and over again.”

        I have submitted a comment to Energy and Fuels explaining the error in Prof. Essenhigh’s paper; I did so in the hope of limiting the spread of this particular error, which does neither side of the debate any good. Prof. Essenhigh is right that residence time is about five years, but this is entirely uncontroversial; the IPCC puts the figure at about four years. However, the rise and fall of atmospheric CO2 is governed not by the residence time but by the adjustment time, and hence the conclusion is incorrect. My paper uses a one-box model, essentially identical to that used by Essenhigh, to explain the difference between residence time and adjustment time (amongst other things) and demonstrates that the observations are completely consistent with the generally accepted anthropogenic origin, but not with a natural origin. The paper has been conditionally accepted; I am just working on the corrections at the moment.

        This particular aspect of the carbon cycle is not straightforward and it is perhaps not surprising that this confusion of residence time and adjustment time should occur. I wouldn’t say this was a stupid myth; the solution wasn’t immediately obvious to me before I looked into it. Part of the reason it has persisted is perhaps that it has been deemed too basic to be discussed in detail in the peer-reviewed literature (until now).
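
        The residence/adjustment distinction can be illustrated with a toy one-box calculation (round illustrative numbers, not the parameters of the actual paper): the residence time is the atmospheric stock divided by the gross exchange flux, while the adjustment time is set by how strongly the net sink responds to an excess above equilibrium.

```python
# Toy one-box sketch of residence time vs adjustment time.
# All numbers below are illustrative round values, not fitted to observations.
M_eq    = 750.0   # GtC in the atmosphere at equilibrium (illustrative)
F_gross = 180.0   # GtC/yr gross exchange with ocean + biosphere (illustrative)
k_net   = 0.01    # 1/yr net uptake per GtC of excess (illustrative)

residence_time  = M_eq / F_gross   # ~4 yr: how long an average molecule stays
adjustment_time = 1.0 / k_net      # ~100 yr: decay time of an atmospheric excess

# Decay of a 100 GtC pulse is governed by k_net, not by the gross flux:
excess = 100.0
dt = 1.0
for _ in range(int(adjustment_time)):
    excess += -k_net * excess * dt   # forward-Euler step of dM'/dt = -k*M'

print(round(residence_time, 1), round(adjustment_time), round(excess, 1))
# -> 4.2 100 36.6  (after one adjustment time, ~1/e of the pulse remains)
```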

      • The paper has been conditionally accepted, I am just working on the corrections at the moment.

        Congratulations on your paper!

        I am glad that you and M have used “adjustment time” as the proper name to give to the impulse response settling time. As a part-time researcher, I haven’t run across this before, and didn’t realize that Rodhe defined this in 1979.

        Do you give a single value to the adjustment time or do you give it a range of values?

        This particular aspect of the carbon cycle is not straightforward and it is perhaps not surprising that this confusion of residence time and adjustment time should occur. I wouldn’t say this was a stupid myth; the solution wasn’t immediately obvious to me before I looked into it. Part of the reason it has persisted is perhaps that it has been deemed too basic to be discussed in detail in the peer-reviewed literature (until now).

        This is exactly what I continuously discover. Casual readers see the results being presented and they assume that some unverified software program is spewing nonsense because they don’t get the fundamental explanation. Once you start applying a compartment model (or box model, as you refer to it) to the problem space, the answer becomes obvious.

        Take a look at the comment in this thread I made a couple of days ago concerning my own attempt at a box model for sequestration:
        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-106310
        I bet that it supports your findings, and I did the analysis because I was having a hard time coming up with a fundamental understanding of the fat tail response curve. My only disagreement is that I do think it is straightforward, because it is the same thing I would do to model diffusion of dopants and other low concentration particles in a semiconductor material. Electrical engineers and material scientists consider that a straightforward problem, and this understanding is what enables all computers that we are using today (as we proceed to type away).
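
        The fat tail itself has a simple origin. As a generic textbook sketch (not the box model in the linked comment): a well-mixed box exchanging with a semi-infinite diffusive reservoir has a closed-form impulse response that decays like 1/sqrt(t) at long times, much slower than any single exponential.

```python
# Sketch of why a diffusive sink gives a "fat tail". For a unit pulse in a
# well-mixed box coupled to a semi-infinite diffusive medium, a classic
# closed form for the remaining fraction is
#     f(t) = exp(t/tau) * erfc(sqrt(t/tau)),
# which falls off like 1/sqrt(t) for t >> tau. tau here is a hypothetical
# unit time scale, chosen only for illustration.
import math

def airborne_fraction(t, tau=1.0):
    x = t / tau
    if x > 500.0:                        # avoid exp() overflow; use asymptote
        return 1.0 / math.sqrt(math.pi * x)
    return math.exp(x) * math.erfc(math.sqrt(x))

# Compare with a pure exponential: the diffusive tail dominates at long times.
for t in (1.0, 10.0, 100.0):
    print(t, airborne_fraction(t), math.exp(-t))
```

At t = 100 the exponential is utterly negligible while the diffusive response still retains a few percent of the pulse — the same qualitative behavior as dopant diffusion in a semiconductor.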

      • Mr. Marlowe Johnson, solve this problem, with the IPCC product? Where: (GI+GO)=0

      • That’s easily solved with a simple change of variables:
        Tom = G[arbage]

      • settledscience, We all know that AGW science has shown absolutely no problem changing the variable, to suit themselves. Once again, your crowd has no intention to address the question I put forward: How may we ‘solve this problem, with the IPCC “product”:) which is= 0.’

      • Stupid analogy. It’s not just the difference that is important. Knowing where the money is going and what it is doing provides insight.

        Kermit says, “Knowing where the money is going and what it is doing provides insight.” Sad to know that there is nothing left & that’s not right. What difference does it make now? That’s the question; we don’t know. What a problem a day makes…

        http://www.kitco.com/ind/willie/aug252011.html

        Hope, this will help.

      • I’ll waste a little bit of time on some basic chemistry here. Hopefully nobody else raised these points.

        The “residence time” is a pretty nebulous concept and doesn’t have much to do with the chemistry of the CO2 in and on the earth. The climate concept is that humans have injected (and are injecting) a significantly large amount of CO2 into the atmosphere over a relatively short period of time. This excess CO2 does react and does not all stay in the atmosphere. So the concept of residence time would be “how long will it take for all the human-induced CO2 to be removed from the atmosphere and the level to go back to the 280 ppm or so equilibrium”.

        The first question is “has CO2 EVER been in equilibrium in the atmosphere?” The paleo records all indicate that it has not been. The level has varied widely in different eras, generally lagging behind changes in temperature. Equilibrium chemistry is a very chancy thing. The best way to determine equilibrium is by determining the free energies of the reactions involved and the amounts of reactants, and calculating the equilibrium values in a phase diagram. This is totally impossible to do because we do not know the reactions involved or their rates – simple solution/dissolution mass flow in the oceans, uptake of CO2 as carbonate into plants and animals, rate of release from decay, rate of carbon accumulation in sediments, rate at which sediments are being subducted at the plate boundaries, etc.

        The second question is “who cares?” The amount of CO2 in the atmosphere fluctuates quite a bit on many time scales. Any notion of a residence time for CO2 in the atmosphere will depend almost entirely on the assumptions used to calculate it. Given the small numbers involved (CO2 in the atmosphere << CO2 in the oceans << CO2 and C in mantle and surface rock) it is only a small blip in the noise. Part of the “who cares” is that the climate doesn’t distinguish between molecules or atoms, only the concentrations involved, and to a tiny extent the isotopes involved. Once it is there, it goes where it goes and does what it does.

      • George M

        Wouldn’t your 280 ppm equilibrium be more properly 230+/-50 ppm steady state, seasonally and geographically normalized, if we’re talking paleo (to six sigma, and then allowing for uncertainty)?

        Though I doubt this much alters your argument.. whatever it is.

      • Hal – Since it will become clear to you from the various responses below that the actual decline of excess CO2 toward baseline occurs over centuries, rather than 5 years, with a long tail requiring many thousands of years, I would be interested in your response to the original question – how much should that concern us?

      • can you please not sidetrack the discussion before it starts

      • Stephen – I don’t think it is a sidetrack to refer back to Hal’s email that served as the reason for this entire post. The final paragraph of his email captured the essence of his point – if the residence time is very short (5 years), there is little reason to be concerned. He never answered whether there was a reason for concern if the residence time is much longer, which it is. That is a legitimate question to ask in my view.

      • Correct. Your comment is clearly on topic, not a sidetrack.

      • By “long tail” you are referring to a probability distribution, aren’t you?

        Some subsequent comments make reference to that phrase without seeming to understand it. Or maybe you’re using it differently than I expect. Please clarify.

      • That is nothing but unscientific hand-wavy claptrap.

        I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet.

        That does not address the perfectly clear, direct question you were asked, nor in your rambling, off-topic comment, did you ever directly, responsively answer that perfectly clear, direct question you were asked.

        Fred Moolten asked you:

        At what duration for the residence time of excess CO2 would you perceive that interval to be long enough to be of concern? How many years specifically for an interval representing an average residence time for the excess?

        “I don’t know” would have been a perfectly respectable, honest answer, Harold.

        Faking it is not respectable.

        In fact, you even stated categorically that you don’t know:

        I don’t know what the optimum level of CO2 in the atmosphere would be…

        But you should have been honest and just stopped there instead of trying to pretend you know something relevant to the subject, by babbling on about meaningless factoids.

        I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about “health related effects.”

        You mean “poisoning.” The level of CO2 that’s considered poisonous is 100% irrelevant to informed discussion of the CO2 greenhouse effect. It’s just a means of faking knowledge, which you don’t have.

        I don’t know what the optimum level of CO2 in the atmosphere would be…

        No, you don’t, nor do you know anything relevant to making informed estimates of the probabilities of longer or shorter residence times.

        … all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.

        Oh, is it really “somewhat likely” Harold? Based on what scientific expertise can you make that assertion? None. Can you quantify “somewhat likely?” No, you cannot. You don’t even know what quantities to compute, so you really have nothing of value to contribute — except for name-dropping, which obviously serves Curry’s interest in increasing her notoriety.

        This literature is reviewed and cited by my former NASA colleague, Apollo 17 astronaut, and former US Senator, Dr. Harrison “Jack” Schmitt …

        Funny that “scientist” is nowhere on his résumé! Less funny is that neither you nor Curry care that he has no scientific credentials whatsoever.

        That is, if we can answer the question of the current thread closer to the 5 year residence time mark, then we can all agree we are not in a triage situation regarding control of CO2 emissions and that we have more time to work the problem of climate change with perhaps different decisions and action plans.

        You just advocated fudging the science (“closer to the 5 year residence time mark”) in favor of a particular policy outcome (“different decisions and action plans”), the exact thing that you climate science deniers are always falsely accusing all the legitimate scientists of doing. I’m very sure that from his political days, your old pal “Jack” can explain to you what “unintentional truth-telling” means. :-)

        Amateur.

      • Harold H Doiron

        I believe there are clear, proven and well-known beneficial effects of CO2 in the atmosphere with regards to increased rates of plant growth, better crop yields, etc. needed to support the growing population of the planet. I don’t know what a harmful level of CO2 in the atmosphere would be. I have read that US submarines are allowed to have 8,000 ppm CO2 before there is any concern about health related effects (more than 20 times current levels). I don’t know what the optimum level of CO2 in the atmosphere would be, all things being considered, but it is somewhat likely that the level would be higher than it is now, and I have factored this into my thinking about the current situation.

        That’s quite the credo. Been worshipping at the temple of Idsos, have we?

        I’ve considered your opinion for a week, as I wanted to give it careful thought.

        I see Chief Hydrologist has punctured and lampooned your faith, and he needs no help from me.

        However.

        For on the scale (we have good cause to believe by the paleo record, extrapolations, and SWAG) of ten million years, the CO2 level of the atmosphere has been ergodically steady at 230 ppm +/- 50 ppm. We’re over 44% above that mean now, which appears to be unprecedented on the span of a geological epoch.

        Certainly the ice core record of the past 800,000 years indicates this range to a high degree of certainty, as confirmed by the stomata count of plant fossils and myriad other evidences.

        Where we substitute our own judgement of ‘best’ for what has been the dominant mode of a principal component of our complex, dynamical, spatiotemporal world-spanning climate for a span of time an order of magnitude longer than the existence of our species, we exhibit what can only be called arrogance.

        Where we do it based on spinmeistering and public relations, we display profound folly.

        CO2 at levels above 200 ppm up to about 2500 ppm functions as an analog to plant hormones, not as a ‘nutrient’ or fertilizer.

        It’s not so very different to plants from steroids in professional athletes. It alters primary and secondary sexual characteristics, modifies structures, and results in additional mass in some parts of the plants.

        Over seasons and generations, plants adapt to higher CO2 levels in some ways, gradually tapering off in the ‘benefits’ realized, but retaining negative effects longer than they keep the benefits.

        That’s why plants as a group flourished quite as well at 180 ppm as at 280 ppm and at 380 ppm.

        The purported benefits of CO2 elevation have only one medium term experiment in the field that I know of, and although it demonstrates selective benefits in field conditions in terms of favoring some species over others, it doesn’t match the levels seen in hothouse conditions with unlimited nutrient and ideal growing conditions.

        In short, there is nothing clear, proven, or — if well-known — especially correct in mad schemes to profit from higher CO2.

        It won’t end world hunger, as Lord Lawson suggested in his book based on nothing more than speculation and wishful thinking.

        It’s a silly opinion. You’re welcome to hold it, but please don’t claim it’s clear or proven.

        Please factor the Uncertainty of your beliefs into your thinking.

        Because to my thinking, a Perturbation on unprecedented scales in a Chaotic system will tend to disturb ergodicity in unpredictable ways, which increases the cost of climate-related Risks to me.

        Those costs are real, and translate into money taken from me.

        And I don’t recall consenting to your CO2-worship picking my pocket.

  5. And let us not overlook the role of freshwater bodies as carbon sinks.
    This is apparently much larger than previously recognized by the climate science community.
    I would suggest that before we start diverting the topic, however nicely, into predictions about “X”, we should define the behavior of CO2 in the atmosphere more clearly.

  6. Tim Ball: “Pre-industrial levels were 50 ppm higher than those used in the IPCC computer models. Models also incorrectly assume uniform atmospheric distribution and virtually no variability from year to year. Beck found, “Since 1812, the CO2 concentration in northern hemispheric air has fluctuated exhibiting three high level maxima around 1825, 1857 and 1942 the latter showing more than 400 ppm.” Here is a plot from Beck comparing 19th century readings with ice core and Mauna Loa data…

    “Elimination of data occurs with the Mauna Loa readings, which can vary up to 600 ppm in the course of a day. Beck explains how Charles Keeling established the Mauna Loa readings by using the lowest readings of the afternoon. He ignored natural sources, a practice that continues. Beck presumes Keeling decided to avoid these low level natural sources by establishing the station at 4000 meters up the volcano. As Beck notes “Mauna Loa does not represent the typical atmospheric CO2 on different global locations but is typical only for this volcano at a maritime location in about 4000 m altitude at that latitude.” (Beck, 2008, “50 Years of Continuous Measurement of CO2 on Mauna Loa” Energy and Environment, Vol 19, No.7.) Keeling’s son continues to operate the Mauna Loa facility and as Beck notes, “owns the global monopoly of calibration of all CO2 measurements.” Since Keeling is a co-author of the IPCC reports they accept Mauna Loa without question.”

    (Time to Revisit Falsified Science of CO2, December 28, 2009)

    • Wagathon, the Beck paper is a joke. What they measure at Mauna Loa is a consistent record, same place year on year, clear of urban influences. Beck’s results are from multiple records, often close to industrial CO2 sources, using multiple analytical methods. Notice how the variability suddenly falls when more precise methods are adopted.

      • Consistent?

        From May 15th to 21st CO2 went up 1.84ppm

        From July 17th to 23rd CO2 went down 1.34ppm

        ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_weekly_mlo.txt

        There was even a 1.93ppm jump in 7 days recently.

        No consistency there.

      • wow that’s an entirely new class of retarded argument

      • Your lack of curiosity never surprises me.

        If CO2 goes up 1ppm in a year it is a sign of man-made catastrophic climate change.

        If it goes up almost 2 ppm in 7 days it is a sign of the natural in and out breathing of the earth …

        “The sawtooth pattern represents the natural carbon cycle. Every summer in the northern hemisphere, grass grows, leaves sprout, and plants flower. These natural processes draw CO2 out of the air. During the northern winters, plants wither and rot, releasing their CO2 back into the air. This sawtooth pattern shows the planet breathing.”

        http://www.terrapass.com/blog/posts/science-corner

        The above explanation is a joke when you look at the weekly data.

      • Bruce,

        You miss the point entirely. While there may be short term fluctuations of the same order as the yearly increase, those fluctuations do not scale over time. So while you may indeed get an average reading on one day which is equal to the lowest reading from a year or so before, you are not going to get one which is equal to the lowest reading from a decade earlier, still less from several decades earlier, and it is this which reveals the trend, clearly and unmistakeably.
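
        The point is easy to check with a synthetic series (illustrative numbers, not real Mauna Loa data): a modest trend plus a seasonal cycle lets a seasonal dip undercut the previous year’s peak, but never a peak from a decade earlier.

```python
# Toy series: 1.9 ppm/yr trend plus a 3 ppm-amplitude seasonal cycle.
# Illustrative parameters only, not fitted to any real station record.
import math

def co2_ppm(week):
    year = week / 52.0
    return 350.0 + 1.9 * year + 3.0 * math.sin(2 * math.pi * year)

year0 = [co2_ppm(w) for w in range(0, 52)]     # first year of weekly values
year1 = [co2_ppm(w) for w in range(52, 104)]   # the following year
year9 = [co2_ppm(w) for w in range(468, 520)]  # a decade later

print(min(year1) < max(year0))  # True: a seasonal dip can undercut last year's peak
print(min(year9) > max(year0))  # True: ten years on, the trend dominates
```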

      • Your point is to ignore the explanation of the sawtooth component of the Mauna Loa graph because it is a joke.

        Why are there short-term fluctuations? Science would explain them. Propaganda explains them away with joke “causes”.

        If the “natural carbon cycle” can be 2ppm over 7 days, why not 2ppm over 365 days?

        CO2 follows temperature historically.

      • Chief Hydrologist

        wow that’s not an entirely new class of numbnut argument

      • Bruce,

        Take those variations, and divide them by the average atmospheric concentration over the relevant period, and tell me what sort of variation you get in percentage terms. It is going to be well under 1% in all cases.

        This would seem more than sufficiently consistent for the purposes it is being used for.

      • OK, here is the graphical representation of that data.
        http://img197.imageshack.us/img197/9555/co2.gif

        Read again what jimbo said above: “… tell me what sort of variation you get in percentage terms ..”

      • “tell me what sort of variation you get in percentage terms.”

        Ok.

        AGW = .5% change in ppm per 365 days

        Jubany = .5% change in CO2 per 1 day

        But more importantly, this “sawtooth” pattern explanation seems awfully bogus since daily changes can be 2ppm.

        ““The sawtooth pattern represents the natural carbon cycle. Every summer in the northern hemisphere, grass grows, leaves sprout, and plants flower. These natural processes draw CO2 out of the air. During the northern winters, plants wither and rot, releasing their CO2 back into the air. This sawtooth pattern shows the planet breathing.”

        I mean … really! If you only looked at the yearly graph you might be naive to believe such an explanation.

      • Bruce, Why can you not just look at the data impartially? If that curve charted my pay rate in $/hour based on working commissions, I would not be complaining about the fact that there was a ripple and some noise in the overall upward slope.

        You may be having a problem with understanding signal processing mathematics, and in particular its digital counterpart. What oscillations you see, and their amplitude, are really a matter of the impulse response caused by a forcing function. Depending on the frequency response, daily fluctuations can be filtered out, yearly fluctuations filtered out less, and the longest periods filtered out the least. The fact that samples are taken at the same time each day points out the stark reality of the Nyquist criterion.

        The Nyquist criterion states that digital sampling at the same rate as a naturally occurring frequency will fold that value over to look like a constant value. You actually have to sample at least twice per cycle of the natural frequency – i.e., at the Nyquist rate – or measure the numbers twice per day, to see the daily fluctuations. We electrical engineers had this burned into our skulls during school, so we really see nothing weird about the data. I am not sure what your background is.
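
        The aliasing point is easy to demonstrate with a toy signal (not actual station data): a pure 24-hour cycle sampled once per day collapses to a constant, while two samples per day recover its full swing.

```python
# Aliasing demo: sample a pure 24-hour cycle at different rates.
import math

def daily_cycle(t_hours):
    return math.cos(2 * math.pi * t_hours / 24.0)

once_per_day  = [daily_cycle(24.0 * n) for n in range(10)]  # 1 sample/day
twice_per_day = [daily_cycle(12.0 * n) for n in range(10)]  # 2 samples/day (Nyquist rate)

print(max(once_per_day) - min(once_per_day))    # ~0: the cycle is invisible
print(max(twice_per_day) - min(twice_per_day))  # ~2: the full swing reappears
```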

      • If your pay changed daily, and Human Resources explained it was because of yearly variations in the business cycle you would be very, very suspicious.

      • If your pay changed daily, and Human Resources explained it was because of yearly variations in the business cycle you would be very, very suspicious.

        Bruce, Do you have a problem with reading comprehension too? Note that I crafted my analogy to state that I worked based on commissions. Do I have to explain to you that commissions are susceptible to random fluctuations?

      • Paul B – perhaps the Beck paper is a joke – I haven’t read it. But Wagathon has touched on a real difficulty with the Mauna Loa data. The site is not ideal for assessing baseline CO2 in the atmosphere as it lies on the flank of a volcano which itself emits large and uncontrolled amounts of CO2. The only way I can think of to allow for such sporadic local contamination is to use the minimum values recorded within each time period, presumably daily, and dump the rest. Perhaps the resulting data are a reliable and meaningful guide to global CO2 levels. Indeed I’m fairly happy to accept that they are, although I would prefer to see a demonstration of this.

      • This oft-repeated canard about Mauna Loa being an unsuitable site for CO2 measurements is dealt with here:
        http://www.skepticalscience.com/mauna-loa-volcanoco2-measurements.htm

      • Could you check your link FiveString. I could not get it to work. Thanks.

      • The link is missing a dash between volcano and co2. Perhaps WordPress software removed it. It should be http://www.skepticalscience.com/mauna-loa-volcano-co2-measurements.htm

      • Wish this site had an ‘edit’ function… Thanks for the correction Pekka.

      • “It is true that volcanoes blow out CO2 from time to time and that this can interfere with the readings. Most of the time, though, the prevailing winds blow the volcanic gasses away from the observatory. But when the winds do sometimes blow from active vents towards the observatory, the influence from the volcano is obvious on the normally consistent records and any dubious readings can be easily spotted and edited out”

        They edit the data based on supposition? Not good.

        When you said the “canard” “is dealt with” I thought you might be talking scientifically not craptastically.

      • Now that your whining that by your speculation they might use some seemingly spurious measurements has been debunked, you’ve started whining that they should use known spurious measurements.

        You do know that 1984 was not intended as an instruction manual, don’t you?

      • CO2 from underwater volcanoes

        http://nwrota2009.blogspot.com/2009/04/getting-gas.html

        Estimated number of underwater volcanoes? 3 million+

      • Where?

        If this was anything but Plimer’s pipe dream we would see interesting profiles of various stuff like HCO3- in the oceans. We don’t.

  7. Arfur Bryant

    Hal,

    Let’s have a closer look at the 100-year lag idea…
    .
    The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750:
    .
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-human-and.html
    .
    [“Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750.”]
    .
    So, by 1850 (100 years later) we should have been feeling the ‘full effect’ of the initial input. Thereafter, every year would see an increase in the ‘full effect’ of CO2. This would, logically, lead to an acceleration of the effect of CO2 and, according to the cAGW theory, an acceleration in global warming. So now we can look at the temperature data since 1850:
    .
    http://www.woodfortrees.org/plot/hadcrut3gl/from:1850/to:2011
    .
    The question is: Do you think that shows an acceleration? If you apply enough smoothing you may do so but then you are adjusting how you interpret the data. The raw data graph shows no warming at all in the last 13 years (if anything a cooling) which itself would cast serious doubt on any interpretation of ‘acceleration’. In addition, any reason given for the cooling – such as ‘natural variation’ – has to consider that any ‘natural variation’ has to be powerful enough to combat not only the initial CO2 effect, but also the acceleration in the initial effect. In that case, natural factors easily outweigh any CO2 effect.
    .
    Personally, I think a more important question is: What is the contribution made by CO2 to the Greenhouse Effect? Until we can answer that question correctly, any theory/hypothesis/assertion regarding the supposed warming effect of CO2 is flawed as being based on an assumption.
    .
    I agree with hunter that the behaviour of CO2 needs to be ascertained more clearly.
    .
    Thanks for an interesting post.

    • Arfur – There is no reason necessarily to expect an acceleration simply because CO2 is increasing. That is because any warming reduces the radiative imbalance caused by the increased CO2 and thereby reduces the tendency for further warming. Depending on the rate of CO2 rise, we could see a steady warming, an acceleration, or warming at a reduced rate, although as long as CO2 is rising, the temperature trend averaged over multiple decades can be expected to be upward. Over shorter intervals, the effects of other, short-term climate drivers will modify the long term trend, as can be seen by examining the behavior of global temperature anomalies over the past 100 years, with their fluctuating ups and downs overlying an upward trend.

      The relevance of the residence time function is that it tells us how long a given CO2 concentration will continue to exert warming effects if the planet is out of balance. In general, that will be centuries.
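Fred’s point – that rising CO2 need not produce accelerating temperatures, because warming shrinks the radiative imbalance driving it – can be sketched with a toy zero-dimensional energy-balance model. Everything here is an illustrative assumption (the feedback parameter `lambda_`, the heat capacity `heat_cap`, and the 1%/yr CO2 scenario are made up for the sketch); only the logarithmic forcing formula is standard.

```python
# Toy zero-dimensional energy-balance sketch (illustrative only; lambda_,
# heat_cap and the 1%/yr CO2 path are assumptions, not values from the
# comment above).
import math

lambda_ = 1.25   # climate feedback parameter, W/m^2 per K (assumed)
heat_cap = 8.0   # effective heat capacity, W*yr/m^2 per K (assumed)
c0 = 280.0       # pre-industrial CO2, ppm

def forcing(c):
    """Standard logarithmic CO2 forcing, ~3.7 W/m^2 per doubling."""
    return 5.35 * math.log(c / c0)

temp = 0.0
for year in range(200):
    c = c0 * (1.01 ** year)          # CO2 rising 1%/yr (assumed scenario)
    imbalance = forcing(c) - lambda_ * temp  # warming erodes the imbalance
    temp += imbalance / heat_cap     # Euler step, dt = 1 yr

print(round(temp, 2))
```

Because forcing is logarithmic, exponential CO2 growth gives a roughly linear forcing ramp, and after an initial transient the modelled temperature rises roughly linearly rather than accelerating, which is Fred’s point about the trajectory depending on the rate of CO2 rise.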

      • Arfur Bryant

        Fred,

        Not according to the IPCC…
        http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter3.pdf
        See FAQ 3.1 Fig 1 and note:
        [“Note that for shorter recent periods, the slope is greater, indicating accelerated warming.”]
        .
        So it’s not just the CO2 increase, but the slight acceleration in the increase that supports my argument. If the increased CO2 leads to a radiative imbalance which leads to warming which leads to a reduction in the radiative imbalance which leads to a reduction in warming – as you state – then what is the problem? The trend you speak of is 0.06 C per decade since 1850 and is not increasing from 1850! That size of trend is not even mildly threatening, let alone ‘catastrophic’ (IPCC term, not mine)!
        .
        CO2 may ‘exert warming effects’ but these effects are not significant, as demonstrated by observed data. I repeat, until you know what the CO2 effect is, you are basing your argument on an assumption.
        .
        Off to bed now, will have to leave any further reply until tomorrow…
        Regards,

      • The recent temperature record is consistent with continued warming of about 0.15C/decade. Which in turn is consistent with AGW.

        That’s all there is to it. And where precisely has the IPCC used the specific phrase “catastrophic”?

      • So if the temperature falls 4C to the depths of the LIA, that’s natural, but if it recovers by 0.8C, it is man’s fault?

      • Arfur Bryant

        And using any other start point other than the IPCC chosen start date of ‘accurate data’, ie, 1850, is misleading. What is your definition of ‘recent’? How about this one…?
        http://www.woodfortrees.org/plot/hadcrut3gl/from:1998/to:2011/plot/hadcrut3gl/from:1998/to:2100/trend
        .
        That’s ‘recent’, so where is your 0.15 C per decade trend there? If you move the start date to the right of 1850, you can get all sorts of trends but not one of them is the ‘overall trend’. So I just stick to the overall trend. That way, if the temperature starts to increase in an accelerative fashion, the overall trend WILL increase. In fact, the trend today is much lower than it was in 1880. Give me a call back when you can find the trend increasing above that one!
        .
        Oh, and the IPCC used the term ‘catastrophic’ here:
        http://www.ipcc.ch/publications_and_data/ar4/wg3/en/ch2s2-2-4.html

      • The 0.15C/decade trend is still there even in HadCRUT past 1998, it’s just masked by variation (solar cycle and ENSO). See: http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/

      • trend is still there
        it’s just masked

        Lol. Wot?

        how-fast-is-earth-warming?

        It isn’t. And hasn’t been since 2003.

      • There have been cooling impacts since about 2003 (2003 was solar max! there’s been a low solar minimum recently!). Plus 2003-2007 was pretty much el nino after el nino whereas since 2007 there have been 2 quite strong La Ninas.

        The earth is still warming, it’s just that since 2003 the above-mentioned cooling impacts have suppressed it.

      • Arfur Bryant

        lolwot,

        Faith is a truly powerful debating tool. If you say…
        [“There have been cooling impacts since about 2003 (2003 was solar max! there’s been a low solar minimum recently!). Plus 2003-2007 was pretty much el nino after el nino whereas since 2007 there have been 2 quite strong La Ninas.”]
        …Does that mean that there were no La Nina events between 1910 and 1940 or 1970 and 1998? What caused those warmings? You want to use CO2 to explain a warming but that it is overcome by natural forcings during a cooling period. That denies the likelihood that natural forcings can work both ways.

      • Arfur, any period of time which starts with El Ninos and ends with La Ninas (eg 2005 to 2011) will have a cooling bias due to ENSO that has nothing to do with the longterm warming trend.

        I mean what you are doing is little more sophisticated than saying temperature dropped from 1998 to 2000 and claiming this contradicts global warming. It does not. 1998 to 2000 was El NIno to La Nina. That’s short term noise able to overwhelm the longterm trend.

        2005 to 2011 has about a 0.1C cooling impact from falling ENSO. It also has a cooling impact from the solar cycle.

        In short that’s why the period 2005-present is kind of flat. It’s not because the longterm warming has stopped, it’s because ENSO and the solar cycle happen to line up over that period to cancel it out.

      • “any period of time which starts with El Ninos and ends with La Ninas (eg 2005 to 2011) will have a cooling bias due to ENSO that has nothing to do with the longterm warming trend.”

        Excellent point, so how much did the 30 year run of positive PDO starting with more la ninas and ending with more el ninos 1975-1998 contribute to the longterm warming trend?

      • About 0.1C warming

      • Arfur Bryant

        So, lolwot, what caused the other 0.5 C warming in that period? You are condensing your ENSO argument into a short period and appear unwilling to consider that the same natural causes you claim work against the cAGW theory in the short term can also exist during the warming periods that you appear to attribute to CO2. If, as you state above, ENSO only contributed 0.1C in that period, are you suggesting that CO2 is powerful enough to contribute another 0.5 deg C?
        .
        If that is your argument, what caused the warming from 1910 to 1945, and what caused the subsequent cooling?

      • “If, as you state above, ENSO only contribute 0.1C in that period, are you suggesting that CO2 is powerful enough to contribute another 0.5 deg C?”

        It maybe even more powerful than that. Without aerosol emissions the amount of warming would have been greater than 0.5C
        .
        “If that is your argument, what caused the warming from 1910 to 1945, and what caused the subseuquent cooling?”

        Solar activity increased in the early 20th century. I think that plays a significant part in it. Between that and the reliability of the records back then, I am not sure there is a problem there. Global temperature went just about flat after the 1940s until it started rising again during the 70s.

      • lolwot,

        If the records were so unreliable back then, how do you know the increases since 1970 are unusual? The HadCRUT dataset goes back to 1850 and clearly shows three distinct warming periods. You want to discount the first two and concentrate on the latter one. Then you want to discount the levelling/cooling after the latter one. How many goalposts do the warmists want to move?

      • http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf

        “El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008) (Fig. 2.8a). The trend in the ENSO-related component for 1999–2008 is +0.08±0.07°C decade–1, fully accounting for the overall observed trend. The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00°±0.05°C decade–1, implying much greater disagreement with anticipated global temperature rise.”

      • Arfur – The IPCC site you linked to reinforces the points I made above, but if you have a question about a specific item, I’ll try to explain the reasoning behind it.

      • Arfur Bryant

        Fred,

        You could start with the bit that explains why the slight acceleration in CO2 can be construed as both leading to a reduction in warming and an acceleration in warming depending on how you feel at the time.
        .
        I repeat, the overall trend of 0.06 C per decade is NOT increasing. What is the problem?

      • How does a line “increase”? If you stick a line of best fit through date 1850 onwards then of course the line is going to be straight. That is what a line is.

      • the overall trend of 0.06 C per decade is NOT increasing

        That’s t-r-e-n-d Lolwot, not l-i-n-e

      • Of course it’s not increasing. It’s a straight line you’ve fit through all the data. What do you expect when you apply a line of best fit? That the end part will curve upwards?

        The fact is that recent warming is greater than the “overall trend of 0.06C” anyway.

      • What recent warming?

        Oh, you mean that warming, last millennium.

        :)

      • If you correct for ENSO and the solar cycle the warming has continued with no pause. You should correct for these things as they are noise not signal.

      • Arfur Bryant

        Ok lolwot, I’ll explain.

        Draw a hockey-stick curve (one that shows an acceleration in warming). Draw a straight line from the origin of the curve to a point a short way along the x-axis. Then draw successive lines, each starting from the origin, to successive points further along the x-axis. Each successive ‘line’ will be showing an increasing ‘trend’ because each successive line will be steeper than the last. Got it?
        .
        In terms of global temperature, IF the radiative forcing theory was correct, then the overall trend (drawn from the 1850 origin) would show a relatively consistent INCREASE.
        .
        It doesn’t. It’s really that simple. The observed data does NOT support the theory. The overall trend from 1850 to 2011 is 0.06 C per decade. The overall trend in 1998 was 0.067 C per decade. The overall trend in 1944 (another peak) was 0.06 C per decade and the overall trend in 1888 (the first peak) was 0.17 C per decade!
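Arfur’s “trend from a fixed origin” procedure can be sketched in a few lines: fit a slope from the first data point to successively later endpoints and check whether those slopes grow. The two series used here are synthetic stand-ins (an assumption for illustration), not HadCRUT data.

```python
# Sketch of the "trend from a fixed origin" test described above. The
# series are synthetic (assumed), not real temperature records.
def origin_trends(series):
    """Slope per step from index 0 to each later endpoint (rise over run)."""
    return [(series[i] - series[0]) / i for i in range(1, len(series))]

accelerating = [0.001 * t * t for t in range(100)]   # quadratic: speeds up
linear = [0.06 * t for t in range(100)]              # constant rate

acc_trends = origin_trends(accelerating)
lin_trends = origin_trends(linear)

print(acc_trends[-1] > acc_trends[0])            # True: origin trend keeps rising
print(max(lin_trends) - min(lin_trends) < 1e-9)  # True: origin trend stays flat
```

For a genuinely accelerating series the origin trend rises with every new endpoint; for a linear series it stays constant, which is the distinction the comment is drawing.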

      • Where you are wrong is that climate models don’t show your acceleration. Look at 20th century hindcasts. They simply do not show the acceleration you claim and yet I am 100% sure they show AGW.

      • Arfur Bryant

        Fine lolwot, you believe in your models and I’ll believe in observed data.

      • Fine lolwot, you believe in your models and I’ll believe in observed data.

        Humans have an intuitive sense of the way a model works. Take the place of an outfielder who is trying to catch a fly-ball. He will instantaneously model the trajectory based on minimal information. If the fielder followed your advice, he would wait for the baseball to land on the grass and then run to it. Lot of good that will do.

        And you can laugh at this analogy but that’s the way we work. The only difference is that the practical human will innately use a model based on heuristics and acquired knowledge, whereas the scientist will use math and the corpus of scientific evidence.

      • Arfur Bryant

        You’re kidding me, right? How many fly-balls would the catcher have caught if he’d used the IPCC models instead of his own models based on his own historical evidence (ie. practice)?

      • You’re kidding me, right? How many fly-balls would the catcher have caught if he’d used the IPCC models instead of his own models based on his own historical evidence (ie. practice)?

        Well they wouldn’t catch any if they used Postma’s or the skydragon’s models.

      • Arfur Bryant

        Precisely!

        Which is why I don’t believe either set :)

      • They would be looking at where the average baseball lands and then they would try to catch the average every time. They would also overestimate the hitter’s ability due to the amount of CO2 in the air

      • I know why they call them deniers. At every chance they try to deny another person from actively thinking and advancing an argument. The majority (with a couple of exceptions from the truly skeptical POV) will never actually help you and offer constructive criticism. Instead they just stomp around.

      • Arfur Bryant

        WHT,
        .
        [” The majority (with a couple of exceptions from the truly skeptical POV) will never actually help you and offer constructive criticism. Instead they just stomp around.”]
        .
        If that is directed at me, then I beg to disagree. I have engaged with several warmists on this thread alone and I think I have done so in a respectful and reasonable manner. I will be happy to discuss CO2 (the thread subject) with you if you so wish.
        .
        How do you think it is possible for a trace gas at less than 0.04% concentration to significantly affect global temperature?
        .
        If you don’t wish to discuss this basic premise of the cAGW debate, just let me know. I will assure you of my respectfulness as long as you do the same.
        .
        Regards,

        Arfur

      • Arfur, so if the 2000’s was 0.15 C warmer than the 1990’s and the previous average was only 0.06 C, you don’t count that as an acceleration? Could you clarify your definition of acceleration?

      • Arfur Bryant

        Jim D,

        You are averaging out in decades? Yes, the 2000s were averagely warmer. But cAGW was sold to Joe Public as an impending catastrophe on the basis of not-averaged data. If the cAGW theory is correct, the MBH98 curve would still be increasing today. The FACT that the temperature has not increased above 1998 by itself effectively disproves the theory.
        .
        You can hide all sorts of significant data if you smooth out enough. Look here:
        http://www.woodfortrees.org/plot/hadcrut3gl/from:1990/to:2010
        .
        Do you see an acceleration between the 90s and the 00s? Imagine you were a climber who walked from the start of the graph to the finish. You would have had to climb to the highest peak in 1998 but you would have spent a longer time at an averagely greater height in the 00s. By your argument of using decades, the climber would say he was ‘higher’ in the 00s, whereas history will show that he climbed the ‘highest’ in 1998.

      • We have just had the warmest decade by 0.15 degrees. Why would that not be a sign of warming faster than your average 0.06 degrees per decade? It is an acceleration by any definition, and the projections have it accelerating further to 0.3 or more degrees per decade if the CO2 increase continues to accelerate.

      • Arfur Bryant

        Jim D,
        .
        Please read what I wrote.
        .
        Or try this… If the temperatures stay flat for the next two hundred decades (or more), you will still be able to say that each decade ‘is the (equal) highest decade’! Unfortunately, there will have been no temperature increase and no acceleration. The climber will be walking across a very large, flat and high plateau…
        .
        Of course, if the temperature increases, we may reach a point where the overall trend increases above what it is today. That will show an increased trend but not necessarily an acceleration. For an acceleration, each successive overall trend has to be higher than the previous.
        .
        Using short-term, intermediate trend lines (as the IPCC did), is irrelevant, as each short-term trend can change greatly. Only the overall trend counts.

      • I would probably try to account for volcanoes and ENSO.

        http://www.drroyspencer.com/wp-content/uploads/UAH_LT_current.gif

      • Arfur, my advice is to not look at anything less than a decade average when talking about climate. You can easily get confused by the up and down fluctuations of internal and short-term solar variability that are meaningless as they cancel out on the climate scale.

      • Arfur Bryant

        And my advice to you is not to believe in fairies, little green men from Mars and the genuinely ridiculous notion that a bunch of trace gasses existing at a combined total of less than 0.04% has the potential to significantly affect the global temperature.
        .
        It’s a shame that no-one on the MBH98 team mentioned that the data should have had a ten-year smoothing when they sold the ‘rapid and accelerating’ idea to the politicians. Maybe a caveat that the next thirteen years would be likely to show no further warming would have given them a little more credibility?

      • I don’t understand the obsession with MBH98 here. That was in the AR3 report ten years ago now, and since then we have had an AR4 report in 2007 that overrides their time series, but anti-AGW people seem to cling to AR3 as being easier to criticize than AR4. Can you update your criticisms to AR4 at least, or don’t you have any?

        “the genuinely ridiculous notion that a bunch of trace gasses existing at a combined total of less than 0.04% has the potential to significantly affect the global temperature”

        A 3.7 Wm-2 forcing per doubling is not insignificant. And CO2 levels being so low just makes it a hell of a lot easier to double it.

        When so many of your number so often make these ridiculously garbage arguments no wonder climate “skeptics” have no credibility outside their own community.
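The “3.7 Wm-2 per doubling” figure quoted above can be checked against the widely used logarithmic approximation F = 5.35 ln(C/C0). The function name `co2_forcing` is just illustrative; only the formula itself is standard.

```python
# Check the "3.7 W/m^2 per doubling" figure using the standard logarithmic
# approximation F = 5.35 * ln(C/C0). Function name is illustrative.
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing in W/m^2 for CO2 concentration c (ppm)."""
    return 5.35 * math.log(c / c0)

per_doubling = co2_forcing(560.0, 280.0)   # one doubling from 280 ppm
print(round(per_doubling, 2))  # → 3.71
```

Because the forcing is logarithmic, each successive doubling (280→560, 560→1120, …) adds the same ~3.7 Wm-2, which is why the low absolute concentration does not by itself make the forcing small.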

      • Jim D, it is one thing when people can’t agree on the facts. It is quite another when they can’t agree on the logic by which they reason about the facts. It is obvious that Arfur uses different rules of reasoning from you. As long as that remains the case I predict no resolution of your differences.

      • lolwot and Vaughan,

        The reference to MBH98 was solely for the purpose of arguing against your use of ten-year smoothing. There was no suggestion of ten-year smoothing when the graph was used to ‘sell’ cAGW to the public. There was no suggestion of ENSO cycles, there was just hype. To further your point of smoothing, it is interesting to note that warmists are now using the ‘smoothing argument’ to avoid the lack of warming since 1998. I wonder, if there is not further warming for the next eight years, whether you will insist on the use of a twenty-year smoothing?
        .
        As to my logic and the effectiveness of the radiative forcing theory, let’s have a closer look, shall we?
        .
        [“A 3.7wm-2 forcing per doubling is not insignificant. And CO2 levels being so low just makes it a hell of a lot easier to double it.”]
        .
        Ok, so first of all, is Wm-2 a unit of heat? No. Is radiation the same as heat? No. Is the subject we are discussing called cAGW or cAGR? Well, it’s cAGW. Ok, so unless you can prove that the 3.7 Wm-2 figure is translated into some units of heat, and quantify it, your point is a pathetic attempt to whitewash the real problem. So, go on, tell me how much HEAT is attributed to CO2. I’ll ask again the question that NO warmist wants to discuss – what is the contribution of CO2 to the Greenhouse Effect?
        .
        Instead of Vaughan chipping in with his usual and puerile use of snark, why don’t you ask yourselves why you don’t want to answer this simple question? What is it you’re afraid of? While you’re at it, maybe you could answer this question, since you seem to like the use of radiation so much – what was the Wm-2 figure in 1850 before the CO2 addition of 3.7?
        .
        Vaughan, if you can’t play nice, don’t play at all.

      • Arfur Bryant

        That last post should have started with ‘Jim D, lolwot and Vaughan…’

      • This notion of acceleration in AR4 was a blatant bit of manipulation by the IPCC. You simply can’t compare trends over different periods. That is quite apart from the question of whether they should be drawing linear trends for nonstationary data at all.

      • Arfur Bryant

        George M,
        Agreed.

    • How many humans?

      The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750

      How ‘industrialized’ was society, prior to 1950? 1900?

      Come on science deniar, think. Just this once. Criminy!

      • Arfur Bryant

        settledscience
        So, complain to the IPCC, not me!

        And I think you’ll find it’s ‘denier’, not ‘deniar’…

      • Have you forgotten the context of your own comment?

        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104326

        The IPCC states that humans started to introduce CO2 and other dry GHGs ‘markedly’ in 1750:
        .
        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-human-and.html
        .
        [“Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750.”]
        .
        So, by 1850 (100 years later) we should have been feeling the ‘full effect’ of the initial input.

        Wrong. Obviously, there has been more anthropogenic warming since 1850 because more anthropogenic greenhouse gases have been emitted since 1850 than from 1750 to 1850.

      • Well, it’s nice of you to try to sneak a post in after this thread has gone quiet, but you are the one who is wrong.

        The point IS within the context. It doesn’t matter if the contribution of humans is greater later in the century; the fact is that, using the 100-year lag theory, by 1850 we should have been seeing all the lag which started in 1750. Every year after that, as the anthropogenic contribution increases, we should be seeing an increasing warming effect (ie accelerating) because of that lag (if it exists).

        Unfortunately for you, and the other warmists, the lack of acceleration in the temperature datasets is a strong indication that the 100-year lag theory is wrong! If the lack of increased warming is due to ‘natural variation’, then this natural variation is not only capable of overwhelming the ‘radiative forcing’ of CO2 but also the hypothesised increase in the ‘CO2 effect’ caused by the lag!

        That was my point and it IS in context. I politely suggest it is YOU that needs to think…

        And it’s still ‘denier…’.

  8. “Stomata data on the right show higher readings and variability than excessively smoothed ice core record on the left. The stomata record aligns with the 19th century measurements as Jaworowski and Beck assert. A Danish stomata record shows levels of 333 ppm 9400 years ago and 348 ppm 9600 years ago.

    EPA declared CO2 a toxic substance and a pollutant. Governments prepare carbon taxes and draconian restrictions crippling economies for a completely non-existent problem. Failed predictions, discredited assumptions, incorrect data did not stop insane policies. Climategate revealed the extent of corruption so more people understand malfeasance and falsities only experts knew or suspected. More important, they are not rejected as conspiracy theorists. Credibility should have collapsed, but political control and insanity persists – at least for a little while longer.” ~Dr. Tim Ball

    • Tim Ball’s credibility collapsed long ago

      • AGWers do like smearing people.

      • “Climategate revealed the extent of corruption so more people understand malfeasance and falsities only experts knew or suspected.”
        Right on, man!
        “Tim Ball’s credibility collapsed long ago”
        AGWers do like smearing people.

      • ClimateGate did show corruption. When people get caught hiding declines, or making one tree in Yamal a global treemometer or keeping papers that are inconvenient out of the journals, that is corruption.

        And we get to point it out. And you get to try and cover it up as per your usual modus operandi.

        What has Tim Ball done to you Nick?

      • Right, because we don’t hear about “The Team” or “Mike Mann” on every occasion?

        The difference is Tim Ball actually deserves to be smeared

      • Chris,
        You keep missing the chance to treat your betters with civility.

      • Ha you’ve got to be kidding. I for one am sick of the likes of Ball talking utter crp and getting a free pass from you guys either because you are all so damn ignorant you don’t even see the blatant errors of your “betters” or because you guys are applying one of the most brazen double standards.

      • Mike,
        Can you delete any emails you may have had with Keith re AR4? Keith will do likewise… Can you also email Gene [Wahl] and get him to do the same? I don’t have his new email address. We will be getting Caspar [Ammann] to do likewise.
        Cheers, Phil

      • I actually don’t give a crp about emails.

        The thing is skeptics are fine at understanding errors that are mundane like Al Gore getting the Sun the wrong temperature. And they are fine at understanding complex errors in details of paleoclimate studies.

        But amazingly, and this is very hard to believe it isn’t deliberate, they utterly fail to understand the errors in stuff midway between the two, eg the errors throughout people like Tim Ball’s gishgallops.

      • lolwot,
        The great thing about true believers is how they can pretend that calls to hide data and act corruptly is OK.
        Chris,
        Why does anyone deserve smearing? Smearing is when someone is damaged by people lying about them.
        I guess smearing makes sense for believers.

      • the thing is I understand why they did those things. And it wasn’t because they were corrupt. It was because they were dealing with malicious fools and they didn’t want to give them an inch.

      • Yes Lolwot, they probably were thinking along the lines you so vividly describe.

        How scientific.

      • Dealing with “skeptics” is not about science. The skeptics dabble in rumour and spin. It’s more like politics than science. Skeptics couldn’t even interpret the CERN cloud paper correctly without spinning it for example.

        The scientists who realize how anti-science “skeptics” are simply cared not to give them any time and that’s all there is to it.

        Of course as a “skeptic” you are blinded about that.

      • As evidenced by the Climategate emails, they also ‘cared’ enough about their cause to support it with dodgy statistical methods, manipulated data and a determined effort to control the journals and the peer review process. Followed by a determined effort to delete the evidence of their malfeasance and slander the people exposing their nefarious activities.

      • The biggest skeptic lie about climategate is the idea that it was the scientists abusing the journals, when in actual fact it was the “SKEPTICS” who abused the journals and subverted peer review.

        I think in psychology they call it “projection”. Or at least it is a matter of blaming scientists for the crimes of the “skeptics”

        I mean seriously some of the crud that skeptics shove through the peer review system and promote (G&T, the Beck paper) would be embarrassing if skeptics’ double standards on the matter weren’t so reckless to human civilization.

      • Simple yes/no question for you, bloke.

        Did Stephen McIntyre have any right to the data for which he enjoined all the readers of his ‘blog to inundate CRU with FOI requests? If “yes,” then on what basis did Stephen McIntyre have any right to those data?

      • settledscience | August 28, 2011 at 12:38 pm |

        Simple yes/no question for you, bloke.

        Did Stephen McIntyre have any right to the data for which he enjoined all the readers of his ‘blog to inundate CRU with FOI requests?

        Yes

        If “yes,” then on what basis did Stephen McIntyre have any right to those data?

        On the same basis that the information commissioner used to force the CRU to release the data to another party who requested it: Freedom of information.

        I know there are lots of reasons why *some scientists* resist the principle; most of them run contrary to the scientific method. However, the law is on the side of those who wish to examine their claims.

      • tallbloke,

        you left out the part where the research was paid for with public funds extracted from us through taxes. Remember that minor bit where Jones received money from the US too?? Then there are the EU Regulations that make ALL climate data public.

      • Wrong, tallbloke. McIntyre has no right to data that is protected by a legally binding confidentiality agreement. He is also not a scientist and has no right to be treated as one.

        Regulation 12(5)(f) applies because the information requested was received by the University on terms that prevent further transmission to non-academics
        Regulation 12(1)(b) mandates that we consider the public interest in any decision to release or refuse information under Regulation 12(4). In this case, we feel that there is a strong public interest in upholding contract terms governing the use of received information. To not do so would be to potentially risk the loss of access to such data in future.
        I apologise that not all of your request will be met but if you have any further information needs in the future then please contact me.

        http://climateaudit.org/2009/07/24/cru-refuses-data-once-again/

        This all has to do with intermediate computations, which McIntyre pitched such a fit about not being given, because he is not competent to perform the same computations on the raw data. He was allowed access to the raw data all along, but he just isn’t smart enough to figure out what to do with it.

    • Yes, Wagathon, we can all agree that Tim Ball talks nonsense a lot. I assume that was your point?

  9. Dyson makes a lot of sense, though I may have picked some other name than carbon eating plants. Land use changes have an impact that may be underestimated. After reading the reference links I can confidently say the residence time is between a few years and a few hundred years. Based on the IPCC science by consensus methodology, their estimate is “likely” on the high side :)

  10. Judith Curry 8/24/11, CO2 residence time

    The question of CO2 residence time reaches into a raft of IPCC errors, already well covered in Climate Etc.

    The residence time of CO2 in the atmosphere is taught in 11th year public school physics as the leaky bucket problem. IPCC’s formula in its TAR and AR4 glossaries is correct, but, alas, used nowhere in those IPCC Reports. It’s 1.5 years if you include IPCC’s leaf water, but if you ignore leaf water as IPCC does, it’s 3.5 years. Jeff Glassman’s response to Vaughan Pratt, “Slaying the Greenhouse Dragon. Part IV” thread, 8/15/11, 3:56 pm.
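The glossary definition at issue (turnover time = atmospheric burden divided by total removal flux) can be sketched numerically. The burden and flux figures below are round AR4-era illustrations of my own, not numbers taken from any comment in this thread:

```python
# Leaky-bucket turnover time: T = M / S, where M is the atmospheric
# CO2 burden and S the total removal flux (the TAR/AR4 glossary form).
# Numbers are round illustrative values, not authoritative.

def turnover_time(burden_gtc, removal_flux_gtc_per_yr):
    """Turnover time in years for a well-mixed reservoir."""
    return burden_gtc / removal_flux_gtc_per_yr

burden = 762.0        # GtC in the atmosphere, roughly AR4-era
ocean_uptake = 92.0   # GtC/yr gross ocean uptake (illustrative)
land_uptake = 120.0   # GtC/yr gross terrestrial uptake (illustrative)

t = turnover_time(burden, ocean_uptake + land_uptake)
print(f"turnover time: {t:.1f} yr")
```

With gross fluxes of this order the quotient lands at a few years, which is the arithmetic behind the "3.5 years" figure; the ~100-year numbers elsewhere in the thread answer a different question, the adjustment time of an excess above equilibrium.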

    See my response to Joel Shore, who teamed with Eschenbach on WUWT to mistakenly propose that the residence time of a slug or pulse of CO2 was somehow different from their freshly minted lifetime of a molecule of CO2. Id., 6:24

    For IPCC’s other formula, the Bern formula, see my response to Bruce, “Time-varying trend in global mean surface temperature” thread, 7/16/11, 12:39 pm.

    Or my response to Fred Moolten, “Energy imbalance” thread, 4/20/11, 12:33 pm re the physically unrealizable model for the uptake of CO2.

    Or my response to Pekka Pirilä, “Radiative transfer discussion” thread, 1/9/11, 11:20 am, and the ensuing, embedded discussion.

    IPCC needs a slow uptake of CO2 to make it well-mixed in the atmosphere, thereby justifying its shift of the MLO CO2 record from regional to global, and thus to attribute the 50 year CO2 bulge to man’s emissions. A fallout beneficial to IPCC’s plan to scare the public was that CO2 would acidify the surface layer, all according to the chemical equations with equilibrium stoichiometric coefficients. IPCC reported the chemical equations (AR4, Eqs. 7.1, 7.2, p. 529), and used the equilibrium coefficients for its approximate ratio CO2:HCO3^−:CO3^2− of 1:100:10, id., a solution given in the literature by the Bjerrum Plot but, along with the coefficients, never mentioned by IPCC.

    The Bern formula outlined in these references (AR4, Table 2.14, p. 213, fn. a.) is an attempt to justify a slow uptake of CO2 in the ocean, quantifying four fates for ACO2: (1) some to the Solubility Pump, (2) some to the Organic Carbon Pump, (3) some to the CaCO3 Counter Pump, and (4) the remainder remaining in the atmosphere. AR4, ¶7.3.1, pp. 511 ff., Figure 7.10, p. 530. The formula implies the existence of partitions or channels to feed the three different processes and the null process. Such partitions or channels don’t exist in the real world. Instead, all atmospheric CO2 is accessible to the surface layer via the solubility pump under Henry’s Law (omitted by IPCC). Contrary to the belief of IPCC and its supporters posting here, the surface layer is never in equilibrium. Also, the two biological pumps can’t work from CO2_g but need ionized CO2_aq. The Revelle Buffer didn’t work when Revelle and Suess first tried it in 1957, and its resurrection by IPCC is a scientific failure, but may prove a political success. In attempting to measure the Revelle Buffer, IPCC accidentally measured Henry’s Law, which the lead author in review concealed so as not to confuse the readers.

    And looking into IPCC’s model in a little more depth, one finds that IPCC has a natural CO2 flux of ±90 GtC/yr or so between the atmosphere and the ocean, a net of zero including the terrestrial flux, while only ACO2 is subject to IPCC’s residence time bottleneck. This is another physical impossibility. The two species of gas differ only in their isotopic mix, 12CO2:13CO2:14CO2, and no assignment of absorption coefficients for these three forms of molecules can begin to satisfy IPCC’s uptake model.

    To get an intuitive feel for how long it takes water to absorb CO2, see Marshall Brain’s video, at blogs.howstuffworks.com/2010/09/17/diy-how-to-carbonate-your-own-water-and-save-big-bucks-on-club-soda/.

    Henry’s Law is instantaneous on even weather scales, much less climate scales. Henry’s Coefficients depend first on temperature and pressure, and only distantly on salinity. IPCC’s notion that the coefficients depend on the carbonate state or pH of the surface layer is novel physics.

    • I have described the relationship between Henry’s law and Revelle factor in this comment

      http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-99650

      The claims made in the message of Jeff Glassman are not correct. The Revelle factor is a result of well known chemistry and not in contradiction with Henry’s law, which applies to undissociated CO2 in seawater, not to the total solubility including bicarbonate ions, whose share is more than 99% of the total solubility. The value of pH of the oceans is really essential and salinity has its effect through its influence on pH. The Revelle factor is not present if pH remains constant, but additional CO2 leads unavoidably to some decrease in pH and the Revelle factor is a manifestation of this change.

      His criticism of “the Bern formula” is also erroneous and appears to tell of not understanding the basics of the approach.

      It’s unfortunate that I don’t know good comprehensive descriptions of all essential arguments. Many sources contain valid partial descriptions, but not the whole argument. One reason for that may be that the basics have been known so long that repeating the arguments has appeared unnecessary and scientifically uninteresting. Another reason is that the knowledge of the subprocesses remains highly inaccurate. Recent publications tell clearly that very little is known with high accuracy.

      On the annual level the uncertainties are tens of percent of the average increase in atmospheric CO2, but over longer periods the relative uncertainties are reduced, as constraints based on knowledge of the carbon reservoirs get gradually tighter. Absolute proofs may remain unachievable, but the reasons to believe that the overall picture is well understood are strong. That means in particular that the impulse response to a sudden pulse of CO2 to the atmosphere has been estimated with fairly good accuracy for periods up to 100-200 years and for levels of concentration within a factor of two of the present.

      What has been published on the long-term response over hundreds of years is less convincing, as there are fewer constraints on that from the empirical data and as it remains dependent on the poorly known dynamics of the oceans. Similarly, arguments on the reduction in ocean uptake with increasing CO2 amounts are based on less well known phenomena.
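The chemical argument here can be checked with a minimal numerical sketch of the surface-seawater carbonate system at fixed alkalinity. The equilibrium constants and alkalinity below are assumed round values for warm surface seawater, chosen by me for illustration, not taken from any comment in this thread:

```python
import math

# Minimal carbonate-equilibrium sketch at fixed alkalinity.
# Constants are rough illustrative values (mol/kg scale, ~25 C).
K0 = 2.8e-2   # Henry's constant, mol/kg/atm (assumed)
K1 = 1.38e-6  # first dissociation constant of CO2(aq) (assumed)
K2 = 1.2e-9   # second dissociation constant (assumed)
TA = 2.3e-3   # carbonate alkalinity, mol/kg, held fixed (assumed)

def dic_at(pco2_atm):
    """DIC (mol/kg) in equilibrium with pCO2 at fixed alkalinity TA."""
    co2 = K0 * pco2_atm                   # Henry's law for CO2(aq)
    a, b = K1 * co2, 2.0 * K1 * K2 * co2  # TA = a/H + b/H^2
    h = (a + math.sqrt(a * a + 4.0 * TA * b)) / (2.0 * TA)
    hco3 = a / h
    co3 = K2 * hco3 / h
    return co2 + hco3 + co3

# Revelle factor: (dpCO2/pCO2) / (dDIC/DIC), evaluated numerically.
p = 380e-6
dic = dic_at(p)
revelle = 0.01 / ((dic_at(1.01 * p) - dic) / dic)
print(f"DIC ~ {dic * 1e6:.0f} umol/kg, Revelle factor ~ {revelle:.1f}")
```

With these assumed constants the factor comes out near 10, inside the 8-to-13 range AR4 is quoted with elsewhere in this thread; holding H+ fixed instead of letting pH respond removes the effect, which is the sense in which the factor is described as a manifestation of the pH change.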

      • Pekka Pirilä 8/25/11, 4:52 am CO2 residence time

        Revelle and Suess (1957) postulated that a peculiar buffer mechanism existed in sea water that created a bottleneck against dissolution of CO2. It was not based on chemistry but was a mere assumption. R&S applied it to the annual amount of industrial CO2 added to the atmosphere, that is, the brunt of anthropogenic CO2. It was the conjecture needed to justify the Callendar effect, now AGW, by having ACO2 accumulate in the atmosphere. This was desirable in 1957 to support the Keeling Curve, just getting underway by Charles Keeling, Revelle’s protégé.

        R&S found,

        It seems therefore quite improbable that an increase in the atmospheric CO2 concentration of as much as 10% could have been caused by industrial fuel combustion during the past century, as Callendar’s statistical analyses indicate. It is possible, however, that such an increase could have taken place as the result of a combination with various other causes. The most obvious ones are the following:

        1) Increase of the average ocean temperature of 1ºC increases PCO2 by about 7%. …

        That increase due to temperature is a result of Henry’s Law, which was mentioned neither in R&S nor by IPCC in TAR or AR4.
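        R&S’s figure of about 7% per °C translates into a quick sensitivity sketch. The 7%/°C number is theirs, quoted above; the baseline concentration and warming are arbitrary illustrations of mine:

```python
# pCO2 sensitivity to ocean temperature, using R&S's quoted ~7% per 1 C.
SENSITIVITY = 0.07  # fractional pCO2 increase per degree C (R&S 1957)

def pco2_scaled(pco2_ref_ppm, delta_t_c):
    """Equilibrium pCO2 after warming by delta_t_c degrees C."""
    return pco2_ref_ppm * (1.0 + SENSITIVITY) ** delta_t_c

p1 = pco2_scaled(280.0, 1.0)  # 1 C of warming from a 280 ppm baseline
print(f"{p1:.1f} ppm")        # about 300 ppm
```

        A single degree of ocean warming thus moves equilibrium pCO2 by roughly 20 ppm on this baseline, which is why R&S listed temperature first among alternative causes of the observed increase.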

        The paper concluded,

        Present data on the total amount of CO2 in the atmosphere, on the rates and mechanisms of CO2 exchange between the sea and the air and between the air and the soils, and on possible fluctuations in marine organic carbon, are insufficient to give an accurate base line for measurement of future changes in atmospheric CO2. An opportunity exists during the International Geophysical Year to obtain much of the necessary information.

        R&S (1957) was actually a pitch for IGY funding.

        R&S were unable to set the parameters for their buffer conjecture to satisfy their boundary conditions. When IPCC reported on attempts to measure the Revelle Buffer factor, it produced a graph showing the Revelle Buffer varied with temperature. AR4, Second Order Draft, Figure 7.3.10 (a). That graph was a linear transformation of Henry’s Law. Nicolas Gruber, a reviewer, said that the Revelle Buffer has almost no temperature sensitivity. Gruber’s distinction was that made by R&S: the increase anticipated for CO2 could have been due to temperature because of the other cause, that associated with Henry’s Law, ¶1), above. The Revelle Buffer is no more and no less than a conjecture for the modification of Henry’s Law for the absorption of CO2. When IPCC attempted to resurrect the Revelle Buffer, it rediscovered Henry’s Law. IPCC’s editor wrote,

        The buffer factor has a considerable T dependency (see Zeebe and Wolf-Gladrow, 2001). However, it is right that in the real ocean, this T dependency is overridden often by other processes such as pCO2 changes, TAlk changes and others. The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader. The text was changed. Bold added, 6/15/06.

        For a complete discussion, see On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, … or CO2: “WHY ME” by following the link at my name in the header.

        In other words, the Revelle Buffer is in dispute as to its temperature dependence, and when measured it looks like Henry’s Law rescaled. It can’t be pinned down because it is a phantom. Regardless, IPCC concealed these discrepancies in its final version of AR4.

        While reasonable doubt exists as to the validity of the Revelle Buffer conjecture, a more important issue is that R&S and IPCC applied this conjecture to ACO2 but not to natural CO2. This selective application lacks any support in physics, and must be deemed an error. Moreover, the magnitude of that error is scaled by the ratio of natural CO2 outgassing from the ocean, about 90 GtC/yr, to ACO2 emissions, about 6 GtC/yr.

        PP: The Revelle factor is a result of well known chemistry and not in contradiction with Henry’s law, which applies to undissociated CO2 in seawater, not to the total solubility including bicarbonate ions, whose share is more than 99% of the total solubility. The value of pH of the oceans is really essential and salinity has its effect through its influence on pH. The Revelle factor is not present if pH remains constant, but additional CO2 leads unavoidably to some decrease in pH and the Revelle factor is a manifestation of this change.

        (1) Notice that Pekka provides no citations. He recites from memory.

        (2) On the solubility of gases in water, and including CO2, the Handbook of Chemistry and Physics, 72d Ed., says,

        Solubilities of those gases which react with water, … carbon dioxide, … are recorded as bulk solubilities, i.e., all chemical species of the gas and its reaction products with water are included. P. 6-3.

        This formulation of gas solubility, unaffected by the postmodern physics of AGW, contradicts Pekka Pirilä’s undissociated CO2 assertion.

        (3) Pekka Pirilä could have found support for his undissociated model in Zeebe & Wolf-Gladrow’s Encyclopedia of Paleoclimatology and Ancient Environments, 2008, (available on line). However, Z&W-G define Henry’s Law only for thermodynamic equilibrium, and then in proportion to the sum of the concentrations of the two molecules of CO2_aq and H2CO3.

        (3.1) In thermodynamic equilibrium, the ratios are known by the solution to the carbonate equations along with the stoichiometric equilibrium constants. The solutions are given by the Bjerrum plot. Wolf-Gladrow, D., CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 6/24/06 (available on line), taken in part from Zeebe and Wolf-Gladrow, 2001, IPCC’s source for carbonate chemistry (AR4, ¶7.3.4.1 Overview of the Ocean Carbon Cycle, p. 528, excluding the Bjerrum plot). Since for thermodynamic equilibrium all the reaction products are in a known proportion, changing from one species, e.g., undissociated CO2, to any mix of species is just a matter of a known scale factor.

        (3.2) The surface layer of the ocean is quite turbulent, contains entrained air, and undergoes thermal exchanges with the atmosphere and the deep ocean. It is never in equilibrium, which Pekka Pirilä, IPCC, and others ignore. The undissociated form of Henry’s Law is not applicable to the real world.

        (4) CO2 is highly soluble in any water, and dissolution always occurs. It proceeds instantaneously on weather to climate scales, accelerated by wind. It does not wait, that is, it is not buffered, for the state of equilibrium to adjust. Solubility does not, in the first, second, or third order, depend on the chemical state of the water, meaning expressly either its pH, its alkalinity, or its DIC ratio, even though Henry’s coefficient might be estimated differently. That coefficient is the ratio of the partial pressure of CO2_g to the concentration of CO2, whether bulk, molecular, or some other mix, in the water. Pekka Pirilä’s phrase Henry’s law, which applies to undissociated CO2 in seawater would be correct if he meant that in some formulations, Henry’s law coefficient refers to the concentration of undissociated CO2 in seawater. As written, it implies that dissolution is regulated by the concentration of undissociated CO2, which is false. The speciation of the dissolved products has no effect on the flux between CO2_g and CO2_aq.

        PP: His criticism of “the Bern formula” is also erroneous and appears to tell of not understanding the basics of the approach. [¶] It’s unfortunate that I don’t know good comprehensive descriptions of all essential arguments.

        Pekka Pirilä is undoubtedly skilled at relating physical processes to their algebraic representations. He is just not applying that skill, and as a result is reasoning incorrectly. His missing good comprehensive description is this. The Bern formula has four coefficients, a_i, which total 1, and which represent the partitioning of the mass of the pulse of CO2 put into the atmosphere. The Bern formula assigns values to those coefficients (21.7%, 25.9%, 33.8%, and 18.6%). That assignment is the algebraic equivalent of creating four reservoirs in the atmosphere to hold CO2 for the four processes. Those reservoirs do not exist. The Bern formula is incompetent.

        PP: That means in particular that the impulse response to a sudden pulse of CO2 to the atmosphere has been estimated with fairly good accuracy for periods up to 100-200 years and for levels of concentration within a factor of two from the present.

        Not by the Bern formula is it known! The Solubility Pump time constant is 1.186 years in the Bern formula, and that represents Henry’s Law uptake of CO2 from the atmosphere by dissolution. Two centuries is 168 time constants. That pulse will be reduced to 5.8E-74 of its initial value in 200 years, known with fairly good accuracy.
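The disputed arithmetic can be made concrete. Below is a minimal sketch of the Bern-style impulse response, using the four coefficients quoted in the comment above; the two slower time constants (172.9 and 18.51 years) are the commonly cited TAR fit values and are my assumption here, since only the 1.186-year constant appears in the thread:

```python
import math

# Bern-style impulse response for an atmospheric CO2 pulse:
# fraction remaining f(t) = a0 + sum_i a_i * exp(-t / tau_i).
# Coefficients as quoted above; slow time constants assumed (TAR fit).
A = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def bern_fraction(t_years):
    """Fraction of a CO2 pulse still airborne after t_years (Bern fit)."""
    return sum(a * (1.0 if tau is None else math.exp(-t_years / tau))
               for a, tau in A)

f200 = bern_fraction(200.0)           # full sum: a tail remains
fast_only = math.exp(-200.0 / 1.186)  # the single fast exponential
print(f"Bern fraction at 200 yr: {f200:.3f}")
print(f"exp(-200/1.186) = {fast_only:.2e}")
```

The full sum keeps roughly 30% of the pulse airborne at 200 years because of the slow terms and the constant a0; collapsing the response to the fastest time constant alone yields the vanishing 5.8E-74 figure. Which of the two pictures is physical is exactly what the two commenters dispute.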

      • Puff, the Magic Dragon
        Lived by the Sea.
        ===========

      • I have not studied how well Revelle and Suess succeeded in their analysis, as that’s of historical interest, not scientific interest.

        The phenomenon that is described by Revelle factor is true. It’s not new speculation or a new hypothesis, but an inescapable consequence of chemistry, more specifically of the equations of chemical balance.

      • Pekka Pirilä 8/25/11, 12:57 pm CO2 residence time

        What is significant is that the Revelle Factor or Revelle Buffer was (a) not based, as you claimed, on chemistry and (b) never worked, not originally in 1957 and not when IPCC tried to resurrect it.

        Do you have any references substantiating your claims (a) that the Revelle Buffer is true and (b) that it is based on chemistry?

      • It’s solid science. Your claims are totally without merit as is your insistence of single exponentials both for the removal of CO2 and transmission of radiation.

        I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.

      • Pekka Pirilä 8/25/11, 2:49 pm CO2 residence time

        PP: It’s solid science. Your claims are totally without merit as is your insistence of single exponentials both for the removal of CO2 and transmission of radiation.

        I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.

        You make claims like these sans references, inaccurate, and flying by the seat of your pants. This is not science, and you are not participating in a serious scientific dialog. Moreover, you manage to draw conclusions (e.g., nonsense) from this muck of errors.

        JAC: The goal of the blog is to discuss scientifically relevant issues, which we are doing. curryja 4/25/11, 9:53 am, CO2 residence time

        I doubt she meant the goal is to discuss scientifically relevant issues subjectively, off the top of the head, shooting from the hip, though that’s too often the result.

        1. I said that the solution to IPCC’s residence time formula in the AR4 Glossary was one exponential. 8/24/11, 9:48 pm. If you have a mathematics reference that says it is otherwise, please provide it. It must be postmodern math.

        2. I never wrote anything so silly as radiation transmission could be represented by a single exponential. I did write that radiation absorption was represented by a single exponential, and showed you how.

        3. Nearly a year ago, I gave you a derivation of the Beer half of the Beer Lambert law resulting in a single exponential. Radiative transfer discussion, 12/23/10, 3:40 pm. You responded,

        PP: Your derivation is mathematically the same one that everyone gives [not one citation], but it applies only to monochromatic radiation [no citation]. It does not help that you state that you do not make that assumption, as it is hidden in your derivation as well [no citation] – unless you claim that all wavelengths are absorbed equally strongly. Bold added, id., 4:48 pm.

        In the ensuing dialog, I asked you to explain how that assumption was hidden in my derivation. Id., 5:35 pm. You failed to do so.

        You did agree that a single exponential was appropriate for radiation absorption. But as shown in this quotation, you continued to say incorrectly that it was valid only for a single wavelength.

        4. Previously, I had written to you

        The Beer-Lambert Law applies an empirical coefficient. Precisely speaking, it applies to a complex spectrum of light for which all the spectral components have the same empirical coefficient, and not restricted just to the same frequency. What we agree(?) on thread, 5/27/11, 10:35 am.

        Later, I criticized the Georgia Tech course EAS 8803 for representing the Beer Lambert law incorrectly as monochrome, explaining that it was unnecessarily restrictive. Planetary energy balance, 8/20/11, 2:47 pm. Your response was this reversal of your position:

        PP: Unfortunately for your case the Beer-Lambert law is valid only, if the absorption coefficient is the same for all wavelengths present, and that requirement is not satisfied in any general case. How serious the error is depends on the case, but it may be very large, as it indeed is in the Earth atmosphere. Id., 3:00 pm.

        Not only did you, once again, supply no citations, but you failed to acknowledge my intervening teachings asserting that same position to you. You write as if you were correcting me when, as your writings make plain, it is you who has been corrected.

        You are learning, but it’s hard to dig it out from your citation-free, zigzag posts. This seat-of-the-pants style from you and others here is a major cause of the failure of the topics on this blog to converge as Dr. Curry seems to wish.
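The mathematical kernel of this exchange can be checked directly: for a mixture of two wavelength bands with different absorption coefficients, each band obeys Beer-Lambert individually, but the band-summed flux is a sum of exponentials, and it is itself a single exponential only when the coefficients coincide. The coefficients and path lengths below are arbitrary illustrations of mine:

```python
import math

# Two-band transmission: T_i(x) = exp(-k_i * x) per band; the
# band-summed flux is a sum of exponentials. Numbers are arbitrary.
k1, k2 = 0.1, 2.0  # absorption coefficients per unit path (assumed)

def total_transmission(x):
    """Band-summed transmitted fraction for two equal-weight bands."""
    return 0.5 * math.exp(-k1 * x) + 0.5 * math.exp(-k2 * x)

# A single exponential has constant d(ln T)/dx; the two-band sum
# does not, so its effective absorption coefficient drifts with path.
def effective_k(x, dx=1e-4):
    return -(math.log(total_transmission(x + dx)) -
             math.log(total_transmission(x))) / dx

k_near, k_far = effective_k(0.1), effective_k(20.0)
print(f"effective k at x=0.1: {k_near:.3f}, at x=20: {k_far:.3f}")
```

At short paths the effective coefficient sits near the weighted mean of k1 and k2; at long paths the strongly absorbed band is exhausted and the decay rate collapses toward the weaker coefficient. A single exponential reproduces both limits only when k1 equals k2, which is the condition both parties end up stating in their own terms.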

      • My comments are based on the fact that “you showed” something that’s mathematically absolutely wrong.

        For the removal of CO2 you presented a very similar error in most basic mathematics.

        In this case you don’t accept that the existence of the Revelle factor is based on very basic chemistry. You stated also that pH doesn’t matter although it tells directly the most important factor that influences the total solubility of CO2 in seawater, the value of Dissolved Inorganic Carbon (DIC). DIC is the value of interest for the carbon storage as its quantity is more than 100 times larger than the solubility of CO2 as gas.

        I have written myself that the Revelle factor is not easy to determine accurately, because it depends on the buffering of the seawater. It’s, however, known to be of the order of 10 and thus very important.

        Your continuing dismissal of simple mathematics as well as most basic and most reliably known physics and chemistry is still incomprehensible to me. Scientific references are not needed, when the conclusions are based directly on knowledge available in every basic textbook of physics or chemistry, or on elementary properties of exponential functions and when these arguments are given in full.

        I have presented my arguments in sufficient detail and many of them several times also in my answers to your comments. If you are in denial mode, the amount of detail doesn’t make any difference. The validity of your claim that I don’t participate in scientific discussion can be judged from my various comments on this site. Sometimes the discussion has proceeded to the point, where my comments become more blunt.

      • Pekka Pirilä, 5/27/11, 9:27 am, CO2 discussion

        Your reply is as empty as your citation list. What error in mathematics are you talking about?

        PP: I have written myself that the Revelle factor is not easy to determine accurately, because it depends on the buffering of the seawater. It’s, however, known to be of the order of 10 and thus very important.

        You claim! Where is your reference? Who got to read it? We need to see it because what you describe here doesn’t make sense. The Revelle Factor doesn’t depend on the buffering of seawater – it IS the buffering of seawater. Revelle and Suess (1957) suggested that it might be of the order of 10, under equilibrium conditions at constant alkalinity. P. 22. IPCC measured it between 8 and 13 (AR4, ¶7.3.4.2, p. 531), another large range, and in the open ocean (neither constant alkalinity nor equilibrium). IPCC measured it with a disputed, temperature dependent co-parameter, which it concealed so as not to confuse the reader.

        You claim the Revelle factor is … very important, but give no reference to where it was ever used, or to what the result of its application might have been. The RF was discussed in IPCC Reports, but never used, and its effects contradicted, as I cited previously for you.

      • You can easily find thousands of references related to the Revelle factor.

        One example of lecture material explaining it is here.

        You didn’t only criticize the Georgia Tech course based on the irrelevant comment that the Beer-Lambert law is valid also for multichromatic radiation when the absorption coefficients happen to be the same, but you implied clearly that this trivial extension would essentially change the outcome. Furthermore you claimed that changes to the presentation of the Beer-Lambert law would be done to justify a somehow suspect conclusion on the behavior in the atmosphere. This proves that your point was not only irrelevant sophistry, but that you really tried to perpetuate wrong conclusions.

        If you are now willing to admit that the Beer-Lambert law does not at all work for the total IR radiation in atmosphere, that’s fine. Is that your present view?

        The controversy on the Bern model is very similar in its nature. You try to discredit it based on false arguments.

      • Mr. Pekka Pirilä, You seem to have all the answers to the infinitely complex questions dealing with the workings of planet Earth. You even were quick to model a relativity experiment using ‘small balls’. Earth’s rotational speed was put forward by you as a difficult mathematical problem that would need much thought…

        I suggested Peter’s model out of hand using the Bible as a possible proxy
        for the Earth’s rotational speed for your experiment, at: 4.167 rps…
        —————————————————————————————
        “David Wojick, Perhaps the fisherman Peter, was a scientist too.
        Yesterday…

        Pekka Pirilä | August 20, 2011 at 10:56 am | Reply
        Using scale models is a well known and useful tool in engineering. The
        models agree never fully on the real system, but in many cases it’s possible
        to construct models that behave in a very similar way. That requires always
        that several different properties are matched. In fluid dynamics Reynold’s
        number is usually the most important parameter that must have the same value in the model as in the real system. Prandl’s number is another common important parameter.

        In the example described by DocMartyn some critical numbers must also be matched to get results of any significance. Without going to the details, it appears totally clear that the small balls must be made to turn much faster than the Earth turns. I cannot tell, what would be the most appropriate length of day, but it might well be closer to 1 min than to 24 hours.

        Such an experiment tells practically nothing unless it’s supported by a
        careful theoretical analysis that tells the right combination of parameters.

        Tom | August 20, 2011 at 12:04 pm | Reply
        What if?…
        II Peter 3:8 But you must not forget this one thing, dear friends: A day is
        like a thousand years to the Lord, and a thousand years is like a day.

        1000 years, times 360 days= 360,000 night-day cycles (NDC)
        Divided by 24 heavenly hours= 15,000 NDC per HH
        Divided by 60 heavenly minutes=250 NDC per HM
        Divided by 60 heavenly seconds=4.167 NDC; as revolutions per second. for your model?

        David, what do you think of the model Peter gives us, above? Is my math
        sound? Would this rotation speed work on the ‘small balls’ representing the
        Earth? Even if we don’t have all the answers, it is helpful to know where to
        look for the answer.”
        ——————————————————————————–

        Pekka Pirilä, what do you say? If this number works for you, what would that mean for our relative ‘size’ to the observer? Fun, to think outside the box.

      • I’m amazed that you can perpetuate all that nonsense, while you seem to understand many other things well.

        Of these two categories, I would put Jeff’s point that “The residence time of CO2 in the atmosphere is taught in 11th year public school physics as the leaky bucket problem” in the latter. Although my high school physics didn’t get into atmospheric physics, I consider the leaky bucket analogy a helpful one for understanding what would happen if total CO2 emissions were to drop by more than 4 GtC/yr. I replied to Jeff’s dragon-part-iv post just now at

        http://judithcurry.com/2011/08/13/slaying-the-greenhouse-dragon-part-iv/#comment-106478

        I’ll reserve judgement on the rest of Jeff’s recent posts; there are only so many arguments one can usefully engage with at one time.
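One way to make the leaky-bucket analogy concrete is a toy first-order reservoir model, in which the excess CO2 above a baseline drains in proportion to the excess. The time constant, starting excess, and emission rates below are round numbers I have assumed for illustration; this is not Pratt’s or anyone else’s calculation:

```python
# Toy leaky bucket: dC/dt = E(t) - C/tau, where C is the excess
# burden above baseline and E the emission rate. All numbers are
# assumed round illustrations, not from any comment above.
TAU = 50.0  # assumed drain time constant, years

def simulate(excess0_gtc, emissions_gtc_per_yr, years, dt=0.1):
    """Euler-step the excess burden under constant emissions."""
    c = excess0_gtc
    for _ in range(int(years / dt)):
        c += dt * (emissions_gtc_per_yr - c / TAU)
    return c

start = 220.0                          # assumed initial excess, GtC
steady = simulate(start, 8.0, 500.0)   # constant 8 GtC/yr emissions
cut = simulate(start, 0.0, 50.0)       # emissions stopped entirely
print(f"steady excess: {steady:.0f} GtC, after 50-yr cut: {cut:.0f} GtC")
```

Under constant inflow the bucket settles at E times tau regardless of where it starts; cut the inflow and the excess decays toward zero on the drain time scale. The whole residence-time argument in this thread is, in effect, a fight over what tau is and whether a single tau exists at all.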

      • Pekka Pirilä 8/25/11, 12:57 pm CO2 residence time

        A PS to my last:

        I have just re-read the AR4 §7.3.4.2 on the Revelle Factor. It concludes nothing relevant to it. The section has a beautiful diagram of the distribution of the measured RF, Figure 7.11, but it’s a half measurement: the old part (a) showing its disputed temperature dependence is gone. The conclusion to Section 7.3.4.2 Ocean Carbon Cycle Processes and Feedbacks to Climate includes this:

        Future changes in ocean circulation and density stratification are still highly uncertain. Both the physical uptake of CO2 by the ocean and changes in the biological cycling of carbon depend on these factors.

        Thus the physical uptake of CO2 by the ocean is still highly uncertain, analysis of the Revelle Factor notwithstanding. The RF got IPCC nowhere. It’s past time for it to learn about Henry’s Law, the concealed chart, and to apply it instantaneously to the surface ocean.

      • And of course you have read and thoroughly comprehended all of the references on that page as well. Right?

        I have just re-read the AR4 §7.3.4.2 on the Revelle Factor.

        Some of the required reading from just that sub-section of a section of Chapter 7 of AR4 is:

        Oceanic carbon exists in several forms: as DIC, DOC, and particulate organic carbon (POC) (living and dead) in an approximate ratio DIC:DOC:POC = 2000:38:1 (about 37,000 GtC DIC: Falkowski et al., 2000 and Sarmiento and Gruber, 2006; 685 GtC DOC: Hansell and Carlson, 1998; and 13 to 23 GtC POC: Eglinton and Repeta, 2004)…

        Seawater can, through inorganic processes, absorb large amounts of CO2 from the atmosphere, because CO2 is a weakly acidic gas and the minerals dissolved in the ocean have over geologic time created a slightly alkaline ocean (surface pH 7.9 to 8.25: Degens et al., 1984; Royal Society, 2005)… Gas exchange rates increase with wind speed (Wanninkhof and McGillis, 1999; Nightingale et al., 2000) .. (Zeebe and Wolf-Gladrow, 2001; see Box 7.3).

        That’s just from the first two paragraphs of 7.3.4.1.

        You whine like a child that Pekka won’t get into scientific details with you, but I don’t believe you have read any real climate science beyond the IPCC’s summary. For the rest of your “information” you’re just regurgitating what you’ve passively absorbed from climate science denial ‘blogs like wuwt, climateoddity and, in at least 50% of its content, climateetc. But if I’m wrong about that, if you have any original thoughts on the matter, then you can pick the one paper cited in section 7.3.4 of AR4 most relevant to your claims about ocean chemistry, and you can tell us
        (1) exactly what it gets wrong
        &
        (2) what’s the right answer.

        That is, if you’re at all serious. Otherwise, just continue the hand-wavy nonsense, and keep pretending that it’s Dr. Pirilä’s burden to explain to you the chemistry of buffers to your satisfaction, as if your failure to understand and then apply first-year college chemistry makes all the legitimate scientists wrong.

      • settledscience, 8/28/11, 1:12 pm, CO2 discussion

        ∅: And of course you have read and thoroughly comprehended all of the references on that page as well. Right?

        Who are you who names himself “settledscience”, the empty set, ∅, and why do you ask unimportant questions? Are you taking a poll, writing a term paper for summer school?

        If you ever get into science, you’ll find that when researching the accuracy of a paper, reading every citation is not required.

        For your term paper, the answer in general to your question is “no”, because

        (1) most of IPCC’s references are behind a paywall,

        (2) many have proved irrelevant to IPCC’s claim for them, and

        (3) citing references without quotations violates a basic principle of scientific writing: references should serve to validate claims, not send the reader off to validate the paper through independent, collateral research.

        By contrast, note that I made several references to ¶7.3.4.2 for the propositions stated. If you want to participate in a scientific dialog instead of just adding noise, check that reference for yourself and raise any errors you might perceive.

        ∅: Some of the required reading from just that sub-section of a section of Chapter 7 of AR4 is: [quotations] That’s just from the first two paragraphs of 7.3.4.1.

        How might you know what is or is not required on a subject for which you have expressed and can express no opinion?

        You are not on the same page with yourself.

        Those two paragraphs you cite contain the following nine references. A little analysis provides a nice object lesson in how IPCC abuses referencing.

        1. Degens, E.T., S. Kempe, and A. Spitzy, 1984: Carbon dioxide: A biogeochemical portrait. In: The Handbook of Environmental Chemistry [Hutzinger, O. (ed.)]. Vol. 1, Part C, Springer-Verlag, Berlin, Heidelberg, pp. 127–215. [$249.50; 10 pdf chapters at 24.95 per chapter.]

        2. Eglinton, T.I., and D.J. Repeta, 2004, Organic matter in the contemporary ocean. In: Treatise on Geochemistry [Holland, H.D., and K.K. Turekian (eds.)]. Volume 6, The Oceans and Marine Geochemistry, Elsevier Pergamon, Amsterdam, pp. 145–180. [$107]

        3. Falkowski, P., et al., 2000: The global carbon cycle: A test of our knowledge of Earth as a system. Science, 290(5490), 291–296. [$15]

        4. Hansell, D.A., and C.A. Carlson, 1998: Deep-ocean gradients in the concentration of dissolved organic carbon. Nature, 395, 263–266. [$32]

        5. Nightingale, P.D., et al., 2000: In situ evaluation of air-sea gas exchange parameterisations using novel conservative and volatile tracers. Global Biogeochem. Cycles, 14(1), 373–387. [Downloaded, superseded and not read.]

        6. Royal Society, 2005: Ocean Acidification Due to Increasing Atmospheric Carbon Dioxide. Policy document 12/05, June 2005, The Royal Society, London, 60 pp., http://www.royalsoc.ac.uk/document.asp?tip=0&id=3249. [Downloaded; acidification wrongly considered real to rely on Bjerrum solution.]

        7. Sarmiento, J.L., and N. Gruber, 2006: Ocean Biogeochemical Dynamics. Princeton University Press, Princeton, NJ, 503 pp. [$75]

        8. Wanninkhof, R., and W.R. McGillis, 1999: A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13), 1889–1892. [Downloaded & reviewed. Relies on pCO2 from Takahashi, which produces incorrect result.]

        9. Zeebe, R.E., and D. Wolf-Gladrow, 2001: CO2 in Seawater: Equilibrium, Kinetics, Isotopes. Elsevier Oceanography Series 65, Elsevier, Amsterdam, 346 pp. [$101].

        Are you pretending to have read these?

        Six of your own citations are behind a paywall: the purchase price for all six is $579.50, which would have to be paid blind in the hope of answering an impudent, incompetent, and irrelevant inquiry. Major bits of reference 9 are available in a separate publication by Dieter Wolf-Gladrow, which I have read and on occasion cited. IPCC should be required under the US and UK Freedom of Information Acts to provide accessible, text-readable copies of all its citations.

        The other three I have downloaded. Two are incompetent, one relying on the Takahashi method, which produces an incorrect result (see Takahashi method, rocketscientistsjournal.com, On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc., ¶5), and the other relying on the Bjerrum solution for a surface layer in equilibrium, which does not exist (see id). The third I have not read because IPCC deemed its results contradicted by ref. 9.

        ∅: I don’t believe you have read any real climate science beyond the IPCC’s summary. … But if I’m wrong about that, if you have any original thoughts on the matter, then you can pick the one paper cited in section 7.3.4 of AR4 most relevant to your claims about ocean chemistry, and you can tell us (1) exactly what it gets wrong & (2) what’s the right answer.

        1.) Why do you imagine anyone cares about your opinion?

        2.) Yes, you are wrong about that.

        3.) If any serious reader, or as improbable as it might be, yourself, is interested, I have, above, provided sufficient reasons with links to complete answers. I have no reason or desire to restrict myself to “one paper” as you ask. Short answers include

        (a) the surface layer is not in equilibrium, so the Bjerrum solution to the carbonate chemical equations for equilibrium is not applicable,

        (b) the Takahashi diagram provides a small fraction of the CO2 flux reported elsewhere by IPCC, so gets the carbon cycle wrong,

        (c) IPCC wrongly applies the imaginary CO2 buffer to ACO2 and not to natural CO2, a physical impossibility, and

        (d) the right answer is that the surface layer stores excess CO2_aq instead of the atmosphere storing excess CO2_g. And of course none of it is relevant to the basic climate question because Earth’s surface temperature follows the Sun with a simple transfer function.

        ∅: Otherwise, just continue the hand-wavy nonsense, and keep pretending that it’s Dr. Pirilä’s burden to explain to you the chemistry of buffers to your satisfaction, as if your failure to understand and then apply first-year college chemistry makes all the legitimate scientists wrong.

        I was waving bye-bye to ∅.

        I pretend nothing. Dr. Pirilä is quite wrong about the Revelle Factor, and most recently about climate science invalidating the Beer-Lambert Law. He is stuck supporting AGW, a belief system, thereby inheriting all IPCC’s many errors, and leaving himself to argue sans references.

      • Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.

        (d) … And of course none of it is relevant to the basic climate question because Earth’s surface temperature follows the Sun with a simple transfer function.

        And of course, now that I know that Jeff denies the Settled Science that is the Greenhouse Effect, I need not bother speaking to it ever again.

      • Jeff it appears does not believe that excess CO2 actually exists in the atmosphere. I say this because the title of the article he cites is called:
        “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere”.

        Do I have that interpretation right?

      • settledscience bloviates,

        “Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.”

        Here is your chance to be the big man on campus. Simply link the appropriate studies and/or papers from the literature to prove your point.

        I would suggest you reread Mr. Glassman’s post so that you actually understand what he is saying also.

      • settledscience 9/5/11, 7:47 pm, CO2 residence time

        ∅ [empty set]: Clearly Jeff has not read any of the original research dealing with the topic of his assertion, and has thus failed to even attempt what is his burden, to disprove legitimate scientific findings about dissolved CO2.

        I suppose you don’t care about self-respect judging by your anonymity. But if you have any, and don’t want any longer to be seen as bloviat[ing] [thanks to kuhnkat, 9.8.11, 3:42 pm], here’s what you need to do in trying to participate in an intelligent discussion, to say nothing of science.

        (1) Make a point, e.g., describe something you think is not in accord with original research.

        (2) State your point completely, quoting from any sources as necessary. Don’t make the reader do your research for you.

        (3) Provide references, never behind a pay wall, so the reader can check your interpretation of the source.

        Otherwise, you come off as noise, and as a sycophant — someone who is trying to gain respect by association, in this case, the AGW movement.

      • WebHubTelescope 9/5/11, 10:45 pm, CO2 residence time

        WHT: Jeff it appears does not believe that excess CO2 actually exists in the atmosphere. I say this because the title of the article he cites is called:

        “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere”.

        Do I have that interpretation right?

        No.

        Johnny Carson had a routine as Carnac the Magnificent in which he would conjure the contents of a sealed envelope by pressing it to his turbaned head. It isn’t working for you. Why don’t you go beyond the title and actually read the article?

        The article answers the question of why the CO2 that IS in the atmosphere is not an accumulated backlog. It begins,

        The Acquittal [of Carbon Dioxide] shows that carbon dioxide did not accumulate in the atmosphere during the paleo era of the Vostok ice cores. If it had, the fit of the complement of the solubility curve might have been improved by the addition of a constant. It was not. And because the CO2 presumably still follows the complement of the solubility curve, it should be increasing during the modern era of global warming in recovery from Earth’s various ice epochs. These conclusions find support in a number of points in the IPCC reports.

        The remainder of the paper contains 18 enumerated reasons, in gory detail, including tangential considerations of the fact that CO2 does not accumulate. Read on, see if you disagree with any, and then follow my advice to skepticalscience on 9/8/11, 6:31 pm.

      • (1) Make a point, e.g., describe something you think is not in accord with original research.

        That’s all I do is original research. If it’s not original it’s not challenging and therefore not much fun.

      • WebHubTelescope 9/9/11, 3:16 am, CO2 Residence Time

        WHT: That’s all I do is original research.

        You answer criticism of your empty post with another empty post. Besides, you contradict yourself. It’s not all you do – you also submit empty posts.

        Even if what you claim were true, that you do original research, how is that relevant to anything here? History is littered with respected scientists, much less anonymous posters, whose pottery is thoroughly cracked (crazed) outside their narrow confines. Climate Etc. is surfeited with contributors, some who use a real name with doctor attached, who hold AGW as a matter of belief and defend it tooth-and-nail, of course, like WHT, liberally spiced with ad hominem, off-point, and sans references.

      • like WHT, liberally spiced with ad hominem, off-point, and sans references.

        thanks for your support.

      • Jeff, sorry for the late contribution. Wasn’t aware of this discussion during my absence, but here is a nice empirical proof of the Revelle factor. Total carbon (DIC) is measured in seawater at several places; the longest series are in Hawaii and Bermuda. The latter can be found here:
        http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
        The period displayed is 20 years, 1983-2003.
        The atmospheric CO2 levels in that period increased about 10%, the pCO2 (that is free dissolved CO2) of the oceans increased about 8% (which obeys Henry’s Law, with some delay), but DIC increased only with 0.8%, about a factor 10 compared to the free CO2 in the oceans, even more for atmospheric CO2.

        As the total amount of carbon in the atmosphere is about 800 GtC and in the ocean’s surface about 1000 GtC, it is obvious that a 10% increase of CO2 in the atmosphere only gives less than a 1% increase in total carbon in the ocean’s surface.

        The CO2 exchange between the atmosphere and the upper part of the oceans is very fast (e-folding time about 1.5 years) but is limited to not more than 10% of any change in the atmosphere. The rest of the roughly 50% of the human emissions that disappears (as mass, not as individual molecules) is absorbed by much slower sinks in other reservoirs (deep oceans, more permanent storage in the biosphere).
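        The arithmetic can be sketched in a few lines of Python (illustrative only; the percentages and GtC figures are the round numbers quoted in this comment, not new data):

```python
# Check of the round numbers quoted above (Bermuda/Hawaii series era, 1983-2003).
atm_increase = 0.10      # ~10% rise in atmospheric CO2
pco2aq_increase = 0.08   # ~8% rise in free dissolved CO2 ("pCO2 of the oceans")
dic_increase = 0.008     # ~0.8% rise in total dissolved inorganic carbon (DIC)

# Implied Revelle-style damping: relative pCO2 change per relative DIC change.
revelle_like = pco2aq_increase / dic_increase   # ~10

# Mass bookkeeping with the reservoir sizes quoted above.
atm_carbon = 800.0       # GtC in the atmosphere (round figure)
surface_carbon = 1000.0  # GtC in the ocean surface layer (round figure)
print(revelle_like)                    # ~10
print(atm_carbon * atm_increase)       # ~80 GtC added to the air
print(surface_carbon * dic_increase)   # ~8 GtC added to surface DIC
```

        So a 10% rise in the atmosphere shows up as well under a 1% rise in surface-layer carbon, which is the point of the comment.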

      • Ferdinand Engelbeen 9/9/11, 4:57 pm, CO2 Residence Time

        Your calculation about a factor of 10 for the Revelle Factor is a single calculation for a parameter that IPCC says ranges from 8 to 16 over the ocean. AR4, Figure 7.11(b).

        http://www.rocketscientistsjournal.com/2007/06/_res/F7-11.jpg

        Your link is to an anonymous article from the Bermuda Institute of Ocean Sciences (BIOS) that features a chart prepared expressly for AR4. IPCC used that chart in part, but rejected it in relevant part. See AR4, Figure 5.9, p. 404. IPCC kept data from BATS and ESTOC, added data from HOT, but rejected data from ALOHA and Hydrostation S. More importantly, IPCC rejected the BIOS curve of normalized DIC, the parameter on which you rely for your calculation.

        You have neither proof nor evidence of the Revelle factor. BIOS does not mention the Revelle factor. Where IPCC discussed the Revelle factor (AR4, ¶7.3.4, pp. 528 et seq.), it did not mention the BIOS data.

        The most complete picture of the Revelle factor appeared in AR4 Second Order draft as Figure 7.3.10.

        http://www.rocketscientistsjournal.com/2007/06/_res/F7-3-10.jpg

        The global map of the Revelle factor IPCC attributes to Sabine et al. (2004a), co-authored by Nicolas Gruber. The Revelle factor variation with temperature (Figure 7.3.10a), attributed to Zeebe and Wolf-Gladrow (2001) ($101), is a simple linear transformation of the solubility curve. Gruber objected, and his objection resulted in concealment of the temperature sensitivity curve:

        Comment: “buffer factor decreases with rising seawater temperature…” This is a common misconception. The buffer factor itself has almost no temperature sensitivity (in an isochemical situation). In contrast, the buffer factor strongly depends on the DIC to Alk ratio. The reason why there is an apparent temperature sensitivity is because of the temperature dependent solubility of total DIC (note that (a) is not isochemical, it is done with a constant pCO2, i.e. DIC will decrease with increasing temperature). In the ocean, surface ocean DIC and Alk are controlled by a myriad of processes, including temperature, so it is wrong to suggest that the spatial distribution of the buffer factor shown in Figure 7.3.10c is driven by temperature. [Nicolas Gruber (Reviewer’s comment ID #: 307-70)]

        [Editor’s] Notes: Taken into account. The buffer factor has a considerable T dependency (see Zeebe and Wolf-Gladrow, 2001). However, it is right that in the real ocean, this T dependency is overridden often by other processes such as pCO2 changes, TAlk changes and others. The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader. The text was changed. Bold added, AR4, Second-Order Draft.

        The Revelle Buffer was a conjecture, a mere relationship between anthropogenic CO2 parameters in the ocean and in the atmosphere. The authors, Revelle & Suess (1957), wanted to validate the Callendar Effect, a conjecture that manmade CO2 would build up in the atmosphere and cause global warming. To do so, they expressed the ratio of CO2 from fossil fuels going into the atmosphere, r, to that going into the ocean, s, as a constant factor, λ, times the ratio of total carbon in the atmosphere, A_0, to that in the ocean at equilibrium, S_0: r/s = λ*A_0/S_0. The factor λ is the Revelle buffer factor, and they guessed it might be about 10. However, they were unable to find a realistic set of parameters to produce a factor of 10, and left the problem as one for IGY funding.
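        To see what the conjecture implies numerically, here is a sketch in Python. The reservoir figures are for arithmetic only (the round 800 and 1000 GtC quoted elsewhere in this thread for the atmosphere and surface layer); Revelle & Suess did not supply these values, and which reservoir S_0 denotes is part of the dispute:

```python
# Revelle & Suess's relation: r/s = lambda * A0/S0, with lambda guessed ~10.
lam = 10.0                 # the guessed buffer factor
A0, S0 = 800.0, 1000.0     # GtC, illustrative reservoir sizes only

r_over_s = lam * A0 / S0             # ratio of new emissions: air vs. ocean
airborne_fraction = r_over_s / (1.0 + r_over_s)
print(r_over_s)                      # 8.0
print(round(airborne_fraction, 3))   # 0.889
```

        With these inputs nearly 90% of new emissions would stay airborne; with a whole-ocean S_0 the same λ yields a much smaller airborne fraction, which is why the choice of reservoir matters.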

        IPCC tried to rehabilitate the buffer for the same reason. If the ratio proved to be even approximately a constant over the ocean, meaning that r and s were correlated, then a physicist might discover a cause and effect relationship. The ratio is not even approximately constant, but varies by ±50%. Furthermore, one set of experts, Zeebe & Wolf-Gladrow, say the buffer is no different than solubility, while another, Gruber, denies it.

        The Revelle factor is useless. All sorts of ratios can be postulated, and some might someday be meaningful. The Revelle buffer factor is not one of them.

        The analyses from Revelle & Suess; Sabine, et al; Gruber; IPCC, Zeebe & Wolf-Gladrow, and BIOS all rely on thermodynamic equilibrium in the surface layer. Surface layer equilibrium is ludicrous on its face. The relationship between the carbonate components when not in equilibrium is unknown.

        Lastly, although this is not the end of the nonsense, the idea that the ocean buffers against dissolution of anthropogenic CO2 at 6 GtC/yr but not at all against natural CO2 at 90 GtC/yr is equally ludicrous.

        The ocean absorbs CO2 according to Henry’s Law. The absorption depends on the partial pressure of CO2_g and SST, and somewhat on salinity, according to an unquantified solubility curve. It does not depend on surface layer pH or ionization, and it does not accumulate in the atmosphere.

      • Jeff, that is a long exposition. It seems that there is still a lot of discussion about the exact value of the Revelle factor. But that is not the point under discussion (it doesn’t matter if the factor is 8 or 16; it matters that there is a factor).
        I haven’t looked at the AR4 references yet, but regardless of the reasons why some results were rejected, nDIC or DIC without normalizing is measured in many places and all show a much slower increase than pCO2(aq) or pCO2(atm). According to you that should be impossible, as DIC should increase at the same rate as pCO2 in air and water…

        But the main points of discussion are in your last paragraphs:

        Lastly, although this is not the end of the nonsense, the idea that the ocean buffers against dissolution of anthropogenic CO2 at 6 GtC/yr but not at all against natural CO2 at 90 GtC/yr is equally ludicrous.

        You are misinterpreting the facts: The 90 GtC/yr is the effect of temperature on CO2 solubility: partly causing continuous CO2 releases from the tropic oceans (mainly the Pacific deep ocean upwelling) and continuous CO2 sinks near the poles (mainly the THC sink in the NE Atlantic); partly the warming and cooling of the mid-latitude oceans over the seasons. The net effect of this is zero CO2 change over a full seasonal cycle at zero average temperature change over a year (16 microatm in the water or ~16 ppmv in the atmosphere for 1°C temperature change).
        In contrast, any increase in the atmosphere pushes more CO2 in average in the ocean surface at the same temperature. That is where and when the Revelle factor is working.

        The ocean absorbs CO2 according to Henry’s Law. The absorption depends on the partial pressure of CO2_g and SST, and somewhat on salinity, according to an unquantified solubility curve.

        That is right and wrong: the ocean absorbs CO2 according to Henry’s Law. But Henry’s Law only is about free CO2 in the waters. For a 10% increase of CO2 in the atmosphere at the same temperature, the amount of free CO2 in the ocean surface waters will increase by 10%. If there were no following reactions, that increase of free CO2 would result in an increase of total carbon in solution of only 0.1%, as free CO2 is less than 1% of total carbon in solution.

        But there are following reactions. Free CO2 is converted into carbonic acid, which dissociates into bicarbonate and carbonate ions and hydrogen ions. Thus if you add more CO2 to the ocean waters, the pH lowers. But a lower pH means that the dissociation reaction runs the other way, back toward bicarbonate and free CO2. In other words, the amount of total CO2 in solution increases beyond the 0.1% from free CO2 alone, because of the further dissociation reactions, but not to the full extent of the increase in the atmosphere, because of the counteraction caused by the lowering of the pH. See further:
        http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
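        A minimal speciation sketch in Python makes the point concrete. The pK values are approximate seawater dissociation constants (roughly 25°C, S = 35) assumed here for illustration; ratios like the 1:100:10 and 1:175:21 mentioned elsewhere in this thread come from the same kind of calculation with slightly different pH and constants:

```python
# Equilibrium speciation of dissolved carbon at a given pH.
pK1, pK2 = 5.85, 8.97   # approximate seawater values (assumed, not from the thread)
pH = 8.1

hco3_per_co2 = 10 ** (pH - pK1)            # [HCO3-] / [CO2(aq)]
co3_per_hco3 = 10 ** (pH - pK2)            # [CO3--] / [HCO3-]
co3_per_co2 = hco3_per_co2 * co3_per_hco3  # [CO3--] / [CO2(aq)]

# Ratio CO2 : HCO3- : CO3--, normalized to free CO2 = 1.
print(1, round(hco3_per_co2), round(co3_per_co2))   # about 1 : 178 : 24
```

        Free CO2 is indeed well under 1% of the dissolved carbon, which is why the dissociation reactions, and hence pH, control how much extra carbon the water takes up.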

        It does not depend on surface layer pH or ionization

        Simple proof that you are wrong: make a solution of soda or bicarbonate and add some acetic acid. Lots of CO2 bubbles up, because the amount of free CO2 in the solution gets far beyond Henry’s Law’s “normal” concentration.

        it does not accumulate in the atmosphere

        The increase in the atmosphere is measured, and the human emissions are double that. If you assume that some natural cause follows the human emissions at such an incredibly fixed ratio, while the human emissions disappear somewhere without leaving a trace in the atmosphere, that is your opinion, not based on any observed facts.

      • Ferdinand Engelbeen, 9/10/11, 7:06 pm, CO2 Residence Time

        Did I say welcome back to the discussions? Well, welcome back.

        FE: It seems that there still is a lot of discussion about the exact height of the Revelle factor. But that is not the point in discussion (it doesn’t matter if the factor is 8 or 16, it matters that there is a factor)

        One could make a ratio out of any two random variables, but the mere fact that dividing one by the other produces a number does not make that number meaningful. Sometimes RVs will appear correlated, suggesting to an investigator that measuring their correlation might be fruitful. That is not the case with the fossil fuel CO2 emissions, because they cannot be measured separately from the natural CO2. Of course, if anthropogenic CO2 were directly measurable, no one would need the Revelle factor.

        FE: nDIC or DIC without normalizing is measured in many places and all show a much slower increase than pCO2(aq) or pCO2(atm).

        and previously,

        FE: The atmospheric CO2 levels in that period increased about 10%, the pCO2 (that is free dissolved CO2) of the oceans increased about 8% (which obeys Henry’s Law, with some delay), but DIC increased only with 0.8%, about a factor 10 compared to the free CO2 in the oceans, even more for atmospheric CO2. 9/9/11, 4:57 pm.

        The parameter pCO2(aq), which I assume is the same as your pCO2 … of the oceans, doesn’t exist. CO2 has no partial pressure when dissolved. Instead, pCO2(g), g for the gas in thermodynamic equilibrium with the water, is deemed to be pCO2(aq). I have not reviewed the measurements or methods by which investigators have estimated pCO2(aq). I do know, however, that they report a difference between pCO2(atm) and pCO2(aq), and that that difference, coupled with wind speed, has been taken as the cause of the air-sea flux of CO2. Takahashi used these data to produce his map of CO2 flux across the globe. Unfortunately, the sum of his positive and negative fluxes is about an order of magnitude less than the positive and negative fluxes estimated by other means and reported by IPCC and others. For a full discussion, see ¶#5, and especially Figures 1 and 1A, and Equation (1), On Why CO2 Is Known Not To Have Accumulated in the Atmosphere …,

        http://rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html

        Takahashi’s results from application of pCO2(aq) deflate any confidence one might assign to pCO2(aq). His map depends on the differential pressure of pCO2(g) and pCO2(aq), where the latter does not exist except as it might be created in the laboratory procedures.

        FE: You are misinterpreting the facts: The 90 GtC/yr is the effect of temperature on CO2 solubility: partly causing continuous CO2 releases from the tropic oceans (mainly the Pacific deep ocean upwelling) and continuous CO2 sinks near the poles (mainly the THC sink in the NE Atlantic); partly the warming and cooling of the mid-latitude oceans over the seasons.

        I agree with your geography, and in fact I think the role of the THC in CO2 was first postulated in The Acquittal of Carbon Dioxide in October, 2006. If you have an earlier reference, please let me know.

        However, I don’t put any stock in the seasonal effects, nor in the warming and cooling, with the attendant breathing of CO2, in the ocean gyres. These would have a net effect of zero in the +92 GtC/yr and -90 GtC/yr annual air/sea fluxes. The upwelling caused primarily by Ekman pumping appears to be the exit of the THC, where deep, cold, CO2-saturated waters are dumped on the surface to warm and outgas.

        FE: The net effect of this is zero CO2 change over a full seasonal cycle at zero average temperature change over a year (16 microatm in the water or ~16 ppmv in the atmosphere for 1°C temperature change).

        For air-sea CO2 flux, I model surface oceanography as the superposition of three components: seasonal effects, gyres, and the year-long transport of surface waters from the exit of the THC to its headwaters at the poles. As I said, the first two should contribute a net of zero to the annual air-sea CO2 fluxes of about 90 GtC/yr. The mechanisms of outgassing and recharging are quite different, though both depend on Henry’s Law. That outgassing occurs continuously in bulk at the Eastern Equatorial Pacific and at a couple of other hot spots. The recharging occurs distributed over the entire ocean surface due to the cooling in the transport current, the return path of water for the THC. This action causes a high volume river of CO2 to spiral around the globe. This river cannot be represented successfully by the net uptake of 2.2 GtC/yr, nor by parceling that net over the globe.

        FE: In contrast, any increase in the atmosphere pushes more CO2 in average in the ocean surface at the same temperature. That is where and when the Revelle factor is working.

        First, any increase in atmospheric CO2 would cause more CO2 to be dissolved in the surface layer according to Henry’s Law. However, ACO2 is only about 6 GtC/yr, and if it weren’t for the flap adopted by IPCC that AGW exists, the anthropogenic contribution would be negligible. It only contributes about 3% to the total CO2 flux from land and ocean into the atmosphere, and about half that if you include the leaf water that IPCC introduced and dropped – all this in a model that has yet to be successful in estimating global warming within the first order of magnitude, and that is turning out to be invalid with respect to climate sensitivity based on satellite data.

        The AGW model raises the art of relying on small differences between large numbers to a new height by neglecting effects and putting aside any noise masking the signals, both through fantastic assumptions. The danger in relying on a small difference between large numbers used to be taught in grade school; now they don’t even teach grammar in the US. (Schooling here is in its fourth generation of teaching the Delicate Blue Planet Model, environmentalism, and self-esteem trumping everything else.)

        Examples of the incredible assumptions include these: (1) Cloud cover can be successfully parameterized as a statistical constant. (2) Shortwave and longwave radiations tend to balance. (3) Radiative transfer will produce sufficiently accurate longwave global results if only the atmosphere could be correctly modeled. (4) The surface layer is in thermodynamic equilibrium. (5) Amplification of solar variations does not exist. And with regard to your claim that the Revelle factor is working, (6) the ocean buffers against dissolution of anthropogenic CO2 but impedes natural CO2 not at all.

        FE: Henry’s Law only is about free CO2 in the waters.

        You are correct enough (because the surface layer is never in equilibrium), and that contradicts the Revelle factor. Formally, Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-). On the other hand, Henry’s Coefficients are only tabulated for thermodynamic equilibrium, in which case the ratio would be known by the Bjerrum solution to the carbonate equations. However, CO2 is always highly soluble in water, equilibrium or not. In the dynamic situation, the ratio could be anything, (neither IPCC’s 1:100:10 nor Dr. King’s 1:175:21), with the formation of HCO3(-) being almost instantaneous but CO3(2-) lagging.

        FE: For a 10% increase of CO2 in the atmosphere at the same temperature, the amount of free CO2 in the ocean surface waters will increase by 10%. If there were no following reactions, that increase of free CO2 would result in an increase of total carbon in solution of only 0.1%, as free CO2 is less than 1% of total carbon in solution. But there are following reactions. Free CO2 is converted into carbonic acid, which dissociates into bicarbonate and carbonate ions and hydrogen ions. Thus if you add more CO2 to the ocean waters, the pH lowers. But a lower pH means that the dissociation reaction runs the other way, back toward bicarbonate and free CO2. In other words, the amount of total CO2 in solution increases beyond the 0.1% from free CO2 alone, because of the further dissociation reactions, but not to the full extent of the increase in the atmosphere, because of the counteraction caused by the lowering of the pH. Bold in original.

        Your rationale reads like IPCC’s. AR4, ¶7.3.4.1, p. 528, and Box 7.3, p. 529. IPCC relies on equations and on the Revelle factor (AR4, ¶7.3.4.1, p. 531), both attributed to Zeebe & Wolf-Gladrow (2001) ($101). Revelle’s original factor was a conjecture, the ratio of two fractions: the numerator was the fraction of new fossil fuel emissions going into the atmosphere divided by the fraction going into the ocean, and the denominator was the ratio of CO2 in the atmosphere to that in the ocean, i.e., as I wrote earlier, λ = (r/s)/(A_0/S_0).

        ZWG formulated the Revelle buffer differently. The numerator was the ratio of the change in [CO2] to the total [CO2], where [CO2] is the concentration of unionized CO2 in sea water, and the denominator was the ratio of the change in DIC to total DIC, i.e., (Δ[CO2]/[CO2])/(ΔDIC/DIC). One of the differences in the ZWG and IPCC formulation was that it removed the restriction to anthropogenic CO2 in R&S’s formula, making the Revelle factor applicable to both natural and anthropogenic CO2 (ACO2). Then IPCC, and by implication ZWG, applied the Revelle factor only to ACO2 and NOT to natural CO2. Silly assumption (6), above.

        Moreover, ZWG’s formulation required Total Alkalinity to be constant, a condition IPCC ignored. IPCC also confused terms by changing DIC to [DIC] (the concentration of a concentration) and by calling Δ[CO2]/[CO2] the fractional change in seawater pCO2. As ZWG say, “Doubling of pCO2 –> doubling of [CO2]”, NOT “Doubling of pCO2 ⊧ doubling of [CO2]”. Wolf-Gladrow, D., CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 6/24/06, p. 56. ZWG is careful not to confuse pCO2 and [CO2(aq)].
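        For what the ZWG definition yields numerically under its own equilibrium and constant-alkalinity assumptions, here is a sketch in Python. The dissociation constants and the DIC and alkalinity values are round, assumed figures, not taken from ZWG or the thread; only carbonate alkalinity is carried (borate and water terms are ignored):

```python
# ZWG-style Revelle factor: perturb DIC at constant carbonate alkalinity,
# solve the equilibrium for [H+], and compare relative changes.
K1 = 10 ** -5.85   # first dissociation constant of CO2 in seawater (assumed)
K2 = 10 ** -8.97   # second dissociation constant (assumed)

def co2_from_dic(dic, alk):
    """Bisect on [H+] until carbonate alkalinity matches, then return [CO2(aq)]."""
    lo, hi = 1e-9, 1e-7   # bracket [H+] between pH 9 and pH 7
    for _ in range(100):
        h = (lo + hi) / 2.0
        denom = h * h + K1 * h + K1 * K2
        alk_calc = dic * (K1 * h + 2.0 * K1 * K2) / denom
        if alk_calc > alk:    # too alkaline: [H+] is too small
            lo = h
        else:
            hi = h
    return dic * h * h / denom   # unionized CO2 fraction of DIC

dic, alk = 2.0e-3, 2.3e-3                # mol/kg, round illustrative values
co2_0 = co2_from_dic(dic, alk)
co2_1 = co2_from_dic(dic * 1.01, alk)    # +1% DIC at constant alkalinity

# R = (d[CO2]/[CO2]) / (dDIC/DIC), the ZWG form quoted above.
R = ((co2_1 - co2_0) / co2_0) / 0.01
print(round(R, 1))   # lands inside the 8-16 range IPCC quotes
```

        The point of contention remains whether the equilibrium assumption behind this calculation holds in the real surface layer.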

        Most importantly, in all formulations, whether from Revelle & Suess, ZWG, or IPCC, the equations apply only in equilibrium. See silly assumption (4), above. IPCC specifies after re-equilibration. AR4, ¶7.3.4.2, p. 531. It continues, saying,

        Due to the slow CaCO3 buffering mechanism (and the slow silicate weathering), atmospheric pCO2 will approach a new equilibrium asymptotically only after several tens of thousands of years (Archer, 2005; Figure 7.12). Id.

        So according to IPCC, the Revelle factor applies only after several tens of thousands of years. This is all quite inane. As I have repeated and you have answered below, pCO2 (which is only atmospheric) and Henry’s Law are unaware of the sequestration of CO2 by the two biological pumps. Those two pumps are NOT connected to the atmosphere as IPCC shows here:

        http://www.rocketscientistsjournal.com/2007/06/_res/10AR4F7-10.jpg

        Those biological pumps feed on ions from the surface layer, not on airborne gaseous CO2. By never being in thermodynamic equilibrium, the surface layer acts to isolate Henry’s Law from ocean chemistry. The surface layer need not be in equilibrium, yet IPCC assumes it is (a) in order to create a bottleneck, (b) thereby creating the bulge in MLO CO2, which still (c) is inadequate to cause the observed global warming, so IPCC (d) makes CO2 initiate warming, releasing the potent greenhouse gas, water vapor, which (e) fortunately for AGW, has no effect on cloud cover. Fiction has a snowballing effect.

        And the ultimate inanity perhaps, the surface layer was never in equilibrium and so cannot and will not re-equilibrate.

        Ferd, your calculations require you to establish first that the surface layer is in equilibrium, so that you can rely on the implications of the stoichiometric equilibrium constants (ZWG), and after that you need to show the sensitivity of your calculations to whatever assumptions or calculations you choose for Total Alkalinity.

        As ZWG says,

        [CO2], [HCO3(-)], [CO3(2-)], and pH can be calculated from DIC and TA. Quoted in Wolf-Gladrow, 6/24/06, p. 50.

        In other words, you cannot calculate [CO2] from DIC without TA.

        FE: See further: http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf

        Your link at Warwick University is to 2004 class notes by Dr. G. P. King. He says,

        In contrast to nitrogen and oxygen, most CO2 of the combined atmosphere–ocean system is dissolved in water (98%). This is because CO2 reacts with water and forms bicarbonate (HCO3(-)) and carbonate (CO3(2-)) ions. King, p. 2.

        The dissolution of CO2 in water compared to N2 and O2 has to do with their respective solubility coefficients and not because of the chemical equations of carbonates.

        King derives an expression for the Revelle Factor as RF = [DIC]/[CO3(2-)], an uninteresting ratio on the right hand side. See King, eqs. (25) and (26), p. 6. Because that ratio is uninteresting, King has found no inherent property or use for the RF. He merely made RF appear algebraically after a number of assumptions.

        In his derivation, he says Total Alkalinity is approximately constant (Eq. (19), p. 5, which is close enough), substitutes [DIC] for DIC (Eq. (12)), and assumes [CO2] is negligible (Eq. (20)). He then differentiates the result (Eq. (21)), effectively assuming that because [CO2]_ml (ml for mixed layer) is small, its differential must be small! That is not sound mathematics.

        After assuming [CO2] in the ocean (his [CO2]_ml) is small, King says:

        Returning our attention to (18) and taking differentials and dividing by [CO2]_ml it is easily shown that … [Eq. (24)]. Bold added, King, p. 6.

        In this step, the author divides by something he assumed to be negligibly small! This is an incredibly delicate step, and without justification it is unsound mathematics.

        The uptake factor expresses the increase in the concentration of total CO2 (i.e., DIC) in seawater corresponding to an increase in the concentration of CO2 (or partial pressure of CO2). See Fig 7.14 in the attached pages. King, p. 7.

        King seems to know pCO2 refers to the gas state. However, the attached pages are missing.

        It is known as the Revelle factor after Roger Revelle, who was among the first to point out the importance of this sensitivity for the oceanic uptake of anthropogenic CO2. King, p. 7.

        Just preceding this sentence, King derives the Revelle factor without regard to the species of CO2. King, ¶2.2.3-¶2.2.4, Eqs. (12) to (26), pp. 5-6. Now he arbitrarily attributes his derivation to ACO2. King is silent about why the Revelle factor might apply to 6 GtC/yr of ACO2 but not to 90 or 210 GtC/yr of natural CO2.

        Dr. King is an American physicist lecturing in climate to undergraduates in a school of engineering in the UK.

        FE: >>It does not depend on surface layer pH or ionization

        Simple proof that you are wrong: make a solution of soda or bicarbonate and add some acetic acid. Lots of CO2 bubbles up, because the amount of free CO2 in the solution gets far beyond Henry’s Law “normal” concentration.

        What you have created is the chemistry of a water bottle bomb, also known as a bubble bomb, among other names. The acid working on the baking soda produces salt water and carbon dioxide that wants to outgas. This can explode a closed household plastic bottle because the pCO2 generated will far exceed one atmosphere per Henry’s Law. Your experiment does NOT show that Henry’s coefficient depends on pH or ionization, as I denied. Henry’s coefficient was the same before and after your experiment, a tabulated, known value if only the bottle were in thermodynamic equilibrium.
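        The pressure claim can be checked with rough stoichiometry and the ideal gas law; the amounts of baking soda and headspace below are assumed purely for illustration:

```python
# Rough sketch of the "water bottle bomb": NaHCO3 + CH3COOH ->
# CH3COONa + H2O + CO2, i.e. one mole of CO2 per mole of baking soda.
# Ideal-gas pressure in a closed bottle; all amounts are assumed.

R = 0.08206              # L atm / (mol K), ideal gas constant
grams_nahco3 = 10.0      # assumed amount of baking soda fully reacted
mol_co2 = grams_nahco3 / 84.0   # NaHCO3 molar mass ~84 g/mol
headspace_L = 0.5        # assumed free gas volume in the bottle
T = 298.0                # K, room temperature

p_generated = mol_co2 * R * T / headspace_L
print(f"extra pressure from CO2: ~{p_generated:.1f} atm")
# Several atmospheres, far beyond the ~4e-4 atm ambient pCO2 -- which is
# why the bottle can burst, with no change at all in Henry's coefficient.
```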

        As your own authority, King, says

        α is the solubility of CO2, which is a function of temperature and salinity. King, p. 7.

        That is, Henry’s Coefficient is not known to be a function of pH or ionization. If it depends on either of these parameters, it is a third order effect, and a fourth order effect in Henry’s Law after pressure.

        FE: >>it does not accumulate in the atmosphere

        The increase in the atmosphere is measured; the human emissions are double that. But if you assume that some natural cause follows the human emissions at such an incredible fixed ratio, while the human emissions disappear somewhere without leaving a trace in the atmosphere, that is your opinion, not based on any observed facts.

        During the time that the atmospheric CO2 increased 15%, my dog’s breath got 30% more foul. That coincidence does not mean that half the stink in my dog’s breath had anything to do with the CO2 increase.

        I make no such assumption as you suggest. Your incredible fixed ratio does not exist anywhere in the real world. If it did, IPCC would have had evidence it was desperate to find for the MLO bulge to be manmade.

        Instead, IPCC manufactured other evidence to show that the increase in CO2 was caused by human emissions, using two methods other than yours. First, it tried to show that the increase in CO2 paralleled the decline in O2, which was supposed to indicate that fossil fuel combustion reduced O2 by the same amount as it increased atmospheric CO2. Second, IPCC tried to show that the isotopic lightening measured in the increase of CO2 paralleled the rate of emissions from fossil fuel, which was supposed to carry an extra-light isotopic ratio because ancient vegetation preferred 12C over 13C. See Fingerprints in SGW, by clicking on my name. IPCC’s claims are in this chart:

        http://www.rocketscientistsjournal.com/2010/03/_res/AR4_F2_3_CO2p138.jpg

        which is the product of chartjunk. If the right hand ordinates had been honestly drawn, the curves would have looked like these:

        http://www.rocketscientistsjournal.com/2010/03/sgw.html#III_

        and

        http://www.rocketscientistsjournal.com/2010/03/_res/CO2vO2.jpg

        IPCC’s visual correlation implied by parallel traces is incompetent on several levels. Rule 1: never rely on visual correlation. Rule 2: quantify results, here with a mass balance analysis, which IPCC never produced. Rule 3: even if the results tracked one another according to the mass balance analysis, that is merely correlation, which does not establish cause and effect any more than my dog’s breath did.

        Ferd, you need to do a mass balance analysis before you announce the discovery of an incredible fixed ratio, and then establish cause and effect by (1) showing that the MLO data are global, and (2) eliminating all possible natural causes. Ultimately, (3) you will need to do the same thing once again, because even if ACO2 were the cause of a global pCO2 increase, that doesn’t show that CO2 causes the observed global warming. For that, you will also have to establish (4) that the Sun was not the cause. Lots of luck.

        IPCC’s results are either incompetent or out-and-out fraud, and I’m sorry to say that I don’t think the authors were all that incompetent.

      • Jeff Glassman | September 12, 2011 at 7:34 pm |

        Jeff, thanks for the welcome. And sorry for the delay in reply. Again you have a long exposition of arguments. I will try to respond only on those points where I think there are problems…

        About pCO2(aq):
        First an explanation of how pCO2(aq) is observed: seawater is continuously sprayed into a small amount of air, so that the two rapidly come into equilibrium with each other. CO2 levels are measured (semi-)continuously in that air. That is deemed pCO2(aq). This method was and is used on (currently lots of commercial) sea ships and fixed stations worldwide.

        If pCO2 in the atmosphere is higher than pCO2(aq), then the CO2 flux is net from air to water, or the reverse if pCO2(atm) is lower. One can discuss the local and total fluxes (which depend on wind/mixing speed), but the direction anyway is fixed by the pCO2(aq-atm) difference.
        E.g. for the longer Bermuda series, pCO2(aq) is higher than pCO2(atm) in high summer; the rest of the year the Atlantic Ocean around Bermuda (and most of the Atlantic Gyre) is a net sink for CO2.

        About seasonal effects:
        If you look again to the BATS graph at
        http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
        (BTW that is composed by Bates, from BATS) you will see that there is a huge seasonal variability in all variables. Although difficult to see in the scale of the graphs, the seasonal trend is:

        spring to fall: decrease in pH, increase in pCO2, decrease in DIC
        fall to spring: increase in pH, decrease in pCO2, increase in DIC

        The decrease in DIC during summer months is caused by increased biolife, which uses bicarbonate for shells and CO2 for organics. But at the same time, CO2 is set free by the bicarbonate-to-carbonate shell reaction, which lowers the pH and increases pCO2.
        Here you see that pCO2 and DIC are decoupled by biolife, even to the effect that pCO2 increases and DIC decreases at the same time.
        Further, temperature increases pCO2(aq) by about 16 microatm/°C. If that leads to a sea-air flux, that will decrease DIC too, because the CO2 must come from somewhere. While pCO2(aq) remains high, as it is temperature dependent, bicarbonates and carbonates are far less so (as long as carbonates are not saturated), but they will supply CO2 to maintain free CO2 levels at the elevated temperature.

        Anyway, you shouldn’t underestimate the fluxes back and forth caused by the seasons from the mid-latitude oceans. Based on the d13C decline, which is about 1/3rd of what can be expected from fossil fuel use, the continuous CO2 exchange via the THC down/upwelling is about 40 GtC/year, thus leaving about 50 GtC/year back and forth from seasonal changes in ocean temperature.
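        The ~16 microatm/°C figure quoted above is consistent with the widely used empirical temperature dependence of seawater pCO2, d(ln pCO2)/dT ≈ 0.0423 per °C (Takahashi et al., 1993). A minimal sketch, with the reference pCO2 assumed at 380 microatm:

```python
import math

# Sketch of the isochemical temperature sensitivity of seawater pCO2,
# using the empirical Takahashi et al. (1993) relation
# d(ln pCO2)/dT ~ 0.0423 per degC. Reference values are assumed.

def pco2_at_temperature(pco2_ref, t_ref, t):
    """pCO2 change with temperature alone (DIC and TA held fixed)."""
    return pco2_ref * math.exp(0.0423 * (t - t_ref))

pco2_ref = 380.0  # microatm, assumed reference level
slope = pco2_at_temperature(pco2_ref, 20.0, 21.0) - pco2_ref
print(f"pCO2 rise per degC near {pco2_ref:.0f} microatm: {slope:.1f} microatm")
# ~16 microatm/degC, matching the figure used in the comment above.
```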

        About the contribution of humans:
        It only contributes about 3% to the total CO2 flux from land and ocean into the atmosphere, and about half that if you include the leaf water that IPCC introduced and dropped.
        You make the same logic error as many before you: the flux is a back-and-forth flux, where inputs and outputs are nearly equal. That doesn’t contribute to any change in atmospheric CO2 levels, as long as total influx and total outflux are equal. It doesn’t matter if the total influx is 100 or 1,000 or 10,000 GtC/yr; only the difference between the influx and outflux is important. And that is easily calculated:

        increase in the atmosphere = natural inputs + human input – natural outputs
        or
        increase in the atmosphere – human input = natural inputs – natural outputs
        or
        4 GtC/yr – 8 GtC/yr = natural inputs – natural outputs = – 4 GtC/yr

        Thus at this moment, and over the past 50 years, the natural outputs are/were 1-7 GtC/yr larger than the natural inputs, including natural variability, no matter what the real heights of the natural inputs and outputs were; see:
        http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg

        Thus we know the difference between total CO2 inputs and outputs quite exactly, including the noise. Both are smaller than the human emissions.
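        The arithmetic above can be condensed to a one-line check (round figures from the comment; the ±2 GtC/yr uncertainty is ignored here):

```python
# The mass-balance arithmetic from the comment, GtC/yr, round figures.

human_input = 8.0           # fossil fuel emissions
atm_increase = 4.0          # observed rise of CO2 in the atmosphere
natural_net = atm_increase - human_input   # = natural inputs - natural outputs

print(f"natural inputs - natural outputs = {natural_net:+.0f} GtC/yr")
# Negative: over the period, nature as a whole removed CO2, whatever the
# size of the individual gross fluxes (90, 1000, ... GtC/yr back and forth).
```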

        About free CO2 in water:
        Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-).

        Jeff, here you are completely wrong. Henry’s Law is only about the same species in air and water. In this case that is CO2 in air and CO2 in solution; not bicarbonate and carbonate. Those are different species, which have no intention to escape from the solution at all. Only CO2 in solution exchanges with CO2 in the atmosphere, which leads to a ratio between the two in steady state, depending on the temperature; that is what Henry’s Law is about. In fact that is what the pCO2(aq) observations show.

        Thus pCO2(aq) is a direct measure for [CO2] in the surface water, if the temperature is known, but that doesn’t tell us anything about DIC.
        That means your discussion of the Revelle factor is based on this erroneous assumption, and some more.

        The ratio between the observed change d[CO2]/[CO2] vs. dDIC/DIC indeed is the Revelle factor. For BATS, indeed a factor of about 10 in change rate.
        That will be further elaborated in the household (bi)carbonate experiment…

        The Revelle factor is applicable to all CO2, anthro or not, but the IPCC and most other people involved (myself included) assume that near all the increase in the atmosphere and thus surface waters is caused by the human emissions…

        So according to IPCC, the Revelle factor applies only after several tens of thousands of years.

        Sorry, but that is a misinterpretation. The dissociation reaction from CO2 to HCO3(-) and CO3(2-) is very fast (fractions of seconds to seconds). That has only an indirect connection with the precipitation out of solution by the formation of CaCO3/MgCO3 by coccoliths, which is a slow process (as an overall net process: much is formed, but much is dissolved again). That is what the IPCC is referring to; it has nothing to do with the Revelle factor, except that this process influences the ratio between the different carbon species and the pH.

        About the carbonate / acid experiment

        Your experiment does NOT show that Henry’s coefficient depends on pH or ionization as I denied. Henry’s coefficient was the same before and after your experiment, a tabulated, known value if only the bottle were in thermodynamic equilibrium.
        and
        That is, Henry’s Coefficient is not known to be a function of pH or ionization.
        or as expressed by King:
        α is the solubility of CO2, which is a function of temperature and salinity.

        Indeed the solubility of CO2 (that, again, is about free CO2 gas in the liquid) is a function of temperature and salinity, not of pH. But DIC is strongly influenced by pH. That is what the experiment shows. After you have added an acid, 95 to 99% of all carbonate and/or bicarbonate disappeared to the atmosphere as CO2; thus DIC was reduced by some 94 to 98% (depending on the strength of the acid), while the solubility of CO2 is still the same (after temperature re-equilibration), according to Henry’s Law.

        That simply shows that Henry’s Law is about the solubility of CO2 as gas in water and not about the rest of the carbon (as bicarbonate and carbonate) in solution.
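        The pH dependence described here can be sketched from the standard carbonate equilibria; the pK1/pK2 values below are rough assumed surface-seawater values (the exact constants vary with temperature and salinity):

```python
# Sketch of how DIC speciation shifts with pH, which is what the
# (bi)carbonate + acid experiment exploits. pK1 ~6.0 and pK2 ~9.1 are
# rough assumed seawater values; exact constants depend on T and salinity.

def dic_fractions(ph, pk1=6.0, pk2=9.1):
    """Fractions of DIC present as CO2(aq), HCO3(-), CO3(2-) at a given pH."""
    h = 10.0 ** (-ph)
    k1, k2 = 10.0 ** (-pk1), 10.0 ** (-pk2)
    denom = h * h + k1 * h + k1 * k2
    return h * h / denom, k1 * h / denom, k1 * k2 / denom

for ph in (8.1, 4.0):
    co2, hco3, co3 = dic_fractions(ph)
    print(f"pH {ph}: CO2(aq) {co2:.1%}, HCO3(-) {hco3:.1%}, CO3(2-) {co3:.1%}")
# Near pH 8.1 almost all DIC is ionized (roughly the 1:100:10 ratio);
# near pH 4 almost all DIC is free CO2, which can then outgas -- all
# without any change in Henry's coefficient itself.
```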

        About the mass balance

        As said before, humans emit 8 GtC/year nowadays. The increase in the atmosphere is 4 +/- 2 GtC/year. Because of the law of conservation of matter, and because no carbon species escapes to space, nature as a whole is a net sink for CO2. It is that simple. The real, net addition from nature to the atmosphere is zero, nada, nothing.
        No matter how large the contributions from oceans, vegetation, volcanoes,… within a year are, all natural sinks combined were larger over the past 50 years than all natural sources combined. Thus even without knowledge of any individual natural flow, we know that humans are the cause of the increase of CO2 in the past at least 50 years.
        All other observations add to this knowledge and all alternative explanations fail one or more observations.

        About the O2 balance and d13C balance

        Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline

        Not completely right: that is only right for pure carbon (C + O2 -> CO2), not for oil (CnH2n+2 + (3n+1)/2 O2 -> n CO2 + (n+1) H2O) and not for natural gas (CH4 + 2 O2 -> CO2 + 2 H2O).
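        The differing O2:CO2 ratios follow directly from the stoichiometry; oil is idealized below as decane (C10H22), an assumption for illustration only:

```python
# O2 consumed per CO2 produced, for complete combustion of the three
# fuel classes mentioned above. Oil is idealized as decane (C10H22).

def o2_per_co2(c_atoms, h_atoms):
    """Moles of O2 per mole of CO2 for complete combustion of CcHh."""
    o2 = c_atoms + h_atoms / 4.0   # each C -> CO2 needs 1 O2; each 4 H -> 2 H2O needs 1 O2
    return o2 / c_atoms

print(f"coal (pure C):  {o2_per_co2(1, 0):.2f}")    # 1.00
print(f"oil  (C10H22):  {o2_per_co2(10, 22):.2f}")  # 1.55
print(f"gas  (CH4):     {o2_per_co2(1, 4):.2f}")    # 2.00
```

So matching O2 decline against CO2 rise one-to-one only holds for pure carbon; oil and gas consume more O2 per molecule of CO2 produced.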

        The d13C level is decreasing by the addition of 13C-depleted fossil fuels, but the decrease is diluted by the deep ocean exchange with the atmosphere: what goes down into the deep is the current 13C/12C mix, but what is upwelling is the deep ocean mix, which is higher in d13C than the current atmosphere. Based on that difference, one can calculate the deep ocean – atmosphere exchanges:
        http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg

        Some more calculations with O2 and d13C, which show how much CO2 is sequestered by vegetation and how much by the oceans:
        http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf

        Finally…

        Ferd, you need to do a mass balance analysis before you announce the discovery of an incredible fixed ratio, and then establish cause and effect by (1) showing that the MLO data are global, and (2) eliminating all possible natural causes. Ultimately, (3) you will need to do the same thing once again, because even if ACO2 were the cause of a global pCO2 increase, that doesn’t show that CO2 causes the observed global warming. For that, you will also have to establish (4) that the Sun was not the cause. Lots of luck.

        (1) The MLO data are representative of 95% of the atmosphere, which is global enough I suppose. See the data for lots of stations, air flights, ships, buoys and nowadays satellite data, like here:
        http://www.esrl.noaa.gov/gmd/ccgg/iadv/
        Only in the first few hundred meters over land are CO2 levels far too variable to be of interest (except for people interested in individual fluxes).
        (2) I have calculated the mass balance, which shows that nature is a net sink for CO2. That effectively eliminates nature as a source.
        (3) That humans are the cause of the CO2 increase doesn’t say anything about its effect on temperature. I suppose that CO2 has some effect, but less than the minimum range of what the current climate models “project”.
        (4) I am pretty sure that current climate models underestimate the role of the sun.

        Points (1) and (2) are proven beyond doubt, but there can be a lot of discussion between “warmers” and “skeptics” about (3) and (4).

      • Ferdinand Engelbeen, 9/17/11, 11:44 am, CO2 Residence Time

        FE: About pCO2(aq) … .

        The method you describe is not unfamiliar to me, and it employs what is called an equilibrator. http://bloomcruise.blogspot.com/2011/07/sampling-co2-in-air-and-sea.html

        JG: Henry’s law is about the CO2 that dissolves, that is, about DIC, and the Law is not dependent on the decomposition of DIC as CO2(aq) + HCO3(-) + CO3(2-).

        to which you responded

        FE: Jeff, here you are completely wrong. Henry’s Law is only about the same species in air and water. In this case that is CO2 in air and CO2 in solution; not bicarbonate and carbonate. Those are different species, which have no intention to escape from the solution at all. Only CO2 in solution exchanges with CO2 in the atmosphere, which leads to a ratio between the two in steady state, depending on the temperature; that is what Henry’s Law is about. In fact that is what the pCO2(aq) observations show. [¶] Thus pCO2(aq) is a direct measure for [CO2] in the surface water, if the temperature is known, but that doesn’t tell us anything about DIC.

        1. So if I am not wrong about some things, but instead completely wrong, I must be wrong about the constitution of DIC.

        IPCC: The marine carbonate buffer system allows the ocean to take up CO2 far in excess of its potential uptake capacity based on solubility alone, and in doing so controls the pH of the ocean. This control is achieved by a series of reactions that transform carbon added as CO2 into HCO3(-) and CO3(2-). These three dissolved forms (collectively known as DIC) are found in the approximate ratio CO2:HCO3(-):CO3(2-) of 1:100:10 (Equation (7.1)). AR4 Box 7.3 Marine Carbon Chemistry and Ocean Acidification, p. 529.

        IPCC: Equation (7.3), relating the fractional change in seawater pCO2 to the fractional change in total DIC after re-equilibration: Revelle factor (or buffer factor) = (Δ[CO2]/[CO2])/(Δ[DIC]/[DIC]) (7.3). Citations deleted, AR4, ¶7.3.4.2, p. 531.

        So what I did was substitute CO2(aq) for CO2 on p. 529, or for [CO2] on p. 531, and I think this agrees fully with your last sentence. This is reinforced by IPCC’s text in which the left hand side (necessarily) of Eq. (7.3) is identified as the fractional change in seawater pCO2. Thus IPCC refers to [CO2] as the pCO2 in seawater, which is no different than CO2(aq) as I was using the term. Also David Archer, Contributing Author to AR4, Chapter 7, confirms,

        DA: For total CO2, the exchanging species is CO2(aq), which constitutes about 0.5% of the total dissolved CO2 concentration. Archer, D., Daily, seasonal, and interannual variability of sea surface carbon and nutrient concentration in the Equatorial Pacific ocean, p. 10 of 17.

        So my terminology is not wrong. However the Archer citation about CO2(aq) concentration needs to be read in context with his much earlier statement:

        DA: The pCO2 of sea water is determined by its alkalinity, CO2, temperature, and salinity through carbonate equilibrium chemistry. Archer, id., p. 7 of 17.

        that is, pCO2 is not measured. And IPCC’s reference to [DIC] is erroneous, a reference to the concentration of a concentration.

        2. Your view about Henry’s Law, and that carbonate and bicarbonate ions have no intention to escape from the solution at all, shows a lack of understanding of the physics involved in the distinctly different processes of dissolution and chemical kinetics, with their widely separated time scales. And of course, no one claims ions ever escape into the atmosphere.

        Those ions, created within a fraction of a microsecond when a pulse of CO2(aq) displaces equilibrium, left undisturbed, return to equilibrium in a matter of a few milliseconds. Mitchell, M.J., et al., A model of carbon dioxide dissolution and mineral carbonation kinetics, 12/11/09, p. 1274, Figure 1. The process should be much faster with agitation of the solution, but the states of dynamic equilibrium would be unrecognizable, difficult to estimate, and perhaps impractical to generalize.

        Dissolution on the other hand proceeds according to the gas transfer velocity, which does account for agitation by wind and surface layer turbulence (not equilibrium). Velocity measurements lie in the region of about 4 to 29 cm/hr, which is equivalent to a cubic meter of gas dissolving in about 3 to 25 hours. See Wanninkhof, id., p. 7374.
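        The equivalence claimed here is simple arithmetic: a transfer (piston) velocity of k cm/hr works through a 100 cm column of gas in 100/k hours. A quick check:

```python
# Check of the equivalence above: a gas transfer (piston) velocity of
# k cm/hr moves a 1 m (100 cm) column of gas in 100/k hours.

for k in (4.0, 29.0):   # cm/hr, the range quoted from Wanninkhof
    print(f"k = {k:>4} cm/hr -> ~{100.0 / k:.0f} h per metre of gas")
# Roughly 25 h and 3 h, matching the "about 3 to 25 hours" in the text.
```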

        Within the time scale of Henry’s Law, the surface layer can for all practical purposes instantaneously buffer CO2(aq), not to its equilibrium value, but to the limits of DIC, whether for uptake or outgassing. That is so even constraining the layer to the sluggish, hypothetical state of equilibrium. Similarly, Henry’s Law proceeds effectively instantaneously with respect to diurnal or longer processes involved in weather, climate change, or climate.

        3. I’ll ignore the fact that your method doesn’t measure alkalinity, along with your concerns about biolife and seasonal effects. What no one can ignore is that the input seawater is not in thermodynamic equilibrium. So its ratio of CO2:HCO3(-):CO3(2-) is unknown. The measuring method changes that inlet ratio by equilibrating the sample. You don’t know whether the open sea CO2(aq) is 0.5% as Archer says, or 0.1%, or 10%, for that matter. You only know that CO2(aq) in the open sea is different from the estimated CO2(aq); otherwise the equilibrator would be a waste of time and money.

        Of course, my critique would be wrong should the surface layer actually be in thermodynamic equilibrium. Thermodynamic equilibrium is a ludicrous assumption, even if explained to a science illiterate, and one of the fatal errors in IPCC’s AGW model. As it stands, arguing as you and IPCC do that pCO2(aq) exists as an alternate theory to Henry’s Law is a corollary to the same false assumption.

        4. On the matter of a partial pressure difference, you are in agreement with others when you say,

        FE: If pCO2 in the atmosphere is higher than pCO2(aq), then the CO2 flux is net from air to water, or the reverse if pCO2(atm) is lower. One can discuss the local and total fluxes (which depend on wind/mixing speed), but the direction anyway is fixed by the pCO2(aq-atm) difference.

        You give no references, but your notion about CO2 flux is consistent with these sources:

        IPCC: So long as atmospheric CO2 concentration is increasing there is net uptake of carbon by the ocean, driven by the atmosphere-ocean difference in partial pressure of CO2. Bold added, TAR, Ch. 3, Executive Summary, p. 185.

        IPCC: Estimates (4º x 5º) of sea-to-air flux of CO2, computed using 940,000 measurements of surface water pCO2 collected since 1956 and averaged monthly, together with NCEP/NCAR 41-year mean monthly wind speeds and a (10-m wind speed) dependence on the gas transfer rate (Wanninkhof, 1992). Footnote deleted, bold added, AR4, Figure 7.8 caption, p. 523.

        RW: Gas transfer of CO2 is sometimes expressed as a gas transfer coefficient K. The flux equals the gas transfer coefficient multiplied by the partial pressure difference between air and water:
        F = K(pCO2_w – pCO2_a) (A2)
        where K = kL and L is the solubility expressed in units of (concentration/pressure).
        Citations deleted, bold added, Wanninkhof, R., Relationship Between Wind Speed and Gas Exchange Over the Ocean, Appendix, p. 7379.

        Here pCO2_w is pCO2(aq) and pCO2_a is pCO2(atm).
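        Wanninkhof's flux form quoted above can be sketched numerically; the transfer coefficient K below is an assumed placeholder (in practice K is parameterized from wind speed), so only the signs, not the magnitudes, are meaningful:

```python
# Sketch of Wanninkhof's flux form F = K * (pCO2_w - pCO2_a).
# K is an assumed illustrative value; real values come from wind-speed
# parameterizations, so treat only the signs here as meaningful.

def co2_flux(pco2_water, pco2_air, K):
    """Net sea-to-air CO2 flux; positive = outgassing, negative = uptake."""
    return K * (pco2_water - pco2_air)

K = 0.06   # assumed transfer coefficient, mol m-2 yr-1 per microatm
print(f"{co2_flux(400.0, 390.0, K):+.2f}")   # water supersaturated -> positive (outgassing)
print(f"{co2_flux(370.0, 390.0, K):+.2f}")   # undersaturated -> negative (uptake)
```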

        But as you correctly state, what is measured

        FE: is deemed pCO2(aq). Bold added.

        It has to be deemed to be a partial pressure because it doesn’t exist. It can’t be measured. It can’t exert a pressure to resist pCO2(atm). As logical as it might seem to attribute a flux to a difference in pressure, the analogy is lost when one term is only deemed to exist.

        Besides, this theory that the flux depends on a partial pressure difference is not in accord with Henry’s Law. That Law explicitly depends on pCO2(atm), and says nothing about an alleged pCO2(aq). Henry’s Law does not make the mistake of giving reality to a parameter deemed to exist, especially by bootstrapping a parameter derived from the same law.

        5. Your reference to the isotopic ratio is imaginary, compounding IPCC’s fraud on this so-called fingerprint.

        FE: Based on the d13C decline, which is about 1/3rd of what can be expected from fossil fuel use … .

        I think when pinned down, you just make up data out of whole cloth. Prove me wrong with a reference. When IPCC tried to rely on δ13C, it had to manufacture fraudulent data. See SGW, Part III, Fingerprints, and especially Figure 27 (AR4, Figure 2.3, p. 138).

        6. You make unsupported claims about the physics of solubility and of the thermohaline circulation.

        FE: … the THC down/upwelling is about 40 GtC/year, thus leaving about 50 GtC/year back and forth from seasonal changes in ocean temperature.

        I claim the THC accounts for the 90 GtC/yr given by IPCC and others. I may have been the first to have the THC transport any amount of CO2. I think you just made up the 50/40 split. Prove me wrong by providing a reference.

        7. You disassemble huge carbon mass flows, scattering them into many small differences between large numbers. This is a conjecture essential to IPCC’s model in which natural forces are in balance, a model in which the alleged Revelle Factor magically buffers only manmade CO2 emissions. This conjecture makes man’s CO2 emissions appear far more significant than their insignificant 1.5% to 3% contribution. Your argument is

        FE: Anyway, you shouldn’t underestimate the fluxes back and forth caused by the seasons from the mid-latitude oceans. … You make the same logic error as many before you: the flux is a back-and-forth flux, where inputs and outputs are nearly equal.

        You, as well as others, have repeatedly tried to characterize the annual fluxes in the carbon cycle as distributed into a large number of small, nearly zero fluxes. It is, for example, the essence of Takahashi’s beautiful map of faux air-sea fluxes. I am willing to accept that model with respect to land-air fluxes, which are in fact distributed. But you have no rational physical model for the air-sea fluxes of about ± 91 GtC/yr. Your model fails physics.

        Prof. Robert Stewart, Texas A&M University, maintains a commendable, open source, online textbook, Oceanography in the 21st Century. He shows the surface ocean to atmosphere flux as +90/-92 GtC/yr. Part II, ch. 5, p. 2 of 4. He explains:

        Carbon dioxide dissolves into the ocean at high latitudes. CO2 is carried to the deep ocean by sinking currents, where it stays for hundreds of years. Eventually mixing brings the water back to the surface. The ocean emits carbon dioxide into the tropical atmosphere. This system of deep ocean currents is the marine physical pump for carbon. It helps pump carbon from the atmosphere into the sea for storage. Stewart, id., p. 3 of 4.

        His air-sea fluxes agree with IPCC, and they are not the sum of small increments. This CO2 flux is a massive river, with an input and an output, contradicting the distributed model. Stewart’s model supports mine in which the flux is controlled by the thermohaline circulation, also known as the MOC. However, because of the underlying poleward cooling currents, physics demands that the CO2 dissolving into the ocean occur over the entire surface, not just at high latitudes as Stewart states. It is at high latitudes that DIC is carried to the deep ocean, as Stewart does say.

        What you refer to as mid-latitude … back and forth flux and as seasonal changes in ocean temperature exist, but add to zero, and are quite negligible. They are also due in major part to the ocean gyres. These flux variations are second order effects that contribute nothing to the first order effect of the +90/-92 GtC/yr pair of air-sea fluxes. You still need to account for them.

        On another point, Stewart’s model is not yet mine because he says,

        I define the deep circulation as the circulation of mass. Of course, the mass circulation also carries heat, salt, oxygen, and other properties. Stewart, id., Ch. 13, Deep Circulation in the Ocean, p. 1 of 7.

        Stewart has omitted the major feature of the transport of CO2, and a major component of the THC/MOC mass. The total MOC flow is 31 Sv. AR4, Box 5.1, p. 397. That’s a potential to outgas over 530 PgC/yr if it were 100% efficient and warmed from 0ºC to 35ºC.

        8. You misunderstand what I wrote about the Revelle factor.

        JG: So according to IPCC, the Revelle factor applies only after several tens of thousands of years.

        FE: Sorry, but that is a misinterpretation.

        If you wanted to be accurate, you might have said that the Revelle statement was wrong. However, the error is IPCC’s, not mine. I summarized IPCC’s words literally and correctly.

        9. You have failed to grasp the error in your model for acid changing solubility.

        FE: But DIC is strongly influenced by pH. That is what the experiment shows. After you have added an acid, 95 to 99% of all carbonate and/or bicarbonate disappeared to the atmosphere as CO2, thus DIC was reduced by some 94 to 98% (depending on the strength of the acid), while the solubility of CO2 is still the same (after temperature re-equilibrium), according to Henry’s Law.

        You ignore the error I pointed out in your experiment to talk around it. You added baking soda and acid, and that is what produced the CO2. It was not released by a change in Henry’s coefficients as you claimed originally, and now try to rehabilitate.

        10. You next quote me accurately but out of context to change IPCC’s alleged combustion fingerprint for your own purposes. Here is the statement from me with your omissions in bold:

        JG: IPCC’s argument is that the decline in O2 matches the rise in CO2 and therefore the latter is from fossil fuel burning. Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline, so the traces should be drawn identically scaled in parts per million (1 ppm = 4.773 per meg (Scripps O2 Program)). SGW, III, Fingerprints.

        My complete statement is about IPCC’s argument. That’s why the sentence you saved says should, not “does”. Having distorted what I said, you say:

        FE: Not completely right: …

        What’s not right is your dishonest quotation. What’s not completely right is IPCC’s argument.

        11. You believe that MLO CO2 concentrations are global because they agree with other stations, not recognizing that investigators intentionally calibrate the other stations to agree with MLO.

        FE: The MLO data are representative for 95% of the atmosphere, which is global enough I suppose. See the data for lots of stations, air flights, ships, buoys and nowadays satellite data, like here: http://www.esrl.noaa.gov/gmd/ccgg/iadv/ .

        Your link to the CO2 measuring station network is a nice resource. It’s an update to the version blessed by IPCC. TAR, Figure 3.7, p. 212. MLO should indeed be representative, not because it IS representative, but because IPCC has seen fit to calibrate the network into agreement:

        The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically <1 ppm. Resolution of such a small signal (against a background of seasonal variations up to 15 ppm in the Northern Hemisphere) requires high quality atmospheric measurements, measurement protocols and calibration procedures within and between monitoring networks (Keeling et al., 1989; Conway et al., 1994). Bold added, TAR, ¶3.5.3, p. 211.

        And

        To aid in interpreting the interannual patterns seen in Figure 4, and in derived CO2 fluxes shown later, we identify quasi-periodic variability in atmospheric CO2 defined by time-intervals during which the seasonally adjusted CO2 concentration at Mauna Loa Observatory, Hawaii rose more rapidly than a long-term trend line proportional to industrial CO2 emissions. The Mauna Loa data and the trend line are shown in Figure 5 with vertical gray bars demarking the intervals of rapid rising CO2. Data from Mauna Loa Observatory were chosen for this identification because the measurements there are continuous and thus provide a more precisely determined rate of change than any other station in our observing program. Also, the rate observed there agrees closely with the global average rate estimated from the nearly pole to pole data of Figure 4 (plot not shown). Bold added, Keeling, et al., Exchanges of Atmospheric CO2 and 13CO2 with the Terrestrial Biosphere and Oceans from 1978 to 2000. I. Global Aspects, SIO Ref. 01-06, June, 2001.

        Your ESRL reference says under Measurement Details, Carbon Dioxide,

        Because detector response is non-linear in the range of atmospheric levels, ambient samples are bracketed during analysis by a set of reference standards used to calibrate detector response.

        That should be sufficient. To bracket: to place within. Thefreedictionary.com.

        MLO is representative of global CO2 measurements by necessity for the AGW conjecture, then by assumption, followed by so-called calibration procedures. You might note that IPCC has not made the calibration data available.

        12. Conclusion: Because your belief system, AGW, is a fiction, it must be supported by a seemingly unending string of fictions, and a concealment of reality.

        Ferd, you have been bamboozled, not once but many times just in this one post – • by the phantom of pCO2(aq) only deemed to exist, • by the unwarranted assumption, both overt and hidden, of thermodynamic equilibrium, • by the broken link between open ocean CO2 measurements and the ocean surface, • by IPCC’s fraudulent reliance on δ13C, • by an imaginary partial pressure difference substituting for ordinary solubility, • by reports of the existence of the failed Revelle factor, • by your own hypothetical experiment to add CO2 and confuse it with Henry’s Law outgassing, • by a network of CO2 stations calibrated into agreement.

        When you put aside the powerful evidence that both Earth’s climate and its climate change are determined by the Sun, and that surface temperatures in both Earth’s cold and warm states are regulated by albedo, to adopt the belief that man is the cause of climate change, you put yourself in the box of having to defend a raft of exceptions to physics and fudged data.

  11. Judith,
    You should have a good answer to Hal’s question. It’s a point of elementary physical chemistry, well set out by your Skeptical Science link:
    “Individual carbon dioxide molecules have a short life time of around 5 years in the atmosphere. However, when they leave the atmosphere, they’re simply swapping places with carbon dioxide in the ocean. The final amount of extra CO2 that remains in the atmosphere stays there on a time scale of centuries. “

    They could also have included exchange with the biosphere, which is significant on the 5 year scale. Isotope exchanges measure that individual molecule scale. What counts is the time taken for the CO2 excess to go.

    98% of the atoms in a human body are exchanged every year. Residence time is a matter of months. That’s what isotopes would measure. We stay around a lot longer.
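
The two timescales being distinguished here can be put into a minimal two-number sketch. All figures below are assumed round values for illustration (roughly the magnitudes quoted in this thread), not measurements:

```python
# Toy sketch: gross exchange sets the residence time of individual molecules;
# the much smaller net uptake rate sets the decay time of a bulk excess.
atmosphere_gtc = 750.0      # GtC in the atmosphere (rough round number)
gross_exchange_gtc = 150.0  # GtC/yr gross ocean + biosphere exchange (assumed)
net_uptake_rate = 0.01      # 1/yr net removal of an excess (assumed ~100-yr scale)

residence_time = atmosphere_gtc / gross_exchange_gtc  # years a typical molecule stays
adjustment_time = 1.0 / net_uptake_rate               # e-folding years for an excess to decay
print(residence_time, adjustment_time)  # 5.0 100.0
```

The same reservoir can thus show a ~5-year residence time and a ~100-year adjustment time at once; only the assumed net uptake rate, not the gross exchange, controls the latter.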

    • CO2 is a trace gas in a large atmosphere.
      Does Skeptical Science or anyone else mention how much CO2 is in the
      atmosphere?
      Wiki:
      “Carbon dioxide in earth’s atmosphere is considered a trace gas currently occurring at an average concentration of about 390 parts per million by volume or 591 parts per million by mass. The total mass of atmospheric carbon dioxide is 3.16×10^15 kg (about 3,000 gigatonnes).”

      So various natural processes are emitting hundreds of gigatonnes; plants, even though they consume hundreds of gigatonnes, also emit about 100 gigatonnes per year. The largest absorber of CO2 is the weathering of rocks by rainfall, i.e. CO2 and limestone. The dissolved minerals end up in the ocean, where they are used in the biological processes of all life and also to make shells, which can deposit on the ocean floor and after millions of years can form sedimentary rock [such as limestone], which can end up back on the land again and be dissolved by CO2. Rinse and repeat.
      The ocean also absorbs and emits CO2.
      The total amount of CO2 processed per year from biological activity, weathering, and warmer ocean water emitting CO2 while cooler ocean water absorbs CO2 is unknown, though there are rough estimates. But in total it’s on the order of 1000 gigatonnes per year. And tens of gigatonnes are emitted due to human activity per year. So you have 1000 gigatonnes added to a 3,000-gigatonne atmosphere and 1000 gigatonnes removed each year. So roughly, 1/4 of the CO2 emitted by human activity is absorbed each year. The amount of total global CO2 increase per year is about 1/2 the amount of human emission per year [roughly].
      So you have two different answers: about 1/2 of human emission “looks” like it’s being absorbed per year. And the second answer comes from looking at the huge pot of CO2 to which human emission is added and how much of the total CO2 is recycled each year: roughly 1/4 of all the CO2 in the atmosphere is turned over yearly.
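
The turnover arithmetic in the comment can be checked directly from its own round numbers (~3,000 Gt CO2 in the atmosphere, ~1,000 Gt/yr gross flux, both rough estimates); on these figures the yearly turnover comes out nearer a third than a quarter of the pool:

```python
# Turnover implied by the comment's round numbers (both are rough estimates,
# not precise data).
atmosphere_gt = 3000.0   # Gt CO2 in the atmosphere
gross_flux_gt = 1000.0   # Gt CO2/yr cycled in and out of the atmosphere

turnover_per_year = gross_flux_gt / atmosphere_gt   # fraction of pool exchanged yearly
residence_years = atmosphere_gt / gross_flux_gt     # mean residence time of a molecule
print(turnover_per_year)  # ~0.33 of the pool per year
print(residence_years)    # ~3 years
```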

    • Stop eating for a year, and there wouldn’t be much of you left. You have to take in new mass, and without that new mass to exchange the old mass with in the first place, there would be no turnover; only a slow leaching into decay.

      The same goes for CO2. If a molecule leaves the atmosphere its mass has to be replaced, and where does a replacement come from? The ocean? Then how do you get a rise or decline of bulk CO2 in the first place if exchange is always 1 to 1? That’s the problem. If the residence time of a CO2 molecule is only 5 years (that is the kinetic rate of the sinks), then the response rate of the bulk CO2 amount is also 5 years, as any imbalance in input and output from the atmosphere will change the bulk no slower than the slowest kinetic rate of source or sink. Realize too that this rate can be affected by concentration: the more CO2 is in the atmosphere, the shorter the residence time could become, or put another way, the faster the kinetic rate for uptake by sinks. This depends on the rate law that governs the CO2 equilibria, be it zero order, first order, second order or so forth. Some of these rate laws, like zero order, are independent of concentration and fixed, while others, like first and second order, are concentration-dependent and variable. Which is CO2?

      So, to show that elevated CO2 levels stay elevated, you have to show the kinetic rate and equilibrium constants for all the sources/sinks, and why additional CO2 would suddenly change those rates as would be necessary to say elevated CO2 can last centuries when residence time is 5 years.

      I fear there is a huge lack of understanding for kinetics in these debates, and just what equilibrium actually means.
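
If the sinks really were first order in the excess with a 5-year e-folding time, as this comment argues (an assumption under dispute in the thread, not established data), an elevated pulse would decay as follows:

```python
import math

# First-order decay of an excess with an assumed 5-year e-folding time.
k = 1.0 / 5.0      # 1/yr, assumed rate constant (the comment's 5-year figure)
excess0 = 100.0    # ppm above equilibrium, purely illustrative

def excess_after(t_years):
    # exponential relaxation of the excess toward equilibrium
    return excess0 * math.exp(-k * t_years)

print(round(excess_after(5), 1))   # after one e-folding time: ~36.8 ppm left
print(round(excess_after(25), 2))  # after five e-folding times: ~0.67 ppm left
```

Whether centuries-long persistence follows depends entirely on whether the net sink really scales this way with the excess; that is exactly the point the thread disagrees on.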

    • The size of a mixed-layer (upper 50-100m) ocean CO2 reservoir is approximately equal to that of the entire atmosphere, so we can expect the rate of increase in atmospheric CO2 to be half that of emission. And incidentally, that’s what happens. Approximately half of 14 Gt of CO2 is going somewhere. If it goes into the mixed-layer ocean, it means that the time for the mixed layer to equilibrate with the atmosphere is on the order of weeks to months. How can they claim then that individual molecule residence time is 5 years? Are they nuts?

      • “The size of a mixed-layer (upper 50-100m) ocean CO2 reservoir is approximately equal to that of the entire atmosphere, so we can expect the rate of increase in atmospheric co2 to be half of that of emission.”
        STOP DAMN IT STOP.

        The 14CO2 from the atmospheric H-bomb tests had a t1/2 of 15 years and a t1/4 of 30 years. We know this to be true. It is a fact.
        Now, if the ocean CO2 reservoir were the same size as the atmospheric CO2 reservoir, then it could NEVER drop below half max. If the ocean CO2 reservoir were 10 times bigger than the atmospheric CO2 reservoir, then the 14CO2 end point would be 10% higher than before the bomb tests.
        The end point for 14CO2 is within the noise, 1-2%, of the starting point.
        We therefore KNOW that the ocean CO2 reservoir, which is in exchange with the atmosphere, is >30 times greater than the atmospheric CO2 reservoir.
        This is true. This is not speculation. This is not pulling figures out of my ass. This is reality. This is basic dilution. The 14CO2 spike from the 1950s until 1965 increased the 14C background by a factor of 10. 45 years later (three t1/2’s), the 14CO2 is back to the background level it was at before. So all the 14CO2 generated by the bomb tests has been diluted.
        The dilution due to the increase in atmospheric CO2, from 320 ppm to 390 ppm, explains only 18% of the disappearance; ocean buffering and biotic uptake have taken the rest.
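
The bomb-spike arithmetic above can be checked numerically. The 15-year half-life and the 320 → 390 ppm rise are the comment's own figures; nothing else is assumed:

```python
import math  # not strictly needed here, kept for clarity of intent

# Three half-lives of a 15-year decay leave 1/8 of the spike.
t_half = 15.0          # years (the comment's figure for the 14CO2 decline)
elapsed = 45.0         # years from the mid-1960s peak to "now"
remaining = 0.5 ** (elapsed / t_half)
print(remaining)       # 0.125 -> 1/8 of the original spike

# Dilution attributable to the CO2 rise alone (320 -> 390 ppm, as cited above).
dilution_from_growth = 1.0 - 320.0 / 390.0
print(round(dilution_from_growth, 3))  # ~0.179, the ~18% figure in the comment
```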

    • She does have a good answer to Hal’s question, and she knows it, and it is exactly the source you noted. But she doesn’t like the right answer, so she presents it with a lot of absolutely wrong, uninformed and willfully misinformed gobbledygook, to dredge up page hits by pretending that propaganda from oil- and coal-financed doubt merchants are legitimate scientists or have anything to add to scientific knowledge. In fact, the likes of Hal Doiron and Jack Schmitt do nothing but subtract from the sum of human knowledge, and by lending the credibility of her academic stature to their ilk, Curry aids and abets their fraud.

  12. The Earth, mainly the oceans, has been absorbing about 50% of human emissions in recent decades, so doesn’t it follow from this that if human emissions were to halve, and the 50% rate were maintained, CO2 concentrations would stabilise?

    And in the very unlikely event that they ceased completely, doesn’t it also follow that CO2 concentrations would fall at approximately the same rate that they have risen?

    • “so doesn’t it follow from this that if human emissions were to halve, and the 50% rate were maintained, CO2 concentrations would stabilise?”
      No, if the airborne fraction remains at 50%, CO2 concentrations would rise at 50% of the former rate.

      “doesn’t it also follow that CO2 concentrations would fall at approximately the same rate that they have risen?”
      No, on that logic they would then stabilise. The rate at which they rose was determined by the rate at which we mined and burnt the carbon. That won’t be reflected in any natural redistribution process. Excess CO2 would be absorbed by the sea, but on a different and much longer timescale.
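
The constant-airborne-fraction arithmetic behind this reply can be made explicit. The 4 ppm/yr emissions-equivalent is the rough figure used elsewhere in the thread, assumed here for illustration:

```python
# Constant-airborne-fraction arithmetic: halving emissions halves the growth
# rate of concentration; it does not freeze the concentration.
emissions_ppmv = 4.0      # ppmv/yr equivalent of current emissions (rough)
airborne_fraction = 0.5   # fraction of emissions that stays airborne (assumed constant)

growth_now = airborne_fraction * emissions_ppmv            # ppmv/yr today
growth_after_halving = airborne_fraction * emissions_ppmv / 2.0
print(growth_now, growth_after_halving)  # 2.0 1.0
```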

      • Why does CO2 at Mauna Loa fluctuate by as much as 2.3ppm in 7 days, and quite regularly by 1ppm or more?

        Why would that be a sign of man-made CO2?

      • How about 2ppm over 1 day?

        http://cdiac.ornl.gov/ftp/trends/co2/Jubany_2009_Daily.txt

        Isn’t this supposed to be a well mixed gas?

      • Nick and tempterrain

        The answer to tempterrain’s question is: we don’t know.

        IF (the BIG word) the CO2 half-life in our climate system is really 120 years (upper end of range suggested by Zeke Hausfather at a recent Yale climate forum), this means that the annual decay rate (at the beginning of the decay curve) is 0.58% of the atmospheric concentration, or around 2.2 ppmv today.

        This represents a bit less than half of the CO2 emitted by humans today.

        This tells me that IF (there’s that BIG word again) the increase in atmospheric CO2 is caused primarily by human emissions, and IF we were to reduce these emissions to around half what they are today, the net in and out would be in balance, and the CO2 concentration would stabilize.

        But, hey guys, there’s a lot of IF there.

        IF Professor Salby is right, it all has very little to do with human emissions.

        More IF.

        Max
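
Max's figures check out arithmetically. A quick verification, using the 120-year half-life and ~390 ppmv quoted above (both are his inputs, not independent data):

```python
import math

# Initial decay rate implied by a 120-year half-life.
t_half = 120.0
decay_rate = math.log(2) / t_half   # fraction of the excess removed per year, initially
ppm_removed = decay_rate * 390.0    # applied at today's ~390 ppmv

print(round(100 * decay_rate, 2))   # ~0.58 %/yr
print(round(ppm_removed, 2))        # ~2.25 ppmv/yr, i.e. "around 2.2 ppmv"
```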

      • Nick,

        No. If human CO2 emissions were reduced by 50% the Earth wouldn’t actually “know” that’s what had happened. Unless you believe in Gaia that is! It’s just coincidental that, at present atmospheric concentrations of CO2, the natural absorption rate is approximately half of emissions. Therefore, if emissions are halved the concentration of CO2 should stay constant.

        By the same logic, neither would the Earth be “aware” that CO2 emissions had stopped completely. In the unlikely event of that happening, CO2 concentrations would start to fall at approximately the same rate as they would have otherwise risen.

      • No it’s simpler than that – essentially Henry’s Law. The Earth “knows” we’re emitting because pCO2 goes up. The top layer of the sea in response takes in CO2 to reach an equilibrium concentration. Then CO2 is transported downward, more absorbed at the surface, and so on. All driven by our addition of CO2. If we add half as much, the driving pCO2 is less, and so are the fluxes.

        Henry’s Law is for equilibrium, and overall this isn’t equilibrium. But the partitioning has worked out to be fairly stable at 50% of added CO2, and that would be the starting point for estimating the effect of an emission reduction.

      • Nick,

        Any argument that relies on the Earth “knowing” anything is a bit suss, IMO.
        If you aren’t convinced by my few lines of argument, maybe you should take a look at:

        http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-ts.pdf

        I’m not saying anything different to what the IPCC have already said. They make the point that to reduce CO2 concentrations, human CO2 emissions have to be reduced by more than half.

  13. Does the residence time even make a difference to anything? Is a CO2 molecule from a fossil fuel substantially different in behavior from a CO2 molecule from a breathing human or animal, or a CO2 molecule from a non-fossil fuel?

    Surely what matters is the quantity of CO2 produced by all means vs. the quantity of CO2 consumed by all means and the quantity of CO2 sequestered in the oceans. There are multiple factors, and no single “forcing” will necessarily dominate unless the mechanisms to consume and sequester CO2 are overwhelmed.

  14. Norm Kalmanovitch

    The question that needs to be asked is how much of the annual increase of 2ppmv/year is due to humans. There is no published literature that demonstrates conclusively that this is any more than 5% with at least the other 95% being naturally sourced.
    http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
    The seasonal variation in atmospheric CO2 concentration is in the order of 6ppmv over the course of a year and since this is entirely natural and three times the year to year increase of 2ppmv it seems reasonable that using 95% for the amount of increase in CO2 being naturally sourced is likely valid.
    If this is the case then the annual contribution from humans would be no more than 5% of 2ppmv/year or just 0.1ppmv/year.
    Over the past ten years there has been no detectable increase in global temperature on all five global temperature datasets (NCDC, HadCRUT3, GISS, RSS MSU, and UAH MSU) in spite of a 20ppmv increase in atmospheric CO2, so it is highly unlikely that 200 years of human emissions of 0.1ppmv/year producing the same 20ppmv increase in atmospheric CO2 will have any detectable temperature effect.
    One must remember that the global temperature is an absolute temperature value and not the temperature anomaly value that is used in climate discussions. The IPCC 2001 Third Assessment Report stated that the global temperature increase was estimated at 0.6°C +/- 0.2°C per century. This is only 0.006°C/year.
    The absolute global temperature from NCDC plotted by Junk Science
    http://junksciencearchive.com/MSU_Temps/NCDCabs.html
    shows that the annual seasonal variation in global temperature is on the order of 3.9°C/year, or 650 times greater than the year-to-year increase of just 0.006°C/year.
    Since this seasonal variation is entirely natural and due to the seasonal effect from the significantly larger Northern Hemisphere Landmass, it would only take a change of 1/650 in the completely natural seasonal variation to account for the entire annual change of 0.006°C without invoking any change to the greenhouse effect from the annual 0.1ppmv increase in CO2 from fossil fuel emissions.
    To carry the argument one step further, the increase in atmospheric CO2 concentration from 337ppmv in 1979 to the 390ppmv concentration today should, according to the CO2 forcing parameter of the IPCC climate models, cause a reduction in OLR of precisely 0.782 Watts/m^2, or just 0.025 Watts/m^2/year.
    The measurement of OLR (www.climate4you.com under the heading “global temperatures”, titled “Outgoing longwave radiation Global”) shows that the annual seasonal variation in OLR is on the order of 10 Watts/m^2, which is 400 times what the IPCC states would be the effect from the 2ppmv increase per year and 8000 times greater than the portion attributed to the 0.1ppmv/year human contribution.
    What makes the whole thing totally ridiculous is that, even in spite of the mostly natural 2ppmv/year increase in CO2 and the 57.1% increase in CO2 emissions over the past 31 years, there is no detectable decrease in OLR; in fact the OLR has increased over these 31 years, proving conclusively that there has been absolutely zero enhanced greenhouse effect from CO2 increases, human-sourced or otherwise.
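
The two ratio claims in the comment above can be recomputed from its own inputs (the inputs themselves are the commenter's figures and are not independently verified here):

```python
# Ratio of the claimed seasonal temperature swing to the annual trend.
seasonal_temp = 3.9        # °C, claimed seasonal swing in absolute global temperature
annual_trend = 0.006       # °C/yr, from 0.6 °C per century
temp_ratio = seasonal_temp / annual_trend
print(round(temp_ratio))   # 650

# Ratio of the claimed seasonal OLR swing to the claimed annual CO2 effect.
seasonal_olr = 10.0        # W/m^2, claimed seasonal swing in OLR
annual_co2_effect = 0.025  # W/m^2/yr, claimed model-implied OLR reduction
olr_ratio = seasonal_olr / annual_co2_effect
print(round(olr_ratio))    # 400
```

The arithmetic is internally consistent; whether the input figures and the inference drawn from them are sound is the contested part.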

    • Man is responsible for 4 ppm/yr based on Gt of fossil fuel burning, so it looks quite easy to conclude who is responsible for the 2 ppm/yr.

      • Jim D

        A bit too “easy”, I’d say. [For an alternate “conclusion”, check the suggestion by Murry Salby.]

        Max

      • Max,

        You’re making the same mistake as Nick Stokes in thinking the Earth “knows” that humanity is responsible for a 4ppmv/yr contribution, and has somehow decided to help us out by absorbing 2ppmv/yr!

        The absorption rate is determined by pCO2, as Nick rightly says. That’s just another way of saying that the natural absorption rate is proportional to atmospheric CO2 concentrations and not to the rate of human emissions.

        Concentrations would not change that much, in the course of one year, if human CO2 emissions were halved. The natural absorption rate would stay approximately constant, and so would CO2 concentrations. As Jim D says, “it looks quite easy to conclude who is responsible for the 2 ppm/yr.”

        There are really no prizes for getting the answer right to that one!

      • tempterrain,

        If humans add 4 ppmv/year and one measures an increase of 2 ppmv/year, then nature is a net sink of 2 ppmv/year. Thus humans are fully responsible for the increase. If humans were to reduce their emissions to 2 ppmv/year, then we would see stable levels in the atmosphere, but until now humans have emitted twice the amount measured as increase over the past 150 years, with a slightly exponential increase per year over the years. That is the reason that there is no leveling off of the increase in the atmosphere, and that the sinks also follow the emissions at a near-constant rate.
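
The mass-balance argument above reduces to one line of arithmetic (values in ppmv/yr, the rough figures from the comment):

```python
# Mass balance: natural net flux = observed rise - human emissions.
human_emissions = 4.0    # ppmv/yr added by humans (rough)
observed_rise = 2.0      # ppmv/yr measured increase (rough)
natural_net_flux = observed_rise - human_emissions
print(natural_net_flux)  # -2.0 -> nature is a net *sink* of ~2 ppmv/yr
```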

      • The question that arises is why the sink efficiency increased by a factor of 3, e.g. Sarmiento 2010:

        Except for interannual variability, the net land carbon sink appears to have been relatively constant at a mean value of −0.27 PgC yr−1 between 1960 and 1988, at which time it increased abruptly by −0.88 (−0.77 to −1.04) PgC yr−1 to a new relatively constant mean of −1.15 PgC yr−1 between 1989 and 2003/7 (the sign convention is negative out of the atmosphere). This result is detectable at the 99% level using a t-test.

      • Mr. Englebeen,

        how long do you think we will be able to continue putting 4ppm into the atmosphere, even without moronic politicians interfering in drilling and usage?

        Would you estimate even 1000ppm total? I personally would be very happy if we could get the CO2 level on earth to 1000ppm. Unfortunately the capacity of the earth to use CO2 is not going to allow it, unless you have suggestions for how we can economically process CaCO3 or other compounds to release the CO2?

      • kuhnkat, with the newest drilling techniques (horizontal shale fracking), gas reserves increased by many decades and oil is going in the same direction. Thus it looks like there is still cheap energy available for the foreseeable future, including for expanding countries like China, India, Brazil,…

        Thus as long as the emissions are slightly exponentially increasing, I expect that the airborne fraction in the atmosphere remains a fixed percentage of the emissions, thus also slightly exponentially increasing. If the year by year emissions don’t increase, we may expect a fixed level in the atmosphere somewhere in the future (when sinks equal emissions). If we halve our emissions (not very likely) at the current sink rate of 4 GtC/year, then the increase in the atmosphere would be zero, etc…

      • kuhnkat, with the newest drilling techniques (horizontal shale fracking), gas reserves increased by many decades and oil is going in the same direction. Thus it looks like there is still cheap energy available for the foreseeable future, including for expanding countries like China, India, Brazil,…

        The alternative natural gas deposits show large initial production rates but deplete rapidly. Oil has the same problem; Bakken deposits have a short production life. And where we do find large deposits, such as the tar sands, these require a significant amount of natural gas to achieve an energy return on investment. Hard oil shale is the worst.

        The cheap and plentiful fossil fuel energy is still coal, and it is getting dirtier with time as we continue to access lower grades of coal.

      • A good example of lower grade of coal is lignite, which is classified somewhere between ancient peat moss and carbonaceous mud.
        When the future is exploiting lignite, we know prospects are bleak:
        http://arkansasnews.com/2011/05/16/lawmaker-sees-promise-in-lignite-as-alternative-fuel/

      • Mr. Englebeen,

        glad to hear you are not a peak oil fan!! 8>)

      • Webby,

        “When the future is exploiting lignite, we know prospects are bleak:”

        I hear ya. The most successful renewable energy user in Europe counts burning wood chips as renewable and produces more useable energy from them than their enormous wind capacity!!!

      • Norm Kalmanovitch

        If you check the CO2 emissions data you will see that because of the skyrocketing oil price, CO2 emissions decreased from 1979 to 1980, and again from 1980 to 1981, and again from 1981 to 1982.
        If you check the CO2 data from Mauna Loa Observatory, this had no effect on the year-to-year increase in atmospheric CO2 concentration.
        If a decrease in CO2 emissions from fossil fuels three years running does not produce even a deflection in the atmospheric CO2 concentration curve, it must be concluded that the observed increase is definitely not primarily from CO2 emissions from fossil fuels.
        Oceans are a very large depository for CO2, containing far more CO2 than the atmosphere.
        The Argo buoys deployed in 2003 show a slight overall cooling in the sea surface but an overall increase in the heat content of the oceans.
        Oceans are saturated in CO2, and this saturation is controlled by temperature and pressure, with the deep ocean containing virtually all of the CO2 because of the high pressures. The overall increase in heat content of the oceans leads to increased outgassing of CO2 as the saturation point gets lowered by the increased heat. This is the primary source for the increase in atmospheric CO2 concentration.
        A close look at the infamous Al Gore demonstration of the 650,000 years of ice core data showing warming and cooling cycles and changes in CO2 will show that temperature leads CO2 by about 800 years. This is because it takes time to heat the oceans, which then outgas CO2, creating this 800-year lag.
        There is plenty of physical evidence to demonstrate this; do you have any actual physical evidence to demonstrate your conjecture that “Man is responsible for 4 ppm/yr based on Gt of fossil fuel burning”?

      • 25 Gt CO2 annually emitted from fossil fuels is nearly 1% of what is in the atmosphere, hence 4 ppm/yr. 6 Gt CO2 from the US alone. These are known numbers you can find anywhere for yourself.
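
A rough unit check of this conversion, taking the ~3,000 Gt CO2 atmospheric total quoted earlier in the thread at ~390 ppmv (both figures rough; the result depends on which emissions estimate is used):

```python
# Gt CO2 per ppmv implied by the thread's own round numbers.
atmos_mass_gt = 3000.0     # Gt CO2 in the atmosphere (Wikipedia figure quoted above)
atmos_ppmv = 390.0
gt_per_ppmv = atmos_mass_gt / atmos_ppmv   # ~7.7 Gt CO2 per ppmv

annual_emissions_gt = 25.0                 # the figure used in this comment
emissions_in_ppmv = annual_emissions_gt / gt_per_ppmv
print(round(gt_per_ppmv, 2))      # 7.69
print(round(emissions_in_ppmv, 2))  # 3.25 ppmv/yr on these inputs
```

With the ~33 Gt 2010 emissions figure quoted later in the thread, the same conversion gives ~4.3 ppmv/yr, which is where the "4 ppm/yr" round number comes from.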

      • Norm Kalmanovitch

        The atmosphere contains approximately 2800 Gt of CO2, and each year approximately 750 Gt is added and approximately 750 Gt of CO2 is removed. The annual difference between what is added and what is removed results in an increase or decrease in the year-to-year concentration. The question is whether changes to this 750 Gt from changes in the 33.158 Gt (2010) human portion of it are primarily responsible for the observed near-linear average increase in CO2 concentration of just over 2ppmv/year for the past dozen years. If your contention is correct, then year-to-year changes in CO2 emissions should be seen as year-to-year changes in CO2 concentration. The devil is in the details, so I have listed the details that cover the data for the past five years.
        Current levels of CO2 emissions from fossil fuels for the past five years are: 33.158 Gt (2010), 31.3387 Gt (2009), 31.915 Gt (2008), 31.641 Gt (2007), 30.667 Gt (2006) and 29.826 Gt (2005). (BP Statistical Review 2011)
        http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
        Shows that the annual average concentration for these years has been:
        389.78ppmv (2010) 387.36ppmv (2009) 385.57ppmv (2008) 383.72ppmv (2007) 381.86ppmv (2006) and 379.78ppmv (2005).
        If your conjecture that fossil fuel emissions are the dominant source of the increase in atmospheric CO2 concentration is correct, then year-to-year changes in CO2 emissions should match precisely the year-to-year changes in CO2 concentration.
        From 2009 to 2010 emissions increased by 1.820 Gt and CO2 increased by 2.42ppmv.
        From 2008 to 2009 emissions decreased by 0.577 Gt but CO2 increased by 1.79ppmv.
        From 2007 to 2008 emissions increased by 0.274 Gt and CO2 increased by 1.85ppmv.
        From 2006 to 2007 emissions increased by 0.974 Gt and CO2 increased by 1.86ppmv.
        From 2005 to 2006 emissions increased by 0.841 Gt and CO2 increased by 2.08ppmv.
        The last two values show that a smaller increase in emissions of 0.841 Gt produced a larger increase of 2.08ppmv than the greater increase in emissions of 0.974 Gt which only produced a 1.86ppmv increase in atmospheric CO2 concentration. This would not happen if CO2 emissions from fossil fuels were the dominant source for observed increases in atmospheric CO2 concentration.
        Far more obvious is the demonstration that the 1.86ppmv increase in concentration that accompanied the 0.974 Gt increase in CO2 emissions from fossil fuels is very similar to the 1.79ppmv increase in CO2 concentration from 2008 to 2009, even though during that period there was a year-to-year reduction in CO2 emissions from fossil fuels of 0.577 Gt!!
        If CO2 emissions from fossil fuel were in fact the prime source of the increase in atmospheric CO2 concentration, a decrease in emissions would cause a proportionate decrease in CO2 concentration, and since this is not the case it is an absolute certainty that some other source within the 750 Gt annual addition of CO2 to the atmosphere is the primary source of the observed average 2.046ppmv increase in atmospheric CO2 concentration over the past six years.
        Don’t make statements that you can’t back up with hard physical evidence!

      • If your conjecture that fossil fuel emissions are the dominant source of the increase in atmospheric CO2 concentration is correct, then year-to-year changes in CO2 emissions should match precisely the year-to-year changes in CO2 concentration.

        I don’t think you understand how response functions work. To first-order you have to convolve the emissions amount with the CO2 impulse response function. This will smooth out the atmospheric concentration much like a signal processing filter works.
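        To make the convolution point concrete, here is a minimal sketch. The single-exponential impulse response and the 100-year time constant are illustrative assumptions (the real response is a sum of terms with different timescales, as discussed later in this thread), but they show why a one-year dip in emissions need not produce a dip in concentration:

```python
import math

# Illustrative single-exponential impulse response: fraction of a 1 GtC
# pulse still airborne t years after emission (toy 100-year constant).
def co2_response(t, tau=100.0):
    return math.exp(-t / tau)

# Airborne anomaly (GtC) = discrete convolution of annual emissions
# with the impulse response.
def concentration_anomaly(emissions, tau=100.0):
    return [sum(emissions[k] * co2_response(n - k, tau)
                for k in range(n + 1))
            for n in range(len(emissions))]

# Emissions with a one-year dip (as in 2009): the cumulative anomaly
# keeps rising right through the dip, because it integrates the whole
# emissions history rather than tracking the current year.
emissions = [9.0] * 5 + [8.4] + [9.0] * 4   # GtC/yr, dip in year 5
anomaly = concentration_anomaly(emissions)
print(all(b > a for a, b in zip(anomaly, anomaly[1:])))  # → True
```

        The concentration anomaly rises monotonically even across the emissions dip, which is exactly the smoothing-filter behavior described above.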

        Don’t make statements that you can’t back up with hard physical evidence!

        That is not necessarily the way that research works. A theorist can spend lots of time coming up with a great explanation for some behavior. Since he may be talented at that aspect but perhaps not at running experiments, the researchers who are better at experimental work take on the challenge of proving or disproving the theorist’s assertions.
        More importantly, I would add that one shouldn’t make claims without having a basic understanding of the physics.

      • Norm Kalmanovitch

        There is a seasonal variation in atmospheric CO2 concentration of approximately 6 ppmv over the course of a year. This is easily seen in the detailed measurements from MLO http://www.esrl.noaa.gov/gmd/ccgg/trends/#mlo
        If you look closely at this graph or the actual monthly data you will see that the variation is not a smooth sinusoid, demonstrating that MLO can pick up changes over time periods as short as a month, so year-to-year changes are perfectly represented.
        Simply put, this means that if CO2 emissions from fossil fuels change from year to year, there should be at least some detectable change in the year-to-year CO2 concentration; since this is not the case, changes in something much larger than the CO2 emissions from fossil fuels must be the primary source for the observed increase.
        If you average the data and smooth out the variations, all you do is remove the evidence that CO2 emissions are not the prime contributor.
        By the way, there is a three-year period from 1979 to 1982 during which there were consecutive year-to-year decreases in CO2 emissions from fossil fuels with no visible change in the rate of increase of atmospheric CO2 concentration, proving once again that CO2 emissions are not the prime source of the observed increase.
        If you want more proof, the rate of increase in CO2 emissions changed slope around the year 2000, with the previous ten years having a significantly lower slope than the later ten years. If you take the slope of the first ten years and compare it to the increase in CO2 concentration, you should be able to calculate a rate of increase in concentration per Gt of emissions increase. If CO2 emissions from fossil fuels are the prime driver, this same relationship should hold for the ten years after 2000; if it doesn’t, then emissions from fossil fuels are definitely not the primary source for the observed increase. I will leave it up to you to do the exercise and try to prove yourself correct.
        If you do not want to go to that trouble, just look at the long-term record.
        The increase in CO2 emissions from fossil fuels was essentially zero prior to 1850, but the CO2 concentration was already increasing at a slowly accelerating rate. How do you explain a hundred years of CO2 concentration following the same accelerating curve, a curve that was already in place before there were increasing CO2 emissions from fossil fuels and that has continued up to the current asymptotic trend of 2 ppmv/year?
        I can demonstrate this half a dozen different ways, but if you are a true scientist you only need one to abandon this error-ridden concept and research the actual source of the CO2 concentration increase. If you are not a true scientist but merely a researcher with a preconceived notion, then you will simply dismiss physical evidence and attempt to justify what is clearly false.
        The basic physics is that the CO2 molecule is linear and symmetrical and therefore lacks the permanent dipole moment needed to interact with all wavelengths radiated by the Earth; it is limited to just a single resonant wavelength band centred on 14.77 microns. Because clouds and water vapour account for well over 90% of the Earth’s 33°C greenhouse effect, the remaining 3.3°C greenhouse effect possibly attributable to CO2 represents a band in which at least 80% of the energy is already being accessed, leaving only 20% of 3.3°C of further possible effect from CO2 regardless of how large the concentration becomes. This is basic physics, and basic physics trumps fabricated computer models and unfounded estimates of the source of observed CO2 increases.

      • “Simply put, this means that if CO2 emissions from fossil fuels change from year to year, there should be at least some detectable change in the year-to-year CO2 concentration”

        There would be change in the year-to-year CO2 concentration even if CO2 emissions from fossil fuels did not change from year to year, which may very well make it impossible to detect what you are trying to detect.

      • Norm, you are kind of all over the map here. Especially at the end you bring up CO2 absorption characteristics, which have nothing to do with its residence time.
        The small sinusoidal ripple is very easy to fit as it is modeled as a steady state response to a long-term natural variation. In other words, according to signal processing theory, a response function applied to a sinusoidal function in the steady state will result in a scaled version of the original sinusoidal signal. This is one of those mechanisms that is so common that an engineer or scientist should not even bat an eye in doing the analysis.

        From the Fourier transform of the impulse response, one can accurately predict the scale of the periodic excursions of the natural CO2 emissions prior to the filter being applied. Read the amplitude response at the natural frequency and that is the filtering scale factor.

        As far as the noise is concerned, knock yourself out trying to figure out what causes it. It could be measurement noise after the response occurs which would make it problematic to distinguish from random natural events.
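        The steady-state claim above (a linear filter turns a sinusoid into a scaled sinusoid) can be illustrated with a toy first-order filter. The filter form and its parameters here are illustrative only, not a model of the carbon cycle:

```python
import math

# Toy first-order low-pass filter: y[n] = (1 - alpha)*y[n-1] + alpha*x[n].
# Illustrative only; stands in for the CO2 impulse response discussed above.
def filtered(signal, alpha=0.1):
    y, out = 0.0, []
    for x in signal:
        y = (1 - alpha) * y + alpha * x
        out.append(y)
    return out

n, period = 2000, 120          # a "seasonal" ripple, 120 samples per cycle
x = [math.sin(2 * math.pi * i / period) for i in range(n)]
y = filtered(x)

# After start-up transients die out, the output is the same sinusoid,
# scaled down: read the gain off the second half of the record.
gain = max(y[n // 2:]) / max(x[n // 2:])
print(0 < gain < 1)            # attenuated but still sinusoidal: True
```

        Reading the amplitude response at the ripple frequency, as the comment suggests, gives the same scale factor that this numerical experiment recovers.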

      • Simply put this means that if CO2 emissions from fossil fuels change from year to year this should present at least some detectable change in the year to year CO2 concentration and since this is not the case changes in something much larger than the CO2 emissions from fossil fuels is the primary source for the observed increase.

        Norm, I’m on your side on this one: CO2 emissions from fossil fuels do indeed change from year to year, and we should indeed see the effects of these changes within a year, or even six months.

        But let’s sit down and do the math here. The atmosphere weighs 5140 teratons (aka exagrams), and a mole of air weighs 28.97 g, so the atmosphere contains 5140/28.97 = 177 examoles, agreed? A millionth of that is 177 teramoles. So a fluctuation of 6 ppmv is 6*177 teramoles or about one petamole of CO2.

        Since the carbon in a mole of CO2 weighs 12 g, a petamole represents 12 petagrams, aka gigatons, of carbon.

        Now the annual carbon emissions from fossil fuel in 2009 amounted to around 9 gigatons (this year it should hit 10 GtC). So in order to compete with the annual 6 ppmv CO2 fluctuation, mankind would have to suspend all fossil fuel emissions for the year, and even then that would only get you a 4.5 ppmv reduction.

        I don’t know what fluctuations you had in mind, but if you look at the record of global fossil-fuel CO2 emissions maintained by the US Department of Energy’s Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory, you can easily see that any departures from a smoothly growing curve are less than 100 megatons or 0.1 gigaton.

        I fully agree with you that the CO2 level at Mauna Loa should fluctuate with fluctuating fossil fuel emissions. This fluctuation will however be less than 1% of the annual 6 ppmv fluctuation, or at most .06 ppmv.

        Since the month-to-month fluctuations at Mauna Loa are much greater than this, what you’re looking for will be completely masked by random noise.
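        Vaughan’s unit conversions are easy to verify in a few lines (figures as given above: a 5140 Tt atmosphere, 28.97 g/mol air, 12 g/mol carbon; the conversion comes out at about 2.13 GtC per ppmv):

```python
# Checking the back-of-envelope figures in the comment above.
atm_mass_g = 5.14e21                         # 5140 teratons, in grams
moles_air = atm_mass_g / 28.97               # ≈ 1.77e20 mol (177 examoles)
gtc_per_ppmv = moles_air * 1e-6 * 12 / 1e15  # grams of C → petagrams (= GtC)
print(round(gtc_per_ppmv, 3))                # → 2.129
seasonal_swing_gtc = 6 * gtc_per_ppmv        # the ~6 ppmv annual cycle
print(round(seasonal_swing_gtc, 1))          # → 12.8, i.e. ~12 GtC of carbon
```

        So the ~6 ppmv seasonal swing does indeed correspond to roughly 12 GtC, larger than a full year of fossil fuel emissions, which is the core of the argument.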

      • Vaughan,

        Is this a strawman? The change in the year to year CO2 concentration is the annual growth rate. It looks like this:
        http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf

        Does it look like the growth is caused by anthropogenic emissions? Do the math.

      • Edim – are you a strawman? Let me suggest you open an Excel worksheet. Column B will be the years 1960 to 2010. Column C should start at 2.8, and increase by 0.025 a year. Column D should be =2*RAND()-3. E1 should be C1+D1, E2 should be =E1+C2+D2.

        Here, C is meant to represent human emissions, D to represent ocean and ecosystem uptake, and E to represent concentrations. If you then make a column F which is E2-E1, this represents the annual mean growth rate. Plot F versus B. You’ll get something that looks a lot like the annual mean growth rate plot that you link to.

        Now, does it look like the growth in CO2 is due to column C? Do the math.

        -M
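        For readers who prefer code to spreadsheets, here is a minimal Python version of the worksheet M describes, with the same toy numbers (the `random.seed` call is added only so the noise is reproducible):

```python
import random

# Columns B-F of the worksheet described above, as a toy model.
random.seed(0)  # reproducible noise; not part of the original recipe

years = list(range(1960, 2011))                           # column B
emissions = [2.8 + 0.025 * i for i in range(len(years))]  # column C
uptake = [2 * random.random() - 3 for _ in years]         # column D: =2*RAND()-3
conc = []                                                 # column E (cumulative)
for e, u in zip(emissions, uptake):
    conc.append((conc[-1] if conc else 0.0) + e + u)
growth = [b - a for a, b in zip(conc, conc[1:])]          # column F

# The annual growth-rate series is dominated by the noisy uptake term
# even though the underlying emissions trend is perfectly smooth.
print(len(growth), min(growth), max(growth))
```

        Plotting `growth` against `years` gives a jagged series much like the Mauna Loa annual-growth-rate plot, even though the emissions column is a smooth ramp, which is the point of the exercise.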

      • Does it look like the growth is caused by anthropogenic emissions? Do the math.

        Yes, let’s resolve this by doing the math. The year-to-year fluctuations in that graph are on the order of 0.5 ppmv or approximately 1 GtC. Annual fossil fuel emissions are 9 GtC. Nature’s contribution is around 210 GtC, so the total natural and human CO2 emissions come to around 220 GtC.

        So even before looking at anthropogenic emissions, these half-ppmv fluctuations in that graph represent 1/210, i.e. less than 0.5%, of the total natural emissions.

        Do I understand you to be telling nature she’s not allowed to fluctuate by 0.5% of her total annual emissions?

        I don’t know what you’re seeing in that graph that proves your point, but what I’m seeing there is a 0.5% noise level in natural CO2 emissions. Though small in the big picture, this is nevertheless large enough to dwarf the fluctuations Norm is looking for. The signal is simply too noisy to allow him to tell whether those fluctuations are present.

      • Norm, you have echoed the Salby argument about correlations. The data on CO2 rise fit the anthropogenic emissions extremely well when you take into account that the ocean/biosphere sink becomes less efficient when it is warmer. In warm years CO2 rises faster even if emission stays fairly constant because a warmer ocean can’t take up so much.

      • Jim D,

        You need to prove that CO2 has reached or is near saturation in the ocean in order to make a statement such as ‘a warmer ocean can’t take up so much’. The ocean is far from CO2 saturation. I thought AGWers say warming induces more water evaporation and more precipitation. More precipitation will absorb more CO2 from the air.

        The ocean doesn’t need to reach “saturation” to lower its rate of CO2 uptake. You’re applying only chemical stoichiometry, but failing to consider chemical kinetics (rates of chemical reactions).

      • settledscience,

        I hope you digest what I said and what Jim D said. Did I mention it did not change rates? Do you know how much the ocean’s CO2 absorption rate changes for a 1°C SST difference? If you knew, you would not make such a statement.

      • Sam NC, it is not saturation, it is the equilibrium ratio, that depends on temperature. Warmer temperatures favor keeping more CO2 in the atmosphere versus in the ocean (much like water vapor in this way).

      • Digest this, Sam.

        The ocean doesn’t need to reach “saturation” to lower its rate of CO2 uptake. You’re applying only chemical stoichiometry, but failing to consider chemical kinetics (rates of chemical reactions).

        Jim D told you this too.
        “Sam NC, it is not saturation, it is the equilibrium ratio, that depends on temperature.”

        The equilibrium ratio changes as the rate of emission exceeds the rate of absorption.

        Finally, your claim that due to warming, higher levels of “precipitation will absorb more CO2 in the air” is irrelevant because it is not quantified.

        I thought AGWers say warming induces more water evaporation and more precipitation. More precipitation will absorb more CO2 from the air.

        Do you believe that absorption is higher than the amount emitted? Based on what study?

        Nothing. You just pulled it out of your arse.

      • Jim D,

        Ah, so now you realize that the CO2 in the atmosphere is not entirely man-made.

      • Sam NC, maybe you just realized that. Everyone else knew that 280 ppm was the natural level, and this cycles between the atmosphere and ocean.

      • Norm,

        Nice analysis. AGWers are having a difficult time responding adequately.

      • Quite the opposite: the calculations above show that the fluctuations at Mauna Loa due to the fossil fuel emission fluctuations Norm asked about, which are on the order of 100-150 megatons of carbon (= 350-550 megatons of CO2), can be responsible for at most 1% of the annual oscillation in the Keeling curve. The noise in the Keeling curve is substantially more than 1%, making a 150-megaton variation in annual carbon emissions essentially invisible.

        Incidentally one thing I forgot to take into account in my calculations is that only half the emissions remain in the atmosphere. So my 1% should have been 0.5%. That only makes fuel fluctuations even less visible.

        Norm is talking about something too small to be observable in the Mauna Loa data.

      • Strawman. Norm is talking about the annual growth rate of atmospheric CO2 and the rise of anthropogenic CO2 emissions (~3 Gt in 1960 and 10 Gt now, according to your link).

      • Yes, Norm is indeed talking about those, as am I. How does that make my calculations a strawman? Are you saying I made a calculation error somewhere?

      • It seems that you are talking about anthropogenic emissions fluctuations (100-150 MtC). He’s talking about the anthropogenic CO2 increase (~4 GtC in 1960 and ~9 GtC now, according to your link). He compares it with the annual atmospheric CO2 growth rate. Does it look like human emissions are causing the atmospheric CO2 growth?
        http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf

      • Edim

        It looks like CO2 annual mean growth varies with temperature.

        Who of thunk it?

        Cheers

      • Does it look like human emissions are causing the atmospheric CO2 growth?

        Yes it does, on the assumption that nature is drawing down 45% of fossil fuel emissions (not all of our emissions remain in the atmosphere). Let’s look at the emissions for 2005 since that’s the middle of the rightmost decade in this graph. That was 7.97 GtC. The 55% of that left in the atmosphere is 4.38 GtC. Now 1 ppmv of atmospheric CO2 equals 5140/28.97*12/1000 = 2.13 GtC. So we should have seen 4.38/2.13 = 2.06 ppmv for 2005.

        Looks like that to me. Are you sure we’re looking at the same graph? Or have I misunderstood what you meant?
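        The arithmetic in this comment can be checked directly, under the stated assumptions (7.97 GtC emitted in 2005, a 55% airborne fraction, and roughly 2.13 GtC per ppmv from the atmosphere’s mass and molar masses):

```python
# Reproducing the 2005 check: emissions * airborne fraction, converted to ppmv.
emissions_2005_gtc = 7.97
airborne_fraction = 0.55                 # 45% drawn down by nature, as assumed
gtc_per_ppmv = 5140 / 28.97 * 12 / 1000  # ≈ 2.13 GtC per ppmv of CO2
airborne_gtc = emissions_2005_gtc * airborne_fraction
expected_rise_ppmv = airborne_gtc / gtc_per_ppmv
print(round(expected_rise_ppmv, 2))      # → 2.06 ppmv expected for 2005
```

        The ~2.06 ppmv result is close to the observed annual growth rate around 2005, which is the consistency check being made.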

      • It looks like CO2 annual mean growth varies with temperature.

        I think CH wins this one. I lined up the NH temperature graph from WoodForTrees for 1960-2011 with Edim’s CO2 graph and put the result up at http://thue.stanford.edu/TempCO2corr.JPG .

        While the CO2 isn’t perfectly tracking temperature (after all there are other natural contributors to CO2 such as volcanoes that would throw off any such correlation), it’s still a pretty impressive correlation.

        One very noticeable place where the temperature rises while CO2 goes down is 1990-1991. The cataclysmic explosion of Pinatubo on June 15, 1991 in the Philippines, 5000 miles west of Mauna Loa, might be relevant. In 1992 the NH temperature declines, consistent with a massive ash cloud lasting a couple of years. The CO2 then rises in 1993. Hard to tell what’s going on there. The rest of the half-century doesn’t seem to have as big a reverse correlation as 1990-1991.

  15. Little busy, but have a look at this paper; especially Figure 3.

    http://www.whoi.edu/cms/files/Zhang_et_al_12Feb08_final_DSRII_34850.pdf

    Marine photosynthetic microorganisms fix more than 10x their body mass each year. They move up and down the ‘mixing layer’ during a full seasonal cycle. Moreover, the feces of the animals which eat them fall as ‘snow’ and are oxidized all the way down to the bottom.
    Treating the oceans as a chemical onion misses the whole oceanic biosystem.

  16. If you could follow an isolated CO2 molecule through the atmosphere, ocean, biosphere, etc you would find that it would exchange reservoirs rapidly (of order years or less). However, exchange does not necessarily imply net CO2 drawdown.

    The relevant piece of information for climate change concerns the perturbation timescale of the excess CO2 (i.e., if we throw an extra slug of 100 ppm of CO2 into the atmosphere, how long before it decays back to pre-perturbation levels?). A single number as an answer to this question is not very useful, because removal processes act on multiple timescales, ranging from decades to hundreds of thousands of years, and the governing processes range from ocean chemistry to silicate weathering. In fact, it would take many millennia to completely draw down all the excess CO2, as indicated in carbon cycle models and past analogs (e.g. the PETM, which took some 150,000 years to recover). A recent NAS report was dedicated to precisely this issue, looking at the long-term impacts of excess CO2. This page summarizes the relevant timescales and removal processes.
    http://www.nap.edu/openbook.php?record_id=12877&page=75 (and following pages)

    Another review of this topic is in

    David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, “Atmospheric lifetime of fossil-fuel carbon dioxide,” Annual Review of Earth and Planetary Sciences 37:117-134, doi:10.1146/annurev.earth.031208.100206, 2009.

    The timescale commonly cited (~100 years) by Fred and others is in fact a poor representation of the carbon cycle, based largely on an application of linear kinetics that tells only part of the story. Equilibration with the ocean takes a couple of centuries (depending also on the size of the perturbation), while something around 25% of the excess CO2 is fated to be removed by slower chemical reactions with CaCO3 and igneous rocks. This is especially relevant for slow feedbacks like ice sheet responses.

    On the extreme short side, the “5 year” timescale for CO2 lifetime is often meant to imply that if we stopped burning CO2 today, we’d return to 280 ppm in a few years. None of this is in line with anything we know about carbon cycle physics, nor can it be reconciled with observed records of CO2 levels. It’s just as nonsensical as the “anti-greenhouse effect” stuff played up by Claes Johnson or Postma. In fact, a substantial fraction of anthropogenic CO2 will persist in the atmosphere for much longer than a century.

    This, and the underlying long-term temperature response was also the subject of a very good paper by Matthews and Caldeira a couple of years ago, on why it takes near zero emissions to stabilize atmospheric CO2
    https://www.see.ed.ac.uk/~shs/Climate%20change/Data%20sources/Matthews_Caldeira_%20Instant%20zero%20C%20GRL2008.pdf

    • Chris – I haven’t cited a time scale of 100 years, but rather emphasized the multiple trajectories with their different timescales, including the very long tail of the distribution. However, for readers interested in a single number, we have to come up with something, and a number on the order of about 100 years is not unreasonable to convey a sense of the long residence time, even though it has no formal mathematical meaning. No single number can capture the full trajectory, but 100 years may not be too far off from the time it would take about half of the excess to be absorbed, even though that is not a “half life” in the exponential decay sense, and it understates the slowness with which the remaining half would disappear. I don’t think there is much disagreement about the actual nature of the reduction in CO2, as your comments, plus those of some others of us below, make clear.

      • However, for readers interested in a single number, we have to come up with something, and a number in the order of about 100 years is not unreasonable to convey a sense of the long residence time.

        I cringe when I read things like this. It gives science a bad name to overstate its confidence like that. If science doesn’t have a number it shouldn’t make one up.

        The problem with a made up number is that people may start repeating it and pretty soon you’ve got everyone agreeing that that must be the correct number, for no better reason than that everyone says it is.

      • It is not a matter of overstating confidence. It is a matter of what is the best value to characterize a highly non-exponential process. I.e., the main problem is not a lack of knowledge but a lack of a way to express that knowledge by a single number.

      • Agree with Fred and Joel. To sate my curiosity, I created a coupled set of differential equations described with a diagram below in this thread.

        Fat tails are not well described by conventional statistics, as they may lack finite statistical moments such as the mean, variance, etc. The best I have been able to come up with is presenting the shape of the curve along with a characteristic time that is usually related to a median value. Again this is less than ideal, because you might find a characteristic time of 20 to 30 years while the fat tail still dominates. So we need a canonical representation of the impulse response.
        I would suggest a hyperbolic function as an impulse response curve. Previously I was able to take the fossil fuel emission curve and convolve with this impulse response and found that it kept track of the atmospheric CO2 over the industrial age.

        From oil discovery to oil production to emission and to sequestering, everything is a compartment model, and it is all connected. It’s kind of magical that the math works on a statistical scale but not surprising.
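        As a sketch of the fat-tail idea, here is a hyperbolic impulse response compared with an exponential one. The functional form 1/(1 + t/tau) and the 20-year characteristic time are illustrative assumptions, not the commenter’s actual fitted response:

```python
import math

# Hyperbolic ("fat-tail") impulse response vs a simple exponential.
# tau = 20 years is an illustrative characteristic time, not a fit.
def hyperbolic_response(t, tau=20.0):
    return 1.0 / (1.0 + t / tau)

def airborne(emissions, tau=20.0):
    """Convolve an annual emissions series (GtC/yr) with the response."""
    return [sum(emissions[k] * hyperbolic_response(n - k, tau)
                for k in range(n + 1))
            for n in range(len(emissions))]

# Fraction of a pulse still airborne after 100 years, both ways:
hyper_left = hyperbolic_response(100)   # 1/6 ≈ 0.17
expo_left = math.exp(-100 / 20.0)       # ≈ 0.007
print(hyper_left > 10 * expo_left)      # → True: the fat tail dominates
```

        Even with the same characteristic time, the hyperbolic tail leaves far more of a pulse airborne after a century than the exponential does, which is why the median alone understates the persistence.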

      • Vaughan,

        do you ever have nightmares of people in white smocks surrounding your home with burning torches chanting??

      • I know the kind you’re thinking of, kuhnkat, but I seriously doubt enough of them could afford the bus fare to California to be much of a nuisance.

        Anyway we’re used to that sort of thing here around Christmas. After a couple of chants we hand out cookies and they move on to the next house.

        However if by some remote chance Ottawa’s ferocious Greenfyre were to come upon the two of us conspiring to confuse the public, I wouldn’t have to be able to run faster than him, only faster than you.

      • Not a problem. I am protected by the Second Amendment. Have fun leading them on a Long Beach or San Francisco Marathon.

        Dr. Pratt, I feel your pain. I cringe when people use control theory and global averages with simplified radiative physics, neglecting temporally and spatially varying responses of conduction and convection impacted by pseudo-cyclic perturbations. :)

      • You must be on an appointments and promotions committee, Dallas, you sound like you’ve read one tenure case folder too many.

        Athletes get tendonitis, A&P committee members get tenuritis. Who can afford to pay attention to the rule that the candidate’s case be presented in language that the Board of Trustees can understand? Most institutions would have a hard time maintaining a reasonable senior-junior faculty ratio if they took that rule seriously.

        I have never been on a blog comment thread for this long where the insight keeps coming. And it really doesn’t matter if it is good insight or misdirection — like entropy and disorder getting in the way, one has to be able to reason around the perturbations.

      • Sorry, I was attempting a pithy summary. :)

      • I am agreeing with you. You have some good ideas, and so keep them coming.

    • In fact, it would take many millennia to completely draw down all the excess CO2, as indicated in carbon cycle models and past analogs (e.g. PETM, which took some 150,000 years to recover).

      Chris, how do the carbon cycle models account for the 4 GtC/yr increase in natural removal of CO2 from the atmosphere, offsetting our 9 GtC/yr by nearly 50%? And how confident are they of their accounting?

      Also is PETM a good analogy to the present? The current rate of increase is over 2 ppmv/yr, and the drawdown is tracking that increase at around 1 ppmv/yr. Unless PETM witnessed something remotely resembling that rate of increase it may also not have experienced the rapid drawdown we’re currently in the middle of.

      We may be entering a time when nothing in the past is a reliable analogy. For all we know a speedy rise may be followed by a speedy decline.

      Or not, but absent suitable precedents it’s hard to say one way or the other. This is a relatively speculative corner of climate science, unlike better understood things like the rate of onset of CO2 and temperature.

      • Vaughan: around 20 to 30% of the CO2 we emit will stay in the atmosphere until slow processes like sedimentation, weathering, and carbonate formation can act – that’s tens of thousands of years.

        The other 70-80% will eventually end up in the oceans and ecosystems, but even that takes time, and is controlled by diffusion rates in the ocean and convective currents – so even to get down to that 20-30% takes decades to centuries.

        I did a rough back-of-the-envelope calculation once, and think I decided that we’ve already emitted enough CO2 so that if we stopped emitting today, we’d slowly relax back to a lower limit of 310-350 ppm CO2 (low end assumes that land-use change emissions are reversible and only 20% stays in the atmosphere, upper end includes land-use change emissions and a 30% persistence). The initial drop would be at the current rate of natural uptake, but would slow as it asymptotically approaches that lower limit over centuries… and then after tens of thousands of years it would start dropping below that lower limit as it presumably would eventually return to the 280 ppm preindustrial level.

        -M

      • M, the fallacy in any reasoning that talks about “20 to 30% of the CO2 we emit” is that, as soon as it’s emitted, it is indistinguishable from the 210 GtC nature emits in parallel with our 9 GtC/yr contribution.

        For that reason it only makes sense to talk about 1-1.5% of atmospheric CO2, not about the 20 to 30% of what we no longer own. It doesn’t exist as “ours” any more.

        Nature is currently removing 214 GtC/yr, up 2% from what it was a century ago, and if that were to increase by another 0.5% for some reason, right there we’d see the rate of increase decline by 1 GtC/yr, or around 20% of the 5 GtC/yr that atmospheric carbon is currently increasing by.

        Columbia’s Klaus Lackner, as well as Exxon, have been working on directly extracting CO2 from the atmosphere. Leveraging Nature’s 214 GtC/yr removal program strikes me as a potentially powerful alternative to Lackner’s very labor-intensive approach.

        … if we stopped emitting today … the initial drop would be at current rate of natural uptake,

        Apologies for continuing to contradict you, I’m starting to feel like a wet blanket. However if you have a human pouring 300 tons/sec into a leaky bucket that has a natural leak of 100 tons/sec, and the human turns off the hose, the rate of increase does not drop by 100 tons/sec. It drops by 300 tons/sec because the rate of increase went from +200 to -100.

        If the hose output is reduced suddenly, the filling rate drops initially by whatever the hose rate dropped by, up to and including all of it. Nature has no say in that.

        we’d slowly relax back to a lower limit of 310-350 ppm CO2

        Assuming exponential decay, that’s two parameters: the asymptotic limit (which you gave) and the rate it is approached (which you didn’t). It would be very interesting to see how you arrived not only at the number you gave but also the one you didn’t. (Error bars would be even better but I’ll settle for just the two numbers for starters.)

        For all I know you may have a compelling analysis. I don’t like trying to second guess these things.

      • ” if we stopped emitting today … the initial drop would be at current rate of natural uptake,”

        I think you misread me, and we actually agree on this point. What I meant was that if we see 5 GtC natural uptake per year today (along with our 10 GtC emissions), that if we eliminate human emissions, atmospheric loading of CO2 will drop by 5 GtC per year, because the rate of uptake is controlled by the difference between the atmospheric concentration and the concentrations in the various ecosystem, soil, and ocean reservoirs, so, to a first order, the uptake rate in any given year is independent of human emissions in that year (though it does depend on how much was emitted in the previous decades, which controls how far out of equilibrium the atmosphere is).

        “M, the fallacy in any reasoning that talks about “20 to 30% of the CO2 we emit” is that, as soon as it’s emitted, it is indistinguishable ”

        I’ll disagree with you here. Dollars are indistinguishable, but if my bank account is approximately in a steady state (income in equals rent plus food plus entertainment money out), and then my grandmother gifts me $100, I can perfectly well note that 20% of that $100 will end up permanently increasing my retirement account, but the other 80% will go towards more entertainment and food (rent being fixed). There’s a fixed quantity of labile carbon in the ocean/atmosphere/ecosystem/soil system. If I dig up coal and burn it, I have increased that total amount of carbon. I can calculate how that extra carbon will distribute itself between the reservoirs at equilibrium. Therefore, I can talk about how 20% of that extra carbon will remain in the atmosphere after the system reaches its new equilibrium.

        “Nature is currently removing 214 GtC”:
        I also disagree with the way this is formulated: I’d argue that Nature is currently removing only a few GtC. The rest of it is in balance – plants breathe in and breathe out, leaves grow, fall, and decay, carbon enters the ocean and leaves the ocean. An increase in atmospheric concentration disturbs the balance a bit – a little more enters the ocean than leaves, plants grow a little bigger than they would have otherwise – but only a bit. Yes, you can calculate a Gross Primary Productivity of the ecosystem of 120 GtC per year, but I think the net is more informative than the gross for most purposes. Especially when you are talking about the 70 GtC going into (and out of) the ocean. It might be possible to change this, but only at great expense: iron fertilization of the oceans probably won’t actually work (and would be ecologically disruptive), and we are already doing a bunch of terrestrial ecosystem harvesting and storage (e.g., making wooden buildings) but the sheer volumes of matter we’d have to store/bury would be… well, in the gigatons per year to make a difference. That’s a lot of trees buried. I’m also unconvinced that Lackner’s work will ever be practical due to thermodynamic and energetic constraints as well as the volumes of material the process would create. (I like Caldeira’s thinking: any carbon-capture system that can compensate for our emissions will have to be at least as large as our current fossil infrastructure, if not 3 times as large if you have to deal with the CO2 and not just the carbon)

        “It would be very interesting to see how you arrived not only at the number you gave but also the one you didn’t.”

        My “20-30%” comes from Archer et al. 2009 (http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf). The fossil & land-use emissions I used come from CDIAC (http://cdiac.ornl.gov/). I assumed the unperturbed CO2 concentration was 280 ppm. So, 280 ppm + (347 GtC)*0.2/(2.12 GtC/ppm) equals about 310 ppm (lower bound) (the upper bound uses 0.3 and includes land-use change emissions). I didn’t give an asymptotic approach rate: Archer says “2 to 20 centuries”, but looking at the actual model results in his paper will probably give you a better feel for those rates.
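        Spelling out that lower-bound arithmetic (all inputs are the numbers quoted in this comment, not independent estimates):

```python
# Lower-bound equilibrium CO2 concentration from the figures above.
PPM_PER_GTC = 1.0 / 2.12   # atmospheric conversion factor
baseline_ppm = 280.0       # assumed unperturbed concentration
fossil_gtc = 347.0         # cumulative fossil emissions (CDIAC figure quoted above)
airborne_fraction = 0.2    # lower end of Archer's 20-30% range

lower_bound_ppm = baseline_ppm + fossil_gtc * airborne_fraction * PPM_PER_GTC
# about 313 ppm, i.e. the "about 310 ppm" lower bound quoted above
```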

        -M

      • I think you misread me, and we actually agree on this point. What I meant was that if we see 5 GtC of natural uptake per year today (along with our 10 GtC of emissions), then if we eliminate human emissions, the atmospheric loading of CO2 will drop by 5 GtC per year. The rate of uptake is controlled by the difference between the atmospheric concentration and the concentrations in the various ecosystem, soil, and ocean reservoirs, so, to first order, the uptake rate in any given year is independent of human emissions in that year (though it does depend on how much was emitted in previous decades, which controls how far out of equilibrium the atmosphere is).

        So far I’m not convinced we agree, but perhaps you can persuade me otherwise. Continuing with my 300 tons/sec bucket example, I believe you’re saying that turning off the 300 tons/sec when there’s a leakage of 100 tons/sec will take 200 tons/sec off the loading. That is, we were adding 200 tons/sec and now we’re not.

        Where we disagree is on the importance of the 100 tons/sec continuing to leak. Whereas you don’t want to count that, I do.

        Let me stop for a second and see whether you still think there’s no difference between our respective viewpoints.

      • Okay. We have a bucket with 100 stones in it. Every day, I add 10 stones, and nature takes away 5 stones. So, under current conditions, the bucket increases by 5 stones per day (eg, it would be at 105 stones tomorrow). If I stop adding stones, nature is still taking away 5 stones per day, so the “loading” of the bucket is decreasing by 5 stones per day (eg, it would be at 95 stones tomorrow).

        Eventually, nature’s uptake will drop below 5 stones per day, because it turns out that the natural uptake is controlled by the difference between bucket 1 (with 100 stones) and bucket 2 (with 50 stones), and once the two buckets are equalized, nature will stop taking stones out of the first bucket. But, to first order, nature’s stone removal tomorrow is not dependent on whether or not I’ve added another 10 stones today: nature will take 5 stones out either way.

        I do think we’re still in agreement: to use your 300 ton/sec minus 100 ton/sec, we’re going from +200 ton/sec to net -100 ton/sec: the 100 ton uptake is constant (in the short term), regardless of whether we’re adding 300 tons or 0 tons. Right?
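        The stone-bucket arithmetic above can be written as a one-line model (the 1/10-per-day rate constant is chosen so that the 50-stone gap in the example gives 5 stones per day of uptake):

```python
# Two-bucket stone model: natural uptake depends only on the imbalance
# between the buckets, not on how many stones I add today.
def one_day(bucket1, bucket2, added):
    uptake = (bucket1 - bucket2) / 10.0  # 5 stones/day at a 50-stone gap
    return bucket1 + added - uptake, bucket2 + uptake

with_additions = one_day(100, 50, added=10)  # (105.0, 55.0): net +5
no_additions = one_day(100, 50, added=0)     # (95.0, 55.0): net -5
```

Either way, nature removes 5 stones that day; only the net loading changes.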

      • Good information, thanks.

        When I first looked at the IPCC Bern profiles, I thought the CO2 impulse response appeared as a diffusive fat-tailed curve. From what I have learned from you and Bart and Manacker, I decided to bite the bullet and create a mesh of first-order rate equations to model the diffusion to the deeper sequestering sites.
        http://img534.imageshack.us/img534/9016/co250stages.gif
        This model goes on for about 50 stages, with the steady state showing an equal amount of carbon at each stage. The interesting feature is the shape of the atmospheric CO2 curve; this indeed shows a 1/(1+k*sqrt(t)) dependence which is very close to the IPCC Bern model.
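        A minimal sketch of such a staged slab model (the 50-layer count matches the description above; the rate constants are illustrative placeholders, not the actual calibrated values):

```python
def slab_impulse_response(n_layers=50, k_top=0.2, k_diff=0.02,
                          t_max=1000.0, dt=0.1):
    """Atmospheric fraction remaining after a unit pulse into layer 0.

    Layer 0 is the atmosphere; layers 1..n-1 are diffusional slabs with
    first-order exchange between neighbours. The atmosphere/surface
    interface uses a faster rate (the source of the "stiffness"
    mentioned in this thread). Rates are illustrative only.
    """
    c = [0.0] * n_layers
    c[0] = 1.0
    rates = [k_top] + [k_diff] * (n_layers - 2)  # one rate per interface
    times, atm = [], []
    for i in range(int(t_max / dt)):
        if i % 5 == 0:                 # sample every 0.5 time units
            times.append(i * dt)
            atm.append(c[0])
        flux = [rates[j] * (c[j] - c[j + 1]) for j in range(n_layers - 1)]
        for j in range(n_layers - 1):  # detailed balance between neighbours
            c[j] -= flux[j] * dt
            c[j + 1] += flux[j] * dt
    return times, atm

t, f = slab_impulse_response()
# The decay is fat-tailed: far slower at late times than any single
# exponential matched to the initial removal rate.
```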

        I also now agree with Bart and Vaughn and a few others that think that the definitional residence time of the CO2 in the atmosphere is a bit of a red herring. The residence time in the atmosphere may be 10 years but when one factors in the interchange of the CO2 between the boundaries of the system, the actual impulse response does get these fat-tails.
        I think it explains much of what is happening.

      • The interesting feature is the shape of the atmospheric CO2 curve; this indeed shows a 1/(1+k*sqrt(t)) dependence which is very close to the IPCC Bern model.

        This would be very reassuring if it included estimates of (i) any recent increase in vegetation biomass, and (ii) how quickly that vegetation will throw in the towel upon commencement of a program of starving it of CO2.

        If the answer to (i) is “negligible” I would be more comfortable with the estimates people are coming up with based on exponential decays. However in my experience life is not into exponential decay, quite the opposite in fact according to Malthus. (The Black-Scholes-Merton partial differential equation for the price of a financial asset is similarly opposite to what happens in physics.) And not just for human populations but all populations of living creatures including vegetation.

        This is why I’m suspicious of these arguments about residence time: they start from the premise that there is no life on Earth at all, let alone intelligent life. This may be the impression created by Internet blogs, but for my money some vegetables are pretty intelligent. Those Venus flytraps can outsmart flies for example, and generally speaking flies seem smarter than many bloggers, present company excluded of course.

      • (The Black-Scholes-Merton partial differential equation for the price of a financial asset is similarly opposite to what happens in physics.)

        Black-Scholes is derived from the Fokker-Planck equation, the so-called “master equation” used quite often in physics. Financial returns randomly walk about some central value. The problem is that Black-Scholes, like many such formulas, doesn’t work when game-theory strategies and human psychology are involved.

        But that’s beside the point. The set of equations I solved was precisely the Fokker-Planck formulation with drift removed. It is a purely diffusional process set up as a staged slab model. It is only odd in that I have the top layer with different rates than the diffusional slab layers.

        And I agree with you about the residence time. What I devised was a set of somewhat stiff equations, and the stiffness is due to a faster rate interfacing the atmosphere than the slower rate between the slab layers.

        This is a fascinating problem to model, and if we don’t do it, the engineers will. If any engineering is done with respect to sequestration, they will likely do it, because it is all about solving the problems that their bosses lay in front of them. In other words, engineers use models out of necessity and a desire to keep getting paid. I think we are here because we see it as a challenge.

      • WHT, your diagrams are a tad cryptic but I get some of the idea.

        Have you looked into reconciling your modeling with results such as those in the Domingues et al paper I cited earlier?

        The connection I intended between Malthus and Black-Scholes is that both are simply time-reversed versions of what we can observe in simple physical situations, respectively exponential decay and diffusion (but you knew that).

      • Thanks for the ref. The paper does talk about doing detailed balance, which is what I am trying to do. Master equations at the most fundamental level are about doing mass balance and accounting for conservation of material.
        Domingues also includes sequences of volcanic events as disturbances, which means they are doing more than a single impulse response, and they are looking at more than just CO2 concentration change. Also, if they are looking at the heat content of the oceans, then that is an inference a couple of steps further down the road.

        Somebody else mentioned pH with ocean depth, which would tell us about CO2 diffusion through the slabs. That would seem a better comparison, as CO2 would random walk down from the atmospheric disturbances.

      • “I would be more comfortable with the estimates people are coming up with based on exponential decays”

        It is important to remember that the real carbon cycle models are NOT based on exponential decays. The four term exponential Bern cycle approximation is NOT the Bern cycle model: it is a four term exponential FIT TO the real Bern cycle model, for a case where the system is in equilibrium and a pulse of carbon is being added. Real carbon cycle models have plant types which respond to increased CO2 concentrations by changing the modeled stomatal conductance (basically, they can expend less energy, and use less water, to fix the same amount of carbon). Sophisticated models also include nitrogen in the modeling, so that the fertilization effect from increased CO2 drops when nitrogen becomes a limiting factor; even more sophisticated models include the fact that as temperature increases, the rate of decay in leaf litter increases, which makes more nitrogen available. The oceans are modeled at a similar level of detail, with mixed layers, diffusion rates, thermohaline circulation currents into the deep oceans, and biological pumps where organisms suck carbon out of the upper mixed layers and then sink (upon death) into deeper waters, where they are turned back into dissolved inorganic carbon as they decay.
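        For reference, the four-term exponential approximation in question has the form below. The coefficients are quoted from memory of the AR4 Table 2.14 footnote for the Bern2.5CC fit, so treat them as approximate:

```python
import math

# Fit coefficients (approximate, from the AR4 footnote): a0 is the
# fraction that, on these timescales, effectively never leaves the
# atmosphere; the rest decays with the listed time constants.
A0 = 0.217
TERMS = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]  # (a_i, tau_i in years)

def bern_fraction_remaining(t_years):
    """Fraction of a CO2 pulse still airborne after t years, per the fit."""
    return A0 + sum(a * math.exp(-t_years / tau) for a, tau in TERMS)

# bern_fraction_remaining(0)   -> 1.0
# bern_fraction_remaining(100) -> about 0.36
```

Again, this is only the curve-fit summary of the full model's impulse response, not the model itself.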

        This is like claiming that IPCC models are wrong because they don’t account for the diurnal cycle because all the Sky Dragon people have looked at are the 1-D approximation equations. No, the real models are actually quite sophisticated – not perfect, but reasonably physically realistic – and it is only the teaching tools that are stripped down to their bare minimums…

      • It is important to remember that the real carbon cycle models are NOT based on exponential decays. The four term exponential Bern cycle approximation is NOT the Bern cycle model: it is a four term exponential FIT TO the real Bern cycle model, for a case where the system is in equilibrium and a pulse of carbon is being added.

        That is a good point. Since most of the papers are vague about the origins of the Bern model, I probably got the impression that the fat tails were caused by a superposition of a few exponentials. But after looking at it from the perspective of a diffusional slab model going into the earth, interacting with a faster steady-state balanced flow between the surface and atmosphere, I can see why they made up that heuristic. It is all a matter of matching some simpler expression to the detailed model.

        Consider that my slab model has 50 layers below the surface and so includes 100 rate flows between the layers, all balanced out in a master equation. Of course this would create an ugly analytical expression for the CO2 impulse response. So instead of selecting a few exponentials to match the response profile, it makes a lot of sense to fit the curve to the natural controlling factor. From the model, this factor is diffusion into the more permanent layers, so we can try something from the 1/sqrt(t) family. This heuristic certainly works well to explain both the Bern impulse response curve and the impulse response from the multiple layer model.

        In my opinion, the exponentials are still there but they are buried in a mesh of layers. The mesh is regular but that doesn’t make the approximating heuristic any easier to derive.

  17. Just a few quick points:

    (1) The concept of a single decay time for CO2 is not very good because we use such decay times to characterize exponential decays. The decay of a slug of CO2 is highly non-exponential: almost half of it partitions into the other easily-accessible reservoirs (ocean mixed layer, biosphere) in a matter of months or a few years at most, but the rest decays very slowly…and very non-exponentially, so that even after, say, 1000 years there is still expected to be something like 25% of the original amount (I think “the original” here means the amount remaining after the initial rapid partitioning). You need to read David Archer’s book or papers to understand this.

    (2) Jeff Glassman, up to his usual tricks, tries to complain that something that even a hardened skeptic like Willis Eschenbach (and Hans Erren) agree with is somehow unfathomable: Why does a CO2 pulse decay less rapidly than CO2 exchanges between the atmosphere and the hydrosphere / biosphere? It is really not that complicated. The point is that the ocean mixed layer, the biosphere, and the atmosphere form a subsystem where the CO2 rapidly exchanges back and forth between these reservoirs. This means that when you add a new pulse of CO2 to the atmosphere, it doesn’t take long until it has partitioned between these three reservoirs. However, from then on, the decay rate out of this subsystem is governed by slower processes like exchange between the ocean mixed layer and the deep ocean or absorption by the lithosphere.

    (3) An analogy is useful: Imagine you have three connected containers containing water (and let’s have some sort of pump that mixes the water around between the containers, just to make even the exchange of water molecules between containers reasonably rapid). If you add water to one of the three containers, what happens is that the level in all 3 containers goes up. Now, what Jeff would want you to believe is that if you measure the residence time of the molecules in the container that you added water to and find it to be, say, 3 minutes, then the level of the water will decay back down with a characteristic decay time of that value. However, in reality, the level of the water in this example won’t decay at all…except at the much slower rate determined by evaporation of the water out of the containers!
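    The container analogy is easy to simulate. A minimal sketch, with the mixing pump idealized as relaxation toward the common level and purely illustrative rates (fast mixing, much slower evaporation); it tracks both the total water and a “tagged” pulse poured into container 0:

```python
K_EX = 1.0 / 3.0      # mixing rate between containers (per minute)
K_EVAP = 1.0 / 300.0  # evaporation out of the system (per minute)
DT = 0.01             # Euler time step (minutes)

def simulate(t_max):
    total = [2.0, 1.0, 1.0]  # one unit of new water poured into container 0
    tag = [1.0, 0.0, 0.0]    # that poured-in water, tracked separately
    for _ in range(int(t_max / DT)):
        for w in (total, tag):
            mean = sum(w) / 3.0
            for i in range(3):
                w[i] += (K_EX * (mean - w[i]) - K_EVAP * w[i]) * DT
    return total, tag

total, tag = simulate(10.0)
# After 10 minutes, the tagged water has mostly left container 0 (short
# molecular residence time), yet the overall level has barely dropped:
# its decay is set by the slow evaporation, not by the mixing rate.
```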

    • Joel- The multiple trajectories by which excess CO2 declines toward an equilibrium baseline have been analyzed by David Archer, as you mention. The different rates have been estimated by a variety of models, as the linked article indicates. However, it is easy to assign a minimum lifetime for excess CO2 from a simpler set of observations (Archer mentions this). The current CO2 concentration, slightly exceeding 390 ppm, is about 110 ppm above the baseline for climates of previous centuries. From observational data we know that current emissions, if not absorbed by sinks, would add about 4 ppm per year, but the observed rise has only been about 2 ppm, with the remaining 2 going into sinks. Since the absorption into sinks is a response to the 390 ppm (the sinks don’t care whether they are absorbing old or new CO2), a linear return to baseline would require 110/2 = 55 years. The true rate is of course considerably slower due to the asymptotic nature of the approach to baseline values and to various climate feedbacks. It therefore appears that an “average” value of about 100 – 200 years is a reasonable estimate, as long as it is realized that the true rate involves a long “tail” that declines over many thousands of years.
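      The minimum-lifetime arithmetic, and the slower first-order alternative it bounds, sketched out (numbers as quoted above; the proportional-uptake model is an illustration, not a claim about the real carbon cycle):

```python
import math

EXCESS_PPM = 110.0        # ~390 ppm observed minus ~280 ppm baseline
UPTAKE_PPM_PER_YR = 2.0   # net natural uptake observed today

# Linear bound: if sinks kept absorbing at today's rate
linear_years = EXCESS_PPM / UPTAKE_PPM_PER_YR  # 55 years

# First-order alternative: uptake proportional to the excess implies an
# e-folding time of 55 years, so ~37% of the excess is still airborne
# after 55 years - slower than the linear bound suggests.
def excess_after(t_years, tau=linear_years):
    return EXCESS_PPM * math.exp(-t_years / tau)
```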

      Interestingly, of the six links in Dr. Curry’s original post, plus the link to the Dyson/May dialog she added, there is really no substantial disagreement about this long lifetime. Rather, some of the links focus instead on the exchange rate of individual CO2 molecules, which involves a much shorter lifetime, some on the decline of an excess concentration as discussed above, and in the case of both Essenhigh and Dyson, both phenomena are acknowledged and distinguished from each other.

      • “a linear return to baseline would require 110/2 = 55 years”

        That is just awful reasoning. The effective gain is independent of the time constant.

      • What are you referring to? 110/2 = 55 years is simply the arithmetic that would describe the time for all the excess to disappear if its disappearance rate were 2 ppm/year, which is about the current rate. No-one argues that to be the case, but it’s easy to argue that the disappearance rate won’t be faster.

      • The quandary: to explain and have you not understand, or blow it off and allow you to imagine you are anywhere close to being right?

        It’s too late to worry about tonight…

      • Yep. Given realistic emissions scenarios, it looks like we’re heading for 800 ppm in a century or so. Dyson has a proposal to mitigate that, one that does not depend on stifling industrial society but rather on a worldwide planting program. His proposal relies on the short residence time of a single CO2 molecule in order to achieve its results, and for his purposes, that’s the correct residence time to use.

        The long-term residence of “net” CO2 is only the “relevant” number if your only policy for mitigation is shutting down fossil-fuel burning. The insistence that this longer duration is correct and the shorter one is incorrect is a sign of ingrained policy bias. If the world environmental community had launched a crusade for planting lots more carbon-sequestering plants, and tried to get an international treaty to set targets for that, etc., then the short-term number would be the “orthodox” answer given to the public.

        Optional Study Question: Since Dyson’s proposal acts fairly quickly, is reversible, and doesn’t require the abandonment of high-density energy sources (and the concomitant impoverishment of the world), why isn’t there more emphasis on figuring out how to do it practically?

      • Where is he gonna go stick it?

      • The insistence that this longer duration is correct and the shorter one is incorrect is a sign of ingrained policy bias.

        I didn’t realize that CO2 follows the rules of political science more closely than it does physics and chemistry.

      • Your sarcasm is misplaced. Read the argument–Dyson’s response to his critic–and then reread what I wrote.

        No one disputes that a given CO2 molecule circulates out of the atmosphere in about five years. The dispute is whether that residence time is relevant, and it is indeed not the relevant time if we want to know how quickly a cutback in CO2 emissions will take effect on the atmosphere.

        But exactly which laws of physics are relevant depends on the proposed mitigation policy. Dyson’s proposal for carbon-eating plants makes the five-year molecular residence time the relevant one. If that were the “baseline” policy proposal on the table, then that would be the “standard” answer to the CO2 residence question. Your inability to understand this point on the first pass bespeaks your intellectual entrenchment in a single policy position.

    • Joel Shore 8/24/11, 9:48 pm, CO2 residence time

      JS: (1) The concept of a single decay time for CO2 is not very good because we use such decay times to characterize exponential decays. The decay of a slug of CO2 is highly non-exponential: almost half of it partitions into the other easily-accessible reservoirs (ocean mixed layer, biosphere) in a matter of months or a few years at most, but the rest decays very slowly…and very non-exponentially, so that even after, say, 1000 years there is still expected to be something like 25% of the original amount (I think “the original” here means the amount remaining after the initial rapid partitioning). You need to read David Archer’s book or papers to understand this.

      You suggest some arbitrary representation, when the result is from first year physics:

      The formula for the residence time from any reservoir, M, at a rate S, is T = M/S. AR4, Glossary, Lifetime, p. 8. Since S = -dM/dt, the formula provides a differential equation for M: dM/M = -dt/T. The solution is M = M_0*exp(-t/T).

      The decay of a slug is exponential and it has one decay time.
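      For what it’s worth, the single-reservoir algebra can be checked numerically; whether one well-mixed reservoir is the right model is precisely what is disputed upthread. (The residence time and M_0 below are placeholder values.)

```python
import math

# One well-mixed reservoir: M(t) = M0 * exp(-t/T) solves dM/dt = -M/T,
# and the ratio M/S recovers the residence time T at every instant.
T = 5.0     # residence time, years (placeholder)
M0 = 800.0  # initial reservoir content (placeholder)

def M(t):
    return M0 * math.exp(-t / T)

def recovered_T(t, h=1e-6):
    """Recover T = M/S with S = -dM/dt, via a central difference."""
    S = -(M(t + h) - M(t - h)) / (2.0 * h)
    return M(t) / S
```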

      This formula conflicts with the Bern formula also used by IPCC.

      I have read Archer’s work. He appears to have been responsible for the material on carbonate chemistry in AR4, Chapter 7, where he was a Contributing Author. He erred in relying on the equilibrium in the surface layer, but he was facilitating AGW, so it would buffer against dissolution, making the MLO bulge anthropogenic, and making the atmospheric CO2 concentration with its weak greenhouse effect sufficient to cause a calamity in just the right time frame.

      Archer’s Chapter 7 only said that 20% may remain in the atmosphere for many thousands of years, but the Technical Summary said as much as 35,000 years was needed for atmospheric CO2 to reach equilibrium(!). But Archer’s paper addressed the time to completely[!] neutralize and sequester anthropogenic CO2. … The mean lifetime of fossil fuel CO2 is about 30-35 kyr. Archer, D., The fate of fossil fuel CO2 in geological time, 1/7/05, p. 11. Earth’s climate is divorced from carbon sequestration, but IPCC incorrectly converted the ocean sequester time into atmospheric residence time.

      JS: (2) Jeff Glassman, up to his usual tricks, tries to complain that something that even a hardened skeptic like Willis Eschenbach (and Hans Erren) agree with is somehow unfathomable: Why does a CO2 pulse decay less rapidly than CO2 exchanges between the atmosphere and the hydrosphere / biosphere? It is really not that complicated. The point is that the ocean mixed layer, the biosphere, and the atmosphere form a subsystem where the CO2 rapidly exchanges back and forth between these reservoirs. This means that when you add a new pulse of CO2 to the atmosphere, it doesn’t take long until it has partitioned between these three reservoirs. However, from then on, the decay rate out of this subsystem is governed by slower processes like exchange between the ocean mixed layer and the deep ocean or absorption by the lithosphere.

      What was unfathomable was you and Eschenbach both insisting over at WUWT that the lifetime of a molecule of CO2 was somehow different than the lifetime of a slug of CO2 molecules. The former is only deduced from the latter.

      Now you’ve confused yourself. You are correct about the pulse of CO2 rapidly distributing into the three reservoirs. But that’s the end of the problem — you should have stopped there. The question in the introduction is

      How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes? Bold added.

      You’re trying to answer a different question: how long does the slug of CO2 take before it is sequestered, before it reaches its final fate. The answer, as I have given previously, is 1.5 years with leaf water, and 3.5 years without. Those times obviously include uptake to the land. However, the IPCC model discusses four different time constants for the lifetime of a pulse of CO2 only in the context of the air-sea flux:

      Consistent with the response function to a CO2 pulse from the Bern Carbon Cycle Model (see footnote (a) of Table 2.14), about 50% of an increase in atmospheric CO2 will be removed within 30 years, a further 30% will be removed within a few centuries and the remaining 20% may remain in the atmosphere for many thousands of years (Prentice et al., 2001; Archer, 2005; see also Sections [¶7.3.4.2 Ocean Carbon Cycle Processes and Feedbacks to Climate] 7.3.4.2 and 10.4) Bold added, 4AR, §7.3.1.2, p. 514.

      I didn’t say anything as foolish as you suggest with your unrecognizable analogy. Kindly quote what you want to critique.

      • Jeff,

        As usual, you have added nothing useful to the discussion…just confusion and obfuscation. I think the posts of Chris, myself, Fred, and the others who are here to enlighten rather than to obfuscate stand on their own.

        It is sad to see people so wedded to their ideology that they are willing to sacrifice science on the altar.

      • Joel Shore 8/25/11, 9:00 pm CO2 residence time

        JS: As usual, you have added nothing useful to the discussion…just confusion and obfuscation. I think the posts of Chris, myself, Fred, and the others who are here to enlighten rather than to obfuscate stand on their own.

        It is sad to see people so wedded to their ideology that they are willing to sacrifice science on the altar.

        Unable or unwilling to defend his ideology as science, Dr. Shore, physicist, takes refuge in an unsupported, vitriolic attack. This response once again reveals how he, along with a few others, some of whom he is willing to out, posts here to enlighten, to Spread the Word, according to the Gospel of IPCC and the tracts in advocacy journals, and to bring Salvation to the Unwashed. They are here neither to defend their new religion, nor to engage in any substantial debate over it.

        Uncomfortable in his compromised position, Dr. Shore tries to strengthen it by analogy, comparing criticism to the attacks on evolution by fundamentalism. Just in a single thread, he wrote:

        JS: You really haven’t a clue what you are talking about. You are just throwing around phrases … that seem to be as ignorant as when a Young Earth creationist says that evolution violates the Second Law. Slaying the Greenhouse Dragon. Part IV, Joel Shore, 8/14/11, 10:31 pm.

        JS: While there are some legitimate scientific issues … , most of this is not about science at all. It is simply an attack on science that is inconvenient for some people to accept, just as is the case for evolution… . Id., 8/15/11, 10:22 pm.

        JS: [I]mportant discussions occur in the scientific literature. It is understandable that when bad science, be it challenging evolution or challenging AGW, fails in those venues (or is so bad it could never even get into those venues), the proponents try to take their case directly to the public. It is a way to replace public policy based on science with public policy based on ideologically-inspired nonsense. Id. 8/16/11, 4:32 pm

        The parallel is quite the reverse of what Dr. Shore imagines. Fundamentalism (belief) is to Evolution (science) as skepticism (science) is to AGW (belief). Like the other AGWers, he learned his physics without learning science.

      • Joel Shore 8/25/11, 9:00 pm CO2 residence time

        Errata: The last paragraph should read:

        The parallel is quite the reverse of what Dr. Shore imagines. Evolution (science) is to Fundamentalism (belief) as skepticism (science) is to AGW (belief). Like the other AGWers, he learned his physics without learning science.

      • The parallel is quite the reverse of what Dr. Shore imagines. Evolution (science) is to Fundamentalism (belief) as skepticism (science) is to AGW (belief).

        Yeah…Right. With evolution, you have the National Academy of Sciences and all the other academies and the various scientific professional societies on one side … and for AGW, the same thing is true. Alas, it is not in the direction that you claim it to be. So, the question is: Whose interpretation of the science are you going to believe, these societies or a few ideologues like Jeff who want you to believe that their obvious Right-wing views have nothing to do with their conclusions and that all of the scientific societies have somehow been corrupted and only he and his fellow travelers can see the light!?!

        Oh, and it doesn’t help your case that one of the few “skeptical” scientists who is not talking complete nonsense is on record as saying “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism.” ( http://www.ideasinactiontv.com/tcs_daily/2005/08/faith-based-evolution.html )

      • Joel Shore 8/27/11, 5:39 pm, CO2 residence time

        JS: Yeah…Right. With evolution, you have the National Academy of Sciences and all the other academies and the various scientific professional societies on one side … and for AGW, the same thing is true. Alas, it is not in the direction that you claim it to be. So, the question is: Whose interpretation of the science are you going to believe, these societies or a few ideologues like Jeff who want you to believe that their obvious Right-wing views have nothing to do with their conclusions and that all of the scientific societies have somehow been corrupted and only he and his fellow travelers can see the light!?!

        Oh, and it doesn’t help your case that one of the few “skeptical” scientists who is not talking complete nonsense is on record as saying “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism.” ( http://www.ideasinactiontv.com/tcs_daily/2005/08/faith-based-evolution.html )

        We already figured out that Dr. Shore believes in consensus science, and to the exclusion of science. Slaying the Greenhouse Dragon. Part IV. 8/16/11, 10:15 pm. Because it’s important to him, he should keep on repeating the mantra.

        For other readers, science has nothing to do with consensus forming, voting, or endorsements — nor anything to do with personal foibles of supporters or detractors of one model or another. It has nothing to do with political orientations, left or right, toward models or modelers. Science is about models of the Real World that (1) violate no facts in their domain, and which (2) make predictions for fresh facts. The AGW model fails on both counts.

        Advancements in science always evolve from one person with one idea and no support.

        All the societies and professional journals in the world, every school of science or otherwise, and every media outlet and commentator could be unanimous to a man about a model, and fall, defeated by one person pointing out one error. In AGW the task is complicated only by which fault to choose among a dozen or so first-magnitude, fatal errors. Click on my name in the header, and follow the links to IPCC Fatal Errors and SGW.

        But AGW supporters, committed to a scientific-like model as a matter of belief, find themselves heirs to a multitude of errors from the model owner, IPCC. Constitutionally and professionally unable to admit their mistakes, they respond to scientific challenges not with technical point and counterpoint, but with digressions, irrelevancies, and insults.

      • Jeff: Your dichotomy between “consensus science” and “science” is a false one. You are right that one person can overturn a consensus. However, in order to do so, that person must convince his or her fellow scientists of their point of view. And, until such time, it is imperative that public policy be decided by what the scientists judge to be the best science. The only other alternative is to have science politicized as each political group believes their own “pet” scientists who support their ideologically-driven point-of-view.

        Of course, the fact is that the overwhelming majority of AGW “skeptics” are not really trying to convince scientists, probably because they know that their arguments are too weak to be convincing to scientists…In some cases, like the Slayers, Postma, and arguments that man is not responsible for the current CO2 increase, they are ridiculously so! So, instead, they (you) try to take their case to the public, where the techniques of sophistry compete much better against science!

        Let’s face it, your arguments are losing badly in the scientific community (for good reason!), which is why you are trying to go the way of all pseudoscience and say that the public should accept your view of the science and discard the view of the scientific community. And, they should ignore the fact that your ideology is much, much stronger than your science.

      • Joel Shore 8/25/11, 9:00 pm CO2 residence time

        JS: “intelligent design, as a theory of origins, is no more religious, and no less scientific, than evolutionism” followed by a link.

        Dr. Shore doesn’t say that his citation is from an article posted on the blog Ideas in Action with Jim Glassman (no relation), dated 8/8/05, which acquired no comments. The author was Roy Spencer.

        Check this:

        “Those who cavalierly reject the theory of evolution,” writes Spencer, “as not adequately supported by facts seem quite to forget that their own theory is supported by no facts at all.” Scopes Trial Transcript, 1925, p. 262, quoting Herbert Spencer, 1852.

        If Roy is descended from Herbert, we have evidence that evolution has no preferred direction.

      • Dr. Shore doesn’t say that his citation is from an article posted on the blog Ideas in Action with Jim Glassman (no relation), dated 8/8/05, which acquired no comments.

        And, that’s relevant or changes how the statement should be interpreted how exactly?

      • Joel Shore 8/28/11, 2:29 pm CO2 residence time

        JS: You are right that one person can overturn a consensus.

        I didn’t say that, and wouldn’t. Who cares what fiction the people believe? I addressed the failure of the model believed by the consensus.

        JS: However, in order to do so, that person must convince his or her fellow scientists of their point of view.

        Where did you get such a notion? Do you know of any example, and take the most famous of all, where the overturning scientist had to go around convincing his fellow scientists? Poppycock.

        JS: And, until such time, it is imperative that public policy be decided by what the scientists judge to be the best science. The only other alternative is to have science politicized as each political group believes their own “pet” scientists who support their ideologically-driven point-of-view.

        You not only believe in consensus science, but in technocracy! What a horrible, and thoroughly discredited, idea! Publicly funded academics in charge of public funds. A little healthy skepticism, augmented by a little science literacy, is enough to move the swing Policymakers, those in the middle, to reject AGW.

        JS: Of course, the fact is that the overwhelming majority of AGW “skeptics” are not really trying to convince scientists, probably because they know that their arguments are too weak to be convincing to scientists…In some cases, like the Slayers, Postma, and arguments that man is not responsible for the current CO2 increase, they are ridiculously so! So, instead, they (you) try to take their case to the public where the techniques of sophistry compete much better against science!

        You make the case for why we don’t vote in science. You also show no skill with distinguishing signal from noise.

        JS: Let’s face it, your arguments are losing badly in the scientific community (for good reason!), which is why you are trying to go the way of all pseudoscience and say that the public should accept your view of the science and discard the view of the scientific community. And, they should ignore the fact that your ideology is much, much stronger than your science.

        There you go again, keeping imaginary score. Because I reason against your simplistic, left wing financial catastrophe, foisted on “Policymakers” to prevent a fantasized calamity, you assume I am following some right wing agenda. I follow no agenda, nor do I deny my reasoning to any, left, right, up, or down. Let science and objectivity prevail.

  18. Chris and Joel give very good explanations for the CO2 residence time.

    The long residence time is a fat-tail effect, very close in temporal behavior to what you would find in radioactive waste decay. The collection of different decay rates leads some people to believe that removal is fast, because of the steep initial decay. However, the fat-tail response is where the lengthy decay takes precedence. (That’s why Fukushima radiation went down fast but will be hanging around for hundreds of years – not the same physics, but a similar temporal response.)

    The key math is when you do a convolution of a CO2 emission forcing function with a fat-tail impulse response. The result will show this peculiar lag that will generate a continual increase in CO2 concentration, long after the forcing function is removed. That is what has everyone spooked — even if we can immediately remove CO2 emissions, the CO2 will continue to increase for years.
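
    The convolution described above can be sketched numerically. The following is a minimal illustration, not a calibrated carbon-cycle model: it convolves a simple emissions history with two candidate impulse responses – a single 5-year exponential, and a multi-exponential fit of the kind quoted for the Bern carbon-cycle model in IPCC AR4 (the coefficients below are approximate and illustrative).

```python
import numpy as np

# Two candidate impulse responses for an emitted pulse of CO2.
# Coefficients approximate the Bern carbon-cycle fit quoted in IPCC AR4;
# treat them as illustrative, not authoritative.
def fat_tail(t):
    return (0.217
            + 0.259 * np.exp(-t / 172.9)
            + 0.338 * np.exp(-t / 18.51)
            + 0.186 * np.exp(-t / 1.186))

def thin_tail(t):            # single 5-year exponential ("residence time")
    return np.exp(-t / 5.0)

t = np.arange(300.0)                      # years
emissions = np.where(t < 50, 1.0, 0.0)    # emit 1 unit/yr for 50 yr, then stop

# Atmospheric excess = emissions history convolved with the impulse response
excess_fat = np.convolve(emissions, fat_tail(t))[:t.size]
excess_thin = np.convolve(emissions, thin_tail(t))[:t.size]

print(f"excess while emitting (yr 49):   fat={excess_fat[49]:.1f}  thin={excess_thin[49]:.1f}")
print(f"excess a century later (yr 150): fat={excess_fat[150]:.1f}  thin={excess_thin[150]:.2g}")
```

    Under constant emissions the thin-tailed response saturates within a few decades, while the fat-tailed response keeps accumulating; after emissions stop, the fat tail leaves a large excess airborne for centuries while the thin tail vanishes within decades.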

    • WHT – It is hard to see a reason for CO2 to increase much or for long if emissions cease – see for example Climate Change Commitment. I can envision perhaps a brief increase if the warming from the residual forcing (before it disappears) causes some efflux from sinks, but even that is likely to be minimal. Warming itself should not last long before transitioning to a very gradual cooling that maintains an elevated temperature for millennia.

      On the other hand, if all anthropogenic emissions cease, we do have reason to expect a persistent increase in temperature for a while due to the reduction in cooling aerosols – see Climate Change Commitment 2.

      • Alexander Harvey

        Fred,

        I think that I am fairly consistent in my saying that if we can do little else we should do something about the risk of an aerosol overhang.

        If we face the bizarre prospect of having to use renewable energy to pump sulphates up chimneys then that needs fixing.

        Alex

      • Fred,
        You are looking at the temperature response with that link. That is not the same as the CO2 response. The CO2 response is very latent and has an inertia against change, just check against Archer’s papers. The link you provided also mentions Wigley, and his CO2 curves are shown here:
        http://www.globalwarmingart.com/wiki/File:Carbon_Stabilization_Scenarios_png
        You can definitely see the CO2 concentration continues to increase even if the fossil fuel emissions are cut back. If the temperature is not as sensitive to CO2 then that will not show as strong a latency.

        I have done the calculations myself, and this is what I get if I keep emissions constant.
        http://img39.imageshack.us/img39/9001/co2dispersiongrowth.gif
        The chart also shows first-order kinetics for CO2 sequestering, demonstrating the difference between classical exponential decay and a fat-tail response. Wigley shows pretty much the same thing but he does not have a constant emission scenario.

    • It’s important to distinguish between zero emissions and stabilized emissions.

    • Webby,

      “The result will show this peculiar lag that will generate a continual increase in CO2 concentration, long after the forcing function is removed. That is what has everyone spooked — even if we can immediately remove CO2 emissions, the CO2 will continue to increase for years.”

      ghosts always spook superstitious people. Where is the empirical evidence to suggest that CO2 would have a fat tail impulse response? Pekka didn’t even pull citations to support the Revelle Buffer Myth!!

      • Where is your evidence that it is thin-tailed?

        Another gas that is showing a hockey stick rise, Methane, is thin-tailed because it is exothermic and will decompose in a few years; 2 to 10 years is the quoted number I see.
        In comparison, CO2 is relatively inert and is endothermic in breaking down, so it needs special pathways to sequester out of the system. The skeptics quote CO2 residence times also at 2 to 10 years, same as Methane.

        My mind sees an inconsistency here. I would expect CO2 to have a much higher residence time than Methane. Perhaps that CO2 might be closer to the residence time of a relatively inert gas like Nitrous Oxide (N2O) which is quoted anywhere from 5 to 200 years. That will make it a fat-tail because of the large uncertainty in the mean.

        I would guess that the empirical evidence comes from historical forcing functions, such as volcanic events, that generated large impulses of CO2 into the atmosphere. Sample records would show a slow decline over time. I don’t know of any citations off-hand though. So shoot me.

      • Webby,

        I make no claim. YOU claim it is fat tailed. Let’s have something other than arm waving.

      • Chief Hydrologist

        kuhncat,

        This fellow is nothing but arm waving. He assumes a function – applies it in his imagination – and lo and behold there it is exactly as predicted with a fat head – I mean fat tail.

        He guesses that we can find out from volcanic emissions. I think he is a candidate for being thrown in the volcano.

        Cheers

      • CHIEF HYDROLOGIST said this:

        I think he is a candidate for being thrown in the volcano.

        When the death threats start, I know I am touching some nerves.

      • Not really – it is an in joke about not throwing virgins into the volcano to placate the climate gods. Useless gits however qualify.

        You flatter yourself that I would give a rat’s arse about any of your pointless and distracting comments. My only concern is whether I am getting too bored to continue with this nonsense. Well I certainly am – but it is a question of whether the field should be left to a few noxious individuals intent on playing a spoiling role.

        Just now the Numbnut gang is dominating the threads with almost 100% of the recent comments between them. Most of it is simply nonsense and insults. I don’t know what the solution is – but it is getting uncomfortable scrolling through the large number of rude and abusive posts.

      • I make no claim. YOU claim it is fat tailed. Let’s have something other than arm waving.

        There are two options, thin-tailed and fat-tailed. You asked me to prove that something is fat-tailed, while the thin-tailed advocates haven’t demonstrated any agreement with data. It is all indirect hand-waving evidence by the thin-tail advocates.

      • No – the natural variability is big enough to swallow anthropogenic emissions tail and all.

      • Ah, so now you believe in natural variability.
        So, say you have a single CO2 molecule in the atmosphere. The thin-tail theory is that this molecule only has a 2% chance of still being in the atmosphere after 20 years (given a 5 year residence time). In other words, it is 98% likely to be sequestered out.
        The fat-tail version is that it will have a 50% chance that it will be removed after 20 years. But in the first few years the fat-tail distribution could have a faster apparent sequestering rate.
        It is an interesting premise that the CO2 could bounce around the atmosphere, occasionally reaching the surface resulting in a wide dispersion and natural variability in residence times.
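
        The 2% and 50% figures can be checked directly. A quick sketch follows; the fat-tailed survival curve is an illustrative hyperbolic form chosen to have a 20-year median removal time, not a measured distribution.

```python
import math

tau = 5.0   # nominal residence time (years)
t = 20.0    # elapsed time (years)

# Thin tail: classic first-order (exponential) survival probability
p_thin = math.exp(-t / tau)          # ~0.018, i.e. ~2% still airborne

# Fat tail: illustrative hyperbolic survival with a 20-year median
p_fat = 1.0 / (1.0 + t / 20.0)       # 0.5, i.e. 50% still airborne

print(f"thin-tail survival after 20 yr: {p_thin:.3f}")
print(f"fat-tail survival after 20 yr:  {p_fat:.3f}")
```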

      • Chief Hydrologist

        ‘And at some point I didn’t believe in natural variability?’ You are the one who assumes a steady state – it is not true even remotely in biological or other complex systems. You follow up with made-up numbers.

        http://www.nature.com/embor/journal/v9/n1/full/7401147.html

        ‘Feldman remembers watching the first dramatic example of SeaWiFS ability to capture this unfold. The satellite reached orbit and starting collecting data during the middle of the 1997-98 El Niño. An El Niño typically suppresses nutrients in the surface waters, critical for phytoplankton growth and keeps the ocean surface in the equatorial Pacific relatively barren.

        Then in the spring of 1998, as the El Niño began to fade and trade winds picked up, the equatorial Pacific Ocean bloomed with life, changing “from a desert to a rain forest,” in Feldman’s words, in a matter of weeks. “Thanks to SeaWiFS, we got to watch it happen,” he said. “It was absolutely amazing — a plankton bloom that literally spanned half the globe.”‘ http://www.sciencedaily.com/releases/2011/04/110404131127.htm

        You need to understand some science – and not just make it up as you go along.

        You need to understand some science – and not just make it up as you go along.

        I am learning more as I go along, that’s for sure.

        In my comment I was just talking about the unlikelihood of a specific CO2 molecule being conclusively removed from the atmosphere after some lengthy time. I have an alternative derivation that only incorporates a statistical mechanics POV (which I believe is real science), read this
        comment in a thread below.

      • Webby,

        again you make an unsupported statement. That it has to be either fat or thin. No support, just assertion. Bye.

    • Fanney, please read mine, Joel’s, Fred’s, and Nick Stokes comments above. This isn’t even close to getting a cigar.

      • There he goes on about that tobacco industry again. :)

        The answer is that they used 100 years or more to confuse and mislead people. Now they are trying to back track.

      • Chris,
        I think part of the problem with the skeptics is that they only use the tools in their toolbox. So they immediately think response times have to be exponentially damped, and so look at the initial decay, plot that on semi-log paper and come up with residence times on the order of a few to 10 years. The reality as you and others indicate is that the fat-tails completely screw up the classic extrapolation. It’s that guy Segalstad who originally plotted these short residence times (here http://www.co2web.info/ESEF3VO2.htm) and really the embarrassment is on him and the scientists that he listed.
        The concept of fat-tail statistics is not something that you immediately pick up on in school and from shrink-wrap tools, but it comes with experience, and from watching how disorder and randomness plays out in the real world.

    • It appears this C3 link is also confused about the so-called missing carbon emissions and where that carbon has gone. I did my own convolution-based modeling while carefully accounting for the actual fossil-fuel carbon emissions, and discovered that very little CO2 has gone missing since the industrial age started. To get a really good fit, I did have to raise the CO2 baseline to 294, but that may actually be the correct value.
      The writeup originally appeared on my blog:
      http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html

      Could it be that the missing carbon is caused by the assumption of an incorrect residence time on their part? With a long residence time, the impulse response will store much of the CO2 in the atmosphere, which is only slowly sequestered over time.

  19. Essenhigh’s abstract is from: Robert H. Essenhigh, “Potential Dependence of Global Warming on the Residence Time (RT) in the Atmosphere of Anthropogenically Sourced Carbon Dioxide,” Energy & Fuels, 2009, 23 (5), pp 2773–2784, DOI: 10.1021/ef800581r, April 1, 2009.

    using the combustion/chemical-engineering perfectly stirred reactor (PSR) mixing structure or 0D box for the model basis, . . . With the short (5−15 year) RT results shown to be in quasi-equilibrium, this then supports the (independently based) conclusion that the long-term (100 year) rising atmospheric CO2 concentration is not from anthropogenic sources but, in accordance with conclusions from other studies, is most likely the outcome of the rising atmospheric temperature, which is due to other natural factors.

    Such quantitative combustion/chemical engineering approaches including ALL the sources and sinks are essential to get a grasp of the overall fluctuations and consequent net differences.
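
    Much of the disagreement over “lifetime” in this thread comes from two distinct quantities. A back-of-the-envelope sketch with round ballpark stocks and fluxes (illustrative values only, not authoritative data) shows why a ~5-year figure and a much longer figure can both be quoted:

```python
# Ballpark figures (GtC and GtC/yr), for illustration only
stock = 750.0        # CO2 carbon in the atmosphere
gross_flux = 150.0   # gross annual exchange with ocean + land biosphere
excess = 200.0       # anthropogenic excess above preindustrial
net_uptake = 4.0     # net annual sink drawing down that excess

# Turnover (residence) time: how long an individual molecule stays airborne
turnover = stock / gross_flux        # ~5 years -- the PSR-style number

# Adjustment time: how fast the *net* excess concentration decays; it
# lengthens further as the fast sinks saturate (the fat tail)
adjustment = excess / net_uptake     # ~50 years as an initial e-folding scale

print(f"turnover time:   {turnover:.0f} yr")
print(f"adjustment time: {adjustment:.0f} yr")
```

    The PSR-style turnover time answers how long an individual molecule stays airborne; the adjustment time answers how long an excess concentration persists. The two differ because most of the gross exchange is a two-way swap that removes no net carbon.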

    Essenhigh references Tom Segalstad http://www.co2web.info/
    Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the “Greenhouse Effect Global Warming” dogma.

    The apparent annual atmospheric CO2 level increase, postulated to be anthropogenic, would constitute only some 0.2% of the total annual amount of CO2 exchanged naturally between the atmosphere and the ocean plus other natural sources and sinks. It is more probable that such a small ripple in the annual natural flow of CO2 would be caused by natural fluctuations of geophysical processes.

    See also Tom V. Segalstad: Correct Timing is Everything – Also for CO2 in the Air
    Referencing Essenhigh see:
    Ryunosuke Kikuchi, External Forces Acting on the Earth’s Climate: An Approach to Understanding the Complexity of Climate Change, Energy & Environment, Volume 21, Number 8 / December 2010

    The Intergovernmental Panel on Climate Change defines lifetime for CO2 as the time required for the atmosphere to adjust to a future equilibrium state, and it gives a wide range of 5-200 years; however, a number of published data show a short lifetime of 5-15 years. This implies that anthropogenic emissions of CO2 are sequestrated more easily than expected,

    Alan Carlin, A Multidisciplinary, Science-Based Approach to the Economics of Climate Change Int. J. Environ. Res. Public Health 2011, 8, 985-1031; doi:10.3390/ijerph8040985
    Carlin summarizes Essenhigh & Segalstad etc.

    SOURCES AND SINKS OF CARBON DIOXIDE by Tom Quirk, Icecap.us

    The results suggest that El Nino and the Southern Oscillation events produce major changes in the carbon isotope ratio in the atmosphere. This does not favour the continuous increase of CO2 from the use of fossil fuels as the source of isotope ratio changes. The constancy of seasonal variations in CO2 and the lack of time delays between the hemispheres suggest that fossil fuel derived CO2 is almost totally absorbed locally in the year it is emitted.

    Fred Haynie posts detailed models of CO2 driven by natural causes, especially polar fluctuations, not anthropogenic. His analysis of the different shapes between Arctic, tropics and Antarctic is thought provoking as the primary CO2 drivers. http://www.kidswincom.net/climate.pdf

  20. Isn’t CO2 residence time a property of the system rather than an inherent property of CO2 molecules? If so, then it will vary and can be made to vary.

  21. Chief Hydrologist

    It is a simple stocks and flows problem – one that could in principle be modelled with commercial software such as STELLA.

    The sources are:

    ‘80.4 GtC by soil respiration and fermentation (Raich et al., 2002)
    38 GtC and rising by 0.5 GtC per annum by cumulative photosynthesis deficit (Casey, 2008)
    by post-clearance deflation (See Eswaran, 1993)
    7.8 GtC (IPCC, 2007 – Needs peer reviewed reference)
    2.3 GtC by process of deforestation (IPCC, 2007; Melillo et al., 1996; Haughton & Hackler, 2002)
    0.03 GtC? by Volcanoes
    by Tectonic rifts
    by multi-cellular Animal Respiration
    by multi-cellular Plant Respiration

    The sinks are:

    120 GtC by Photosynthesis (Bowes, 1991)
    By Ocean Carbonate Buffer

    Source: Wikipedia – it seems a reasonable list

    I added multi-cellular to distinguish between complex and simple organisms. The animal/plant distinction applies to both complex and simple organisms, as it is based on the distinction between trophic (food) sources: food from photosynthesis (autotrophs) or from eating other organisms (heterotrophs).

    You would have to make some fairly heroic assumptions about how these things change with time, temperature and silicate weathering. One thing is for sure – this is not a problem of average decay rates or long tailed statistics. It is – as I said – a problem of stocks and flows.

    Consider the negative feedback of carbonic acid in rainwater – it increases silicate weathering and therefore carbonate buffering in the oceans, decreasing atmospheric CO2 concentration and increasing deep sequestration at the same time, which in turn reduces carbonic acid in rainwater, etc. Hydrogeologically speaking, the time scale could be significant.

    There seem to be quite a few back-of-the-envelope calculations blowing through this thread – but I doubt that the quality of the data supports a number, low or high, with the level of certitude shown for the insubstantial fluff seen in this post.

    As far as I am concerned this is more angels on pinheads discourse by the usual suspects. Perhaps that should be pinheads on angels – as I obviously have yet to renounce my general distemper at these proceedings.

    • Quantum Gravity Treatment of the Angel Density Problem

      http://improbable.com/airchives/paperair/volume7/v7i3/angels-7-3.htm

      • Chief Hydrologist

        ‘The only reason angels can fly is that they take themselves so lightly’ – so I would dispute the mass-of-angels assumption. But then I guess I’m being generally disputative.

    • STELLA can’t solve this problem because it doesn’t allow fat-tailed impulse responses and no way to add dispersion.

      One thing is for sure – this is not a problem of average decay rates or long tailed statistics. It is – as I said – a problem of stocks and flows.

      That is essentially what is known as compartment modeling. The two usual approximations in compartment modeling are either to assume a constant flow or to assume a flow proportional to the amount remaining (the stock). If CO2 is modeled as being fairly inert to sequestering, it ends up showing fat tails, which correspond to neither constant nor proportional flow. If users of STELLA could dial in one of these lengthy impulse responses and then run a simulation, they would likely see a non-converging solution. Those same users would probably scream at the results and then toss STELLA in the garbage, not realizing that is the correct solution.
      The point is that you can’t blindly use these commercial tools unless you have a good understanding of the underlying physics model. They will blithely sweep all the interesting behavior under the rug.
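
      The distinction between constant flow, proportional flow, and a fat-tail response can be made concrete with a minimal two-box stock-and-flow sketch (illustrative rate constants, not calibrated to the real carbon cycle). With fixed proportional flows the excess relaxes as a single exponential toward an equilibrium partition; producing a fat tail requires the effective rate constants themselves to vary (dispersion):

```python
# Two-box stock-and-flow model: atmosphere <-> surface reservoir.
# Rate constants are illustrative only.
k_ao, k_oa = 0.1, 0.02      # /yr, flows proportional to the source stock
atm, res = 100.0, 0.0       # 100-unit pulse injected into the atmosphere
dt = 0.1                    # years per Euler step
for _ in range(int(200 / dt)):          # integrate 200 years
    flow = (k_ao * atm - k_oa * res) * dt
    atm -= flow
    res += flow

# With fixed proportional flows the system settles to the partition
# k_oa / (k_ao + k_oa) of the total -- a single-exponential approach
# to equilibrium in this closed two-box world, with no fat tail.
print(f"atmospheric excess after 200 yr: {atm:.1f}")
```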

      • Chief Hydrologist

        No – you would need to specify changes in flux with temperature and silicate weathering as I said. To do that – you would have to know what the rate of change of these variables was over time – so we are not talking physics at all but biology, hydrogeology, chemistry, geology etc – those processes are determined outside of a simple stock and flow model and only a simple rate function, look up table, spreadsheet, whatever is plugged into a stock and flow model such as STELLA.

        The actual software is irrelevant – and I mention STELLA only as an example of the type of model I am thinking of – one that has stocks (of CO2 for instance) and flows (CO2 for instance) on some time increment.

        What is specified is the flux based on the physical, chemical and biological constraints. The answer is the changes in the stores – atmosphere, terrestrial vegetation, ocean – over time. These can’t diverge from anything because they are based on physical and scientific realities and not just numbers pulled out of your arse. If the results were unrealistic – you would check assumptions and data.

        The point is that you are thinking of this as a radiation decay type problem – and it clearly is not. It is a stock and flow problem – in which there are compartments and flows between them. You need to think simply about the problem – what are the sources and sinks and what influences the flow between them. This shows what data is required (and the deficit of data) to make good estimates of rates of flow and therefore changes in stocks.

        All of the various processes could be plugged into a more complex model of course – but modelling will not help if the data is incomplete or wrong. We would still need to plug in a rate constant, look up table, whatever – representing actual real world processes and not just the product of physicists pulling numbers out of their fundament.

      • Dear Chief Hydrologist,
        From your title, you must know what breakthrough curves are. You also must have run across the idea of dispersive transport in porous media. The point is that material transport is often very disordered in its natural state and that you really can use some clever stochastic math to understand how long it takes material to get from point A to point B along a tortuous path. Do a literature search on this topic and you will find that the data shows that it’s a fat-tail effect, and the tail is fatter the more disordered the media (underground) or pathway (i.e. runoff) is. Bear in mind that you will find a lot of the civil engineers and geologists studying this will look like they are scratching their butts trying to figure out what’s happening. But then again you have to remember that they are civil engineers and geologists after all :)=

      • Chief Hydrologist

        My title derives from Cecil (he spent four years in clown school – I’ll thank you not to refer to Princeton like that) Terwilliger, Springfield’s Chief Hydrological and Hydraulical Engineer.

        But I am trained both in hydrological engineering and environmental science and have decades of experience running computer models of various types and in analysis of complex real world environmental problems. People find it funny – but I started programming with punch cards and computers that filled a room.

        ‘Breakthrough Curve: A plot of column effluent concentration over time. In the field, monitoring a well produces a breakthrough curve for a column from a source to the well screen.’ It generally has a Gaussian normal distribution. They can be used for instance in tracing pollution sources in groundwater. Groundwater movement is commonly modelled using partial differential equations that conserve mass across finite grids.

        This has no application to the problem at all – there is not even a fat tail in any of that, which is commonly understood to involve a preponderance of rare and extreme events. Rainfall distribution has, for instance, a skewed power-series distribution – commonly a log-Pearson.

        Essentially you are just using names for things that you seem hardly to have any understanding of, and suggesting that some stochastic maths (mathematicians pulling numbers out of their arses) can solve the problem of CO2 residence times without any reference to scientific data on how these physical Earth systems actually work.

        I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.

        I assume it has worked elsewhere for you – but I assure you that you are playing with the big boys now.

      • ‘Breakthrough Curve: A plot of column effluent concentration over time. In the field, monitoring a well produces a breakthrough curve for a column from a source to the well screen.’ It generally has a Gaussian normal distribution.

        Wrong. They rarely have a Gaussian profile and almost always have a significantly asymmetric tail that drops off slowly with time. Instead of going to the Yahoo Answers response, you might want to dig a bit deeper.

        I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.

        I think you are confusing me with Claes or Oliver or some other crackpot that comments on this blog.

        Essentially you are just using names for things that you seem hardly to have any understanding of, and suggesting that some stochastic maths (mathematicians pulling numbers out of their arses) can solve the problem of CO2 residence times without any reference to scientific data on how these physical Earth systems actually work.

        Yes, identifying a crackpot is in the eye of the beholder. Lots of us are in this together and we get insight from each other and especially from those who have a slightly different perspective. The trick is to figure out who the crackpots are and who have genuinely unique insight.

        I think I have your measure now – you are an uncommon idiot hoping to bamboozle your way through a technical discussion by using terms you don’t really understand in the hopes of confusing the picture in favour of your tribalistic leanings.

        The picture is confusing and can continue to get confusing until it starts to clear up. Of course tribalism will work its course as we gravitate toward supporting other commenters who are providing insight. That is the way that citizen-based science works.

        I assume it has worked elsewhere for you – but I assure you that you are playing with the big boys now.

        Yes, indeed there are quite a few commenters who try to invoke flowery and ornate dialog as if this were some sort of Shakespearean literary outpost, but that is just fluff, and in the end we are just trying to hammer home some logic and pragmatism. I am a fan of Jaynes, who has justifiably said that “probability theory is the logic of science”.
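
        Whether breakthrough curves tail off symmetrically is easy to probe with a toy model. The sketch below assumes a lognormal spread of path velocities (an assumption for illustration, not field data); the resulting arrival-time distribution is strongly right-skewed – a slowly decaying late-time tail rather than a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10.0                                   # travel distance, arbitrary units
# Spread of velocities across tortuous paths (assumed lognormal)
v = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
arrivals = L / v                           # arrival time along each path

median_t = np.median(arrivals)
mean_t = arrivals.mean()
print(f"median arrival: {median_t:.1f}   mean arrival: {mean_t:.1f}")
# mean well above median: right-skewed, with a fat late-time tail
```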

      • Chief fancies himself quite a poet.

      • As proven by comments on a blog.

        Both when he is and when he isn’t insulting people and discussing his TV viewing habits, things he does with his laptop, and how he cleans it up afterwards (no, really).

        Chief Hydrologist | August 22, 2011 at 1:53 am |
        Mike

        I feel qualified to answer this – I saw a bit of a documentary last night in between bits of CSI:NY and doing unspeakable things with my laptop.

        […]

        Hope this helps – and if you have any other questions I can clean up my laptop and see whatever other ‘documentaries’ are available.

        When I’m not being a comedian. Here’s the whole post – in which I am making fun of myself.

        http://judithcurry.com/2011/08/19/week-in-review-81911/#comment-103189

      • Good bye Joshua – it has been a distinctly creepy experience engaging with you.

      • Chief – don’t forget to take your ball with you.

        If you’re going to dish it out, Chief, you should be able to take it.

        Anytime you want to engage in mutually respectful posts, I’m more than game, but I’m afraid that would require you to leave your “pissant leftist”-type insults out of the discussion. I won’t take offense if you tee off on “numbnut,” and you don’t take offense when I laugh at Bruce.

        Anytime you’re ready. As much as I enjoy trading snark, I enjoy trading respectful dialogue more.

        Now let me think – what is the best response to the Numbnut gang of pissant progressives? The best they can organise is an electrical engineer who cannot put an idea together on physical oceanic and atmospheric systems without numbskull ideas about stochasticity, breakthrough curves, groundwater diffusion and ‘maximum entropy’. The rest of the poor lot cannot string together any idea of any substance at all. Incoherency is us – but feel free to drop in with a distracting and pointless troll any time at all.

        ‘Anytime you want to engage in mutually respectful posts, I’m more than game, but I’m afraid that would require you to leave your “pissant leftist”-type insults out of the discussion. I won’t take offense if you tee off on “numbnut,” and you don’t take offense when I laugh at Bruce.’

        Feel free to take offense whenever you like – but there is no licence to laugh at anyone. Perhaps my first instinct was right – it is wrong to leave natural born bullies to prevail.

      • Well – that didn’t last long did it, Chief.

        Like putty in my hands.

        The best they can organise is an electrical engineer who cannot put an idea together on physical oceanic and atmospheric systems without numbskull ideas about stochasticity, breakthrough curves, groundwater diffusion and ‘maximum entropy’.

        And your knee-jerk response to everything is chaotic systems? For making progress, that is as fine a strategy as punting on first down.

      • WebHubTelescope, Do you ever wonder, if the progressives will even get around to building a ‘Strategic Hamlet’.

      • It is a bell curve with an exponential rise of a solute in soil water and an exponential decrease – and still irrelevant to carbon dioxide.

      • The mathematics of dispersion still applies. That approach to math modeling is not taught in schools.

      • Chief Hydrologist

        What – precisely – do you mean by dispersion? Plume dispersion is commonly modelled by the Reynolds-averaged Navier-Stokes non-linear partial differential equations. That is taught in engineering school – and is the standard method for floods, storm surge, tsunami, pollution plumes and groundwater movement. They are used in atmospheric and oceanic simulations as well.

        Non-linearities might emerge – but in that case you adjust the time step for the numerical solution because an unstable solution for these things in the real world is useless.

        The Earth system as a whole has multiple negative and positive feedbacks with little-understood thresholds and is in fact non-linear at a number of time and spatial scales. The correct model here is forcing with non-linear responses at critical thresholds – just like earthquakes.

        Now we can define a power function for just about anything – which I assume is what you are talking about – but we do need real world data to make it meaningful and not just some assumed function.

        I can for instance fit a log-Pearson type 3 power function to 100 years of hydrological data and use it to calculate a 1 in 10,000 year storm. How accurate that is is a matter of conjecture – but it gives at least an agreed starting point which is all that matters in engineering public safety. It is a number that seems adequate from the experience and judgement of myself and my peers.

        I have my doubts as extreme events – ‘dragon-kings’ – at times of chaotic bifurcation are unlikely to fit the power function distribution. I very much doubt – 99% – that there is enough data to adequately define a power function for CO2 residence in the atmosphere. Basic science is required – rather than as I put in my crude Australian way – pulling a number out of your arse.

      • Take the ordinary Navier-Stokes equation and vary the values of the diffusion coefficients and convection/drift terms. That produces the degree of dispersion that I am talking about, and it doesn’t have to show turbulent behavior.

        The ordinary Navier-Stokes equation is also known as the Fokker-Planck equation. Solving these for disordered systems is a hobby of mine and it does show some very interesting agreements with experimental evidence. As far as I know, no one is working at it from the same angle I am.

        Basic science is required – rather than as I put in my crude Australian way – pulling a number out of your arse.

        Why do you continue to think that I am not applying a rigorous scientific analysis? All you have to do is take a gander at the link in my comment handle. Again, I thought this blog is partly about coming up with some potentially concise and neat ways of thinking about environmental phenomena.

      • Chief Hydrologist

        Archer et al keeps being raised – but this is a crude model study substituting crude values for parameters for which data is not merely inadequate but entirely lacking.

        I don’t run hydrological models without calibration against real world data. There is a diversity of results amongst the Archer models as one would expect – but is this reflective of the actual range of variation in natural systems? How do we know what natural variability is without good long term data – or indeed for the most part without any data at all? How could you calibrate these models at all except through crude ensemble methods that might owe more to groupthink than anything real? How would you know what the results of sensitivity analysis might be? It is all hopelessly stupid.

        Without a deep discussion of the data and the limitations thereof – and of the assumptions made and the limits of error – and of the limitations of the models – there is every reason for scepticism. Models are not science and you seem to assume that science can proceed by modelling with broad assumptions and without data at all. It is not true – and is profoundly unscientific.

      • Without a deep discussion of the data and the limitations thereof – and of the assumptions made and the limits of error – and of the limitations of the models – there is every reason for scepticism. Models are not science and you seem to assume that science can proceed by modelling with broad assumptions and without data at all. It is not true – and is profoundly unscientific.

        Perhaps this intersects with the decision making under ignorance post, but assuming disorder is always a conservative approach. What is completely unscientific are those analysts who guess that the residence time is 2 to 10 years by assuming that it follows an exponential decline.
        Look, I laid out a rigorous propagation of uncertainty analysis downthread that can explain significant dispersion in the CO2 residence times. Do those guys that generate the 2 to 10 year values do this? No freaking way!

      • Chief Hydrologist

        I should really say that models are analytical science at all – but are examples of synthesis and their proper use can only be understood in that framework.

        Where there is a lack of an analytical underpinning – they are almost guaranteed to be wrong.

        ‘Assuming disorder is always a conservative approach.’

        A ‘rigorous propagation of uncertainty analysis downthread that can explain significant dispersion in the CO2 residence times.’

        You discussed a simple function of some sort – I can’t imagine what you mean by disorder. I don’t know what you could possibly mean by a ‘rigorous propagation’ of uncertainty. I can only imagine from some of your other posts that it is not science at all that is under discussion – but some fervid incarnation of the climate wars. Oh – I forgot – I should try to be less literate.

        ‘Although it has failed to produce its intended impact nevertheless the Kyoto Protocol has performed an important role. That role has been allegorical. Kyoto has permitted different groups to tell different stories about themselves to themselves and to others, often in superficially scientific language. But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.’
        http://eprints.lse.ac.uk/24569/

      • Chief Hydrologist

        er… not analytical science at all…

      • You discussed a simple function of some sort – I can’t imagine what you mean by disorder.

        Entropy is generally considered as a measure of the amount of disorder in a system. A well-ordered subsystem such as a crystal lattice will have low entropy because of the predictable arrangement of the atoms. It becomes disordered if that lattice melts and the information entropy increases. There are accepted ways of characterizing this disorder — I don’t understand why you have difficulty with accepting this concept. If you mix a dye in a bucket of water, the disorder increases until it completely disperses through the water. This is modeled by what is called a uniform probability density function. Do you have difficulty even accepting this borderline pedantic example? Disorder can occur in spatial or in temporal terms, and even if you don’t accept it mathematically, there are certainly intuitive notions that you can apply.
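
        The dye-in-a-bucket example above can be sketched numerically. This is a toy 1-D random walk, not a rigorous model; all parameters (walker count, step count, bucket width) are illustrative:

```python
import random

# Toy sketch of the dye example: walkers start concentrated in the middle
# of a 1-D "bucket" and disperse toward a roughly uniform distribution,
# i.e. toward maximum disorder.
random.seed(1)
N, steps, width = 2000, 1500, 100
positions = [width // 2] * N              # all "dye" starts in the middle

for _ in range(steps):
    # each walker takes a +/-1 step, clamped to the bucket walls
    positions = [min(max(p + random.choice((-1, 1)), 0), width)
                 for p in positions]

# crude dispersion measure: fraction of walkers left in the middle fifth
middle = sum(1 for p in positions if 40 <= p <= 60) / N
print(f"fraction still in middle fifth: {middle:.2f}")
```

        Starting at 1.0 (everything in the middle), the middle-fifth fraction falls toward the uniform-distribution value as the walkers spread.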

      • Yes I know what entropy is – but fail to see any particular correspondence between a bucket of water and Earth systems. The dye is mixed by simple Brownian motion – the simplest of the stochastic processes. But asserting an application of this to complex systems – and one significantly mediated by biology and therefore negative entropy – is crazy BS.

        You are either deliberately misleading or obsessively unbalanced – either way I don’t give a rat’s arse.

      • There is no such thing as negative entropy. Entropy is defined by the negative log of a probability. Since probabilities are strictly between 0 and 1, entropy can never go negative.

      • Entropy is defined by energy flow in the 2nd law of thermodynamics. Maximum entropy is the state where every point has the same potential energy and thus there is no more energy flux. The probability of this state occurring is unity. Alternatively there is a 100% chance of entropy happening on average outside of a very specific circumstance.

        The very special circumstance is life – life is self organising. The act of getting and eating my breakfast – tea and crumpets – for instance is an example of negative entropy. It is an example of energy flowing into a more highly organised system.

        I expend energy to gain more energy (negative entropy – or negentropy) and avoid dying (maximum entropy).

      • OK, the negative entropy is just a relative or differential entropy where a negative sign is slapped in front of the actual positive entropy.

      • you wrote: People find it funny – but I started programming with punch cards and computers that filled a room.

        I don’t find that funny. There are a lot of us out here that did that.

      • HEH,

        you were the smart ones. I just wired the sorters and collators to sort and rearrange your cards and clean up the mess when you dropped a box or two!! 8>)

      • Chief,
        Please take a look at limnology, and some recent papers on both the number of freshwater bodies and their part in the carbon cycle.

      • Hi Hunter,

        Limnology was a favourite topic many years ago. I love water. I love being on it, in it and under it. Biogeochemical cycling is my speciality, and the carbon cycle – because of its importance in trophic networks and in diverse chemical reactions – is complex.

        I didn’t distinguish between fresh and salt above – and I am quite out of date. I will have a look.

        Cheers

      • Chief,
        You may well find this of great interest then.
        Dr. John Downing was interviewed recently and I was able to hear it.
        He thinks freshwater systems have been neglected in important ways.
        Here is his homepage:
        http://www.public.iastate.edu/~downing/
        He made some very interesting points about recent research results.
        Here is a transcript of the interview:
        http://www.loe.org/shows/segments.html?programID=11-P13-00033&segmentID=2
        I am not able to find the specific paper he is referring to. I think it might have some important implications, however.

    • David L. Hagen

      Chief Hydrologist & Webhubtelescope
      Please drop swords and explore how to model/distinguish causes.
      Chief – please review Webhubtelescope’s models, especially peak oil. I think I know enough math that those models look significant.

      Webhubtelescope.
      Please explain further on fat tails and what might drive them vis-à-vis CO2 with dominant natural sinks/sources >> anthropogenic causes.

      What if we have nonlinear bio feedback with increasing CO2?
      e.g. some trees/plants show very rapid increase in growth rates with increased CO2. How is that modeled?

      What if anthropogenic CO2 is rapidly absorbed in plants?
      What if most CO2 comes from temperature changes?
      e.g. see arctic pulsing.
      IF clouds dominate temperature and cosmic rays & solar modulation control clouds, then natural causes could dominate temperature changes and thus CO2 from the ocean.

      How would you statistically model, test and distinguish such causes?

      cf Tom Quirk and Fred Haynie and Roy Spencer.

      I see differences in pulsing shapes for Arctic, tropics, Antarctic.
      Different bio response terrestrial No. hemisphere vs So. hemisphere.
      Diff bio responses ocean biomass No. hemisphere vs So.
      Diff fossil emissions No. hemisphere vs So.

      Differences in clouds and cosmic rays/solar during the solar cycle; Forbush events

      (PS Forget 52 pickup. With punch cards, I seem to remember not wanting to play 520 pickup.)

      • David,
        I think those are good directions to push forward in.

        I have a very simple model that works to explain fat-tails in a number of situations. The fat-tail is mainly with respect to some quantity observed over time. The general idea is that the observable changes transiently in response to some stimulus. As a premise, suppose that the response is largely driven by a velocity or rate, r, that follows an exponential decline: F(t | r) = exp(-r*t)
        Next consider that the rate is actually a stochastic variate that shows a large natural deviation. It will follow an exponentially damped PDF if the standard deviation is equal to the mean:
        p(r) = (1/R) * exp(-r/R)
        If we then integrate over the marginal:
        F(t) = ∫ F(t | r) * p(r) * dr
        this results in F(t) = 1/(1 + R*t)
        I consider this a general dispersive result that demonstrates how a measurement which will normally show an exponential decline will switch to a 1/t decline with sufficient disorder. The 1/t dependence is a power-law decline, categorized as a fat-tail because it only shows a median value and no mean value. (This is OK because the physical variate, the rate, r, has a well characterized mean and the higher order moments are also bound. The lack of moments is a common characteristic of reciprocated variates and what causes endless consternation to the statisticians. NN Taleb dances around this topic in his book The Black Swan).

        If you look at the IPCC residence time impulse response curves or Archer’s curves, they are even more strongly fat-tail, fitting to a curve that looks more like 1/(1+k*√t) — which is the reciprocal of the square-root of time. My feeling is that the square-root of time dependence comes from a diffusional growth law mixed in with the dispersion. Fickian growth laws are not linear but grow slowly as a square root of time due to diffusion. It also could be due to a chemical rate equation that proceeds more slowly than first-order kinetics. In either case, the value of k generates the dispersion that smears the dynamics.

        Now what the short-residence-time skeptics gloss over is the quick transient that one will see if this curve is plotted. Yes it does drop down relatively quickly, but the tail is significant and it won’t fall off anywhere near as fast as an exponential.
        Comparisons to the Bern SAR response curves

        So that is an applied mathematics argument as to how a behavior can change from a thin-tail exponential decline to a fat-tail power-law decline through the introduction of disorder or randomness. What this is not is any kind of critical phenomenon that most scientists ascribe to power-law behaviors. Research scientists tend to drool over critical phenomena and become disappointed when it gets explained to them that garden-variety disorder can lead to the same result.
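
        To make the thin-tail versus fat-tail contrast concrete, here is a small sketch comparing the three decline shapes mentioned above. The parameters are chosen only so that every curve passes through 0.5 at t = 10; they are not fitted to any data:

```python
import math

TAU = 10.0 / math.log(2)          # exponential: 0.5 at t = 10
R = 0.1                           # hyperbolic:  1/(1+1) = 0.5 at t = 10
K = 1.0 / math.sqrt(10.0)         # sqrt law:    1/(1+1) = 0.5 at t = 10

def exp_decay(t):
    """Thin-tail exponential decline."""
    return math.exp(-t / TAU)

def hyperbolic(t):
    """Dispersive 1/(1+Rt) decline from the marginalization argument."""
    return 1.0 / (1.0 + R * t)

def sqrt_tail(t):
    """Even fatter 1/(1+k*sqrt(t)) decline, the shape the Archer curves resemble."""
    return 1.0 / (1.0 + K * math.sqrt(t))

for t in (1, 10, 100, 1000):
    print(f"t={t:5d}  exp={exp_decay(t):.4f}"
          f"  1/(1+Rt)={hyperbolic(t):.4f}  1/(1+k*sqrt(t))={sqrt_tail(t):.4f}")
```

        All three drop quickly at first, but by t = 1000 the exponential is effectively zero while the two dispersive forms retain a substantial residual – the quick transient followed by a significant tail described above.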

      • Your comments on the almost ubiquitous prevalence of power laws remind me of Benoit Mandelbrot even more than Nassim Taleb. My own thinking accepts that only in part. I agree fully that the Normal distribution with its very thin tails is overused and that tails are often fatter over a wide range of values.

        I’m, however, much more doubtful on the generic value of any specific alternative formula and any specific class of distributions. The empirical evidence is usually reasonably accurate over a narrow range of values, which allows fitting it with many different fat-tailed distributions. Formulations presented by Mandelbrot or by you provide useful parametrizations over ranges where they have been verified empirically, but have little predictive power outside that range.

        As an example, Mandelbrot has looked at several distributions that are known to have a strict extreme limit, finding a power law that “predicts” breaking that limit with non-negligible probability. In such cases we know that the power law will fail when the limit is approached, but we don’t know where the failure starts to be significant. Finally we can ask: do we really have any power law in the actual distribution at all, or is the power law only consistent with data as long as the accuracy requirement is very modest?

        Similarly I don’t have any reason to doubt the practical value of your approaches, but I’m doubtful as soon as the approach is assumed to have a precise theoretical basis. It’s rather a good rule of thumb that fat tails are very common and can be parametrized with some power law over a limited range. There are many reasons for that. Many of the reasons have a nature similar to what you describe, i.e, the overall distribution can be considered as a combination of many different distributions, some of which have very large variances. The combination may be formed in various ways, one of which applies to the persistence of CO2 in atmosphere.

        More specifically, any agreement in far tails with results of Archer is likely to be dependent on how Archer did his work rather than on the real world facts, because the empirical support for any conclusions on the far tails is statistically extremely weak. Relevant empirical data exists only from the very distant past, and the results on far tails are determined by the methods used to extract information from them. A very small change in the methods makes the outcome totally different.

      • Pekka,
        What people miss is that just taking the reciprocal of an underlying stochastic variate is enough to cause a power-law fall off distribution. That is what I tried to show with my short derivation. Mandelbrot would be the last person to want to admit that something this straightforward could explain the ubiquity of fat-tails, as it would tend to marginalize the mystery behind his fractal story (it doesn’t matter anymore since he recently died).

        Like Normal distributions and other well-accepted approaches, these have significant predictive power, but are curiously rarely applied. If you want to find more, you can look up the term ratio distributions in the applied mathematics literature.
        http://en.wikipedia.org/wiki/Ratio_distribution

      • I agree that Mandelbrot has been careful to avoid presenting anything like simple derivations for power law tails, but I don’t think that he has been as careful in commenting on the empirical evidence for the existence of power laws. Many of his examples have been rather poor from the empirical point of view, i.e. the power law is seen over too narrow a range of values and with insufficient accuracy to tell whether it’s really a power law at all or just some other fat tail.

      • Agreed, Mandelbrot and Taleb are horrible at actually trying to match their ideas to any empirical evidence. They are pretty sly at not being caught in a situation of having to defend some assertion (Taleb in particular in his books relies on hand-waving).

        In many cases it just takes time to accumulate enough data to show a power-law dependence. The physicist/applied statistician Cosma Shalizi wrote an article some ten or more years ago claiming that web-link statistics did not show a power-law dependence. Over the ensuing years as more evidence has become available, the actual statistics have converged to a power-law over many orders of magnitude and a very straightforward interpretation (IMO). That said, even if you say that it is power-law over 8 orders of magnitude, the wise-guy will point out that it is not provable because it doesn’t extend to 9 orders…

        Again the subliminal tendency by physicists is to reserve power-laws for critical phenomena and they don’t necessarily want to ascribe it to a pedantic explanation. That is my conspiratorial rationale, otherwise I can’t explain the dismissive attitude that many of them display.

      • Webhubtelescope
        Thanks for clarifying issues.
        What if we have multiple chemical, bio and human emissions?
        e.g. temperature-co2 from arctic pulsations
        Temperature-CO2 from the ocean.
        Ocean algae – about 50% of total biomass net productivity.
        Temperature & soil moisture together for:
        Tropical vegetation & agriculture.
        Temperature vegetation, trees
        Temperature annular agriculture.
        Then weight temperature into No. vs So. by land etc.
        Then anthropogenic CO2.
        The anthropogenic fuel consumption in turn varies by resource/country with growth/depletion curves and economic development. e.g. see
        Höök et al., Descriptive and predictive growth curves in energy system analysis, Natural Resources Research, Volume 20, Issue 2, June 2011, Pages 103-116, http://dx.doi.org/10.1007/s11053-011-9139-z

        There should be some predictability there to fit CO2 and distinguish natural annual cyclic vs natural solar/cosmic cyclic vs anthropogenic.

        However, each would have stochastic disorder components on top of overlapping variable physical causes.
        Would those together give the fat tail distributions you note above?
        What then the impact of the “fatness” on the long term increase/decrease of atmospheric CO2?

      • Would those together give the fat tail distributions you note above?
        What then the impact of the “fatness” on the long term increase/decrease of atmospheric CO2?

        Anything that shows dispersion or significant variation in rates can lead to fat-tails. The fat-tail is in those paths that show a slow uptake in CO2 or a long time to get to the sequestering points.
        The impact of the fatness long-term is that the slow pathways continue to build up over time. That is what the convolution model reveals.
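
        A minimal sketch of that convolution argument, assuming (purely for illustration) a constant unit emission each year and the fat-tail 1/(1 + k√t) impulse response discussed upthread, with an arbitrary k:

```python
import math

k = 0.3                                   # dispersion parameter (illustrative)
years = 200

# impulse response: fraction of one year's emission still airborne t years later
response = [1.0 / (1.0 + k * math.sqrt(t)) for t in range(years)]
emissions = [1.0] * years                 # a unit emission every year

# convolution: excess[t] sums past emissions weighted by how much remains
excess = [sum(emissions[s] * response[t - s] for s in range(t + 1))
          for t in range(years)]

print(f"airborne excess after  10 years: {excess[9]:.1f}")
print(f"airborne excess after 200 years: {excess[199]:.1f}")
```

        Because the response never falls off exponentially, each year's slow-pathway residue stacks on the last, and the airborne excess keeps climbing for the whole simulated period.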

      • Chief Hydrologist

        David,

        Production curves for oil have been discussed since the 1950’s – http://en.wikipedia.org/wiki/Hubbert_curve

        The issue is that it is a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution. For instance – I commonly use a 10% ethanol blend in my Mercedes SLS AMG. It is made from Queensland sugar residue, is cheaper (not subsidised) and works better. The evidence is that substitution is happening as the oil price remains high.

        There are multiple substitutions possible at some price point. For instance we could with cheap enough electrical energy take CO2 from the atmosphere, blend it with hydrogen from water and make a liquid fuel.

        There are no fat tails just a trailing skinny in an imaginary function.

        Your greenhouse carbon example occurs with unlimited nutrients, light and water. That does not happen in the wild. Terrestrial plants are commonly water limited. They respond to elevated carbon in the atmosphere by reducing the number and size of stomata – limiting gas exchange but also water loss. This is not necessarily a hydrologically or ecologically good thing.

        Most of the recent warming happened in ‘climate shifts’ in 1976/77 and 1997/1998. NASA/GISS says that most of the rest was caused by cloud changes – especially in the tropics. Jim Hansen doesn’t believe them – but it fits into a pattern of decadal observations of cloud in the Pacific in particular.

        I’m not sure about cosmic rays and clouds – difficult to distinguish from other solar modulated changes in e.g. the NAO, SAM, ENSO, PDO, QBO, PNA.

        There are as well plenty of cloud nucleation sites over oceans in dimethyl sulphide emissions from phytoplankton – which in turn respond to changes in upwelling of frigid and nutrient rich water at various locations around the globe – but prominently in the eastern Pacific.

        Dropping your punch cards is not a recommended SOP.

        Cheers

      • The issue is that it is a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution.

        No, oil discovery and production is a stocks and flows problem, which is exactly how I laid it out in the analysis I call the oil shock model.

      • Chief
        Re: ” a one dimensional understanding of a multi-dimensional economics problem better understood through the idea of substitution.”

        May I encourage looking at both “multi-Hubbert” curves AND economic substitution.
        Economics: Globally we see long term shifts from wood to coal to oil to gas over centuries to decades.
        Peak response: M. King Hubbert modeled US and world oil production as a logistic curve. This fits pretty well for each geographic area. e.g. see Tad Patzek’s multi-Hubbert curves for US production.

        For such Hubbert analysis, we need to think of a given resource for a given technology within a given economic regime. Then apply multi-Hubbert analysis by summing across multiple regions.
        Constraints on availability raise prices which can push to substitution. Webhubtelescope then adds economic shock impacts.

        With sufficient high demand and prices, that justifies development of alternative resources. However, each region/resource/technology still follows the Hubbert type model pretty well.

        For a good overview, see: Tad Patzek in Peaks Everywhere

        Forecasting World Crude Oil Production Using Multicyclic Hubbert Model
        Ibrahim Sami Nashawi, Adel Malallah and Mohammed Al-Bisharah, Energy & Fuels, 2010, 24 (3), pp 1788–1800

        The analysis of 47 major oil producing countries estimates the world’s ultimate crude oil reserve at 2140 BSTB and the remaining recoverable oil at 1161 BSTB. The world production is estimated to peak in 2014 at a rate of 79 MMSTB/D.

        When Kuwaitis get very similar results, it is time to wake up and take notice.
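
        For readers who want to experiment, here is a minimal logistic (Hubbert) production-curve sketch. The URR and peak year loosely echo the figures quoted above, but the steepness w is an assumed illustrative value, so the computed rates are not those of the Nashawi et al. model:

```python
import math

def hubbert(t, urr=2140.0, t_peak=2014.0, w=0.05):
    """Annual production (BSTB/yr) as the derivative of a logistic cumulative
    curve: production peaks at urr*w/4 in the year t_peak and is symmetric
    about the peak."""
    e = math.exp(-w * (t - t_peak))
    return urr * w * e / (1.0 + e) ** 2

for year in (1970, 1990, 2014, 2040, 2060):
    print(year, round(hubbert(year), 1))
```

        Integrating the curve over all time recovers the URR, which is what makes the logistic form attractive for summing "multi-Hubbert" regional curves.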

    • So why don’t you just run along and do that?

      It is a simple stocks and flows problem – that could in principle be modelled with commercial software such as STELLA.

      You say you can do quite a lot, but you haven’t shown anything you claim to be legitimate.

      Compliments on your irony, though.

      As far as I am concerned this is more angels on pinheads discourse by the usual suspects. Perhaps that should be pinheads …

      Yes, you are.

  22. Hal asks:

    How long does CO2 from fossil fuel burning injected into the atmosphere remain in the atmosphere before it is removed by natural processes?

    This question presupposes that the IPCC “mainstream” view is correct, i.e. that Earth’s natural CO2 cycle is in “equilibrium”, which is perturbed over longer periods only by human CO2 emissions.

    IOW, it presupposes that the recent suggestion by Professor Murry Salby regarding natural causes for changes in atmospheric CO2 levels is false.

    But back to Hal’s question: how long does CO2 “remain in the atmosphere”?

    Tom Segalstad has compiled several independent estimates of the residence time of CO2 in the atmosphere, arriving at an average lifetime of around 5 to 7 years.
    http://folk.uio.no/tomvs/esef/ESEF3VO2.htm

    In addition [to geologic processes] there is a short-term carbon cycle dominated by an exchange of CO2 between the atmosphere and biosphere through photosynthesis, respiration, and putrefaction (decay), and similarly between aqueous CO2 (including its products of hydrolysis and protolysis) and marine organic matter (Walker & Dreyer, 1988).

    Is this “residence time” of CO2 in the atmosphere as reported by Segalstad equal to the “residence time in our climate system”? The mainstream view (IPCC) does not believe so.

    The Lam study cited by Judith states:
    http://www.princeton.edu/~lam/TauL1b.pdf

    There exists no observation data to validate this value. In fact, it is not possible to experimentally measure the value of τL (and the claim of its constancy) unless reliable data taken over many centuries (with constant emission rate) are available.

    τL ≈ 400 years is the consensus value of all the published IPCC models.

    As Willis Eschenbach has pointed out on other threads, model outputs are only as good as the assumptions, which were programmed in.

    Since “there exists no observational data to validate this value”, it is meaningless as empirical scientific evidence.

    But it does show the basis for the IPCC model assumptions.

    IOW, if the actual value turned out to be significantly different from 400 years, then the IPCC model projections for future temperature rise, based on various assumed model scenarios and storylines (assuming these are correct), would be too high or too low.

    The first IPCC assessment report (Houghton et al., 1990) gives an atmospheric CO2 residence time (lifetime) of 50-200 years [as a “rough estimate”]

    Table 1 of the IPCC TAR WG1 report says that the CO2 lifetime in the atmosphere before being removed is somewhere between 5 and 200 years with the footnote:

    No single lifetime can be defined for CO2 because of the different rates of uptake by different removal processes.

    This appears rather “loosey-goosey” to me. Yet the IPCC models have apparently used a “lifetime” of 400 years as the assumed value, as is suggested by the Lam study cited above.

    Zeke Hausfather presented data at a recent Yale Climate Forum, which points to a half-life of CO2 in the climate system of around 80-120 years, with a suggested long tail: i.e. ~80% gone within 300 years and a small residual remaining thousands of years.
    http://www.yaleclimatemediaforum.org/pics/1210_ZHfig5.jpg

    In an earlier exchange, Fred Moolten cited a model-based study by David Archer et al., which points to pretty much the same conclusion as reached by the ZH data, also pointing out that the CO2 lifetime has a long “tail”.
    http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf

    Let’s ignore the long tail for now and let’s assume that the half-life of CO2 in the climate system is 120 years, or at the upper end of Zeke’s curve.

    In actual fact, no one knows what the residence time of CO2 in our climate system really is, because there are no empirical measurements to substantiate this, as Lam has stated.

    Now let’s switch from model estimates to actual physical observations and check out the ZH estimate.

    The first thing we observe is that there is absolutely no statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission (over the years this has varied from 15% to 88%, with the balance “missing”).

    [In fact, there is a much better correlation with the change in average annual global temperature from that of the preceding year than with the human emission, which would suggest that the conclusion reached by Professor Murry Salby (that it is natural and temperature-related) is valid, but let’s ignore that for now.]

    Since the annual human emissions show no correlation with the annual change in atmospheric concentration, we have to look at a longer-term average.

    Over the past 10 years, humans have emitted around 305 GtCO2 from all sources (fossil fuels, cement production, deforestation, etc.).
    [data from various sources: be glad to provide links if anyone is interested]

    The mass of the atmosphere is 5,140,000 Gt.

    So, if we assume that CO2 is a “well mixed GHG” and that the Earth’s entire carbon cycle is in “equilibrium” except for human emissions (as IPCC does), we should have seen an increase in atmospheric CO2 concentration of:

    305 * 1,000,000 / 5,140,000 = 59.3 ppm(mass)
    = 59.3 * (29 / 44) = 39.0 ppmv

    Yet over this same time period we only saw
    389.1 – 370.3 = 18.8 ppmv

    IOW 18.8 / 39.0 = 48% of the CO2 emitted by humans “remained” in the atmosphere, and the remaining 52% (or 2.2 ppmv/year) is “missing”.
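
    The mass balance above is easy to re-run. This sketch simply repeats the arithmetic with the figures quoted in this comment:

```python
# Decadal CO2 mass balance, using the figures quoted above.
M_AIR, M_CO2 = 29.0, 44.0            # approximate molar masses, g/mol

emitted_gt = 305.0                   # human CO2 emissions over ~10 years, Gt
atm_mass_gt = 5_140_000.0            # total mass of the atmosphere, Gt

ppm_mass = emitted_gt * 1e6 / atm_mass_gt      # expected rise, ppm by mass
ppmv = ppm_mass * (M_AIR / M_CO2)              # convert to ppm by volume

observed_rise = 389.1 - 370.3                  # Mauna Loa over the decade, ppmv
airborne_fraction = observed_rise / ppmv       # share that "remained"

print(f"expected rise: {ppmv:.1f} ppmv")
print(f"observed rise: {observed_rise:.1f} ppmv")
print(f"airborne fraction: {airborne_fraction:.0%}")
```

    The ppm(mass) to ppmv conversion multiplies by the ratio of molar masses (29/44) because a lighter gas occupies proportionally more volume per unit mass than the average air molecule.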

    This would suggest that the half-life estimate of Hausfather is correct, and that the “missing CO2” is leaving our climate system. At a half-life of 120 years, this would represent an annual decay rate of 0.58% of the concentration or around 2.2 ppmv, which happens (by coincidence?) to be equal to the amount of “missing” CO2 today (based on the data cited above).
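
    The decay-rate arithmetic can be checked the same way; 385 ppmv is an assumed round figure for the period-average concentration, not a quoted value:

```python
import math

# A 120-year half-life implies an annual removal rate of ln(2)/120.
half_life = 120.0                        # years (Hausfather upper estimate)
decay_rate = math.log(2) / half_life     # fraction removed per year
concentration = 385.0                    # ppmv, assumed period average
annual_removal = decay_rate * concentration

print(f"decay rate: {decay_rate:.2%}/yr -> {annual_removal:.1f} ppmv/yr removed")
```

    This reproduces the ~0.58 %/yr rate and ~2.2 ppmv/yr removal cited above.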

    The same calculation holds for the period from 1958 (when CO2 measurements at Mauna Loa were first recorded) to today, with a calculated 49% of the emitted CO2 “missing”.

    So this raises the basic question, which is as yet unanswered: is the “missing CO2” disappearing from the climate system and, if so, where is it going?

    The question concerning the “missing CO2” remains a mystery if we believe that human CO2 emissions have been the primary cause of increased atmospheric CO2 levels. [Of course, if it turns out that the postulation by Salby is correct, then this is all a rhetorical discussion.]

    But let’s assume that the mainstream view on this is correct.

    Where is the “missing” CO2 going?

    No one really knows because there are no physical measurements enabling the calculation of a real material balance.

    Some hypothesize that the ocean is absorbing most of it and it is being buffered by the carbonate/bicarbonate cycle there.

    Others believe that much of it may be converted by increased photosynthesis, both from terrestrial plants and marine phytoplankton.

    Since photosynthesis absorbs around 15 times as much CO2 as is emitted by humans, one can see that a slight increase in photosynthesis resulting from higher concentrations could well absorb a significant portion of the human emissions.

    On an earlier thread Pekka Pirilä mentioned the exchange rate between the upper ocean and the immense CO2 reservoir of the deep ocean as a possible long-term “hiding place”.

    This reservoir is so vast that the increase from the “missing CO2” would hardly be noticeable, even if human emissions continued over centuries. In addition, there is a finite amount of carbon in all the fossil fuels on Earth; even if these were all burned and the CO2 ended up in the deep ocean, it would barely be noticeable.

    So, in summary, irrespective of the Salby postulation (which, if validated, would make this a rhetorical question), I believe we are unable to answer the question of the “missing CO2” today, despite several alternate hypotheses.

    And as a result, we are unable to answer Hal’s question.

    If anyone here has any hard data to refute this conclusion, I would be delighted to see it.

    Max

    • “Since photosynthesis absorbs around 15 times as much CO2 as is emitted by humans”

      I have not previously seen that figure (my ignorance, I suppose)

      Do we know this for reasonable certainty ?

    • Max, nice post, thanks very much for this. Rob

    • Nicely written argument, but several points need to be made. For a fat-tail response, any notion of a mean needs to be reconsidered. As I indicated in a comment upthread, power-law distributions do not show a mean or, for that matter, any higher-order moments. However, they can mimic exponentials over the initial transient, and a “half-life” number does appear, but it has nothing to do with conventional notions of exponential decline. That is where Segalstad made his mistake.

      Also significant is this point you made:

      The first thing we observe is that there is absolutely no statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission (over the years this has varied from 15% to 88%, with the balance “missing”).

      I would change this to state “The first thing we observe is that there is almost perfect statistical correlation between the annual increase in atmospheric CO2 and the total annual human CO2 emission”.

      If you do the convolution of the actual emissions with a fat-tail impulse response, the agreement is stunning. I have done this myself and don’t expect the skeptics to do the same because they don’t believe in the fat-tail residence time. (I also argued this upthread)
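      For readers who want to try this, the convolution itself is a one-liner; the emissions ramp and the hyperbolic kernel below are purely illustrative stand-ins, not fitted values:

```python
import numpy as np

# Toy convolution of an emissions history with a fat-tail impulse response.
# The emissions ramp and the kernel time scale are hypothetical placeholders.
years = np.arange(1960, 2011)
emissions = 2.5 + 0.08 * (years - 1960)   # assumed GtC/yr ramp, not real data

t = np.arange(len(years), dtype=float)
kernel = 1.0 / (1.0 + t / 30.0)           # hyperbolic (power-law-like) response

# Airborne excess in year n = sum of past emissions weighted by the response.
excess = np.convolve(emissions, kernel)[:len(years)]
```

      Swapping the kernel for a pure exponential exp(-t/tau) and comparing the two convolutions against the Mauna Loa record is exactly the exercise being argued about here.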

      Other than that, good try.

    • Manacker, missing CO2 is your own invented uncertainty. There is no debate in the science community, or even the blogosphere, about missing CO2. Ocean uptake and acidification, of course, account for it. The carbon cycle explains the flows and balances well. It would actually be more surprising if none of the emitted CO2 went into the ocean, because an equilibrium is maintained across the air/water interface, and I don’t think you know about this part, judging from what you wrote.

      • Sorry, JimD, there are lots of hypotheses but no empirical data that verify your statement that the “mystery” of the “missing CO2” has been solved. [If you have such data, please correct me.]

        WebHubTelescope

        I’m less interested in the “fat tail residence time” (a rather hypothetical discussion IMO) than in the current “rate of decay”. If, indeed, the 120-year half-life is correct, as suggested by the data presented by Zeke Hausfather, which I cited, this would point to a theoretical annual reduction equivalent to 0.58% of the concentration, which just happens (by coincidence?) to be the average annual level of “missing” CO2.

        The observation that the amount “remaining” in the atmosphere seems to correlate on an annual basis with the change in global temperature from the previous year (more “remains” in the atmosphere if temperature has warmed, less if it has cooled) suggests that there is a temperature correlation and that the ocean is absorbing more or less depending on temperature changes (Salby?).

        There are apparently still a lot of unknowns on CO2 residence time in our atmosphere.

        Max

      • Manacker, you need to point to someone who sees the carbon budget as not being closed. Where are people discussing this mystery you speak of?

      • Manacker, check my analysis on CO2 rise and you can see where it fits in. This is the result:
        trend

    • Max,

      I will now skip entirely our disagreement on the reliability of the mainstream view on the persistence of CO2 in the atmosphere over the first 100 or 200 years. I only note that I stand by what I have written earlier.

      Concerning the role of the deep ocean, I’m still looking for any analysis of its real effective size and of the reliability of the estimates of the rate of exchange of CO2 with the deep ocean.

      One comment on its size is that the total amount of carbon in the deep ocean is given as approximately 40 000 GtC, or more than 50 times that of the present atmosphere. In comparison, the total amount of fossil fuels underground is given in IPCC material as 4000 GtC. This is certainly very inaccurate, and whether that much can ever be extracted and burned is certainly questionable. If the Revelle factor of the deep ocean is 10, then 4000 GtC added to the deep ocean would be in balance with an atmosphere that has doubled its concentration from the level in balance with 40 000 GtC, most probably the preindustrial level. That is, however, almost certainly a gross overestimate of the ultimately achievable extraction of fossil fuels; even half of that is a high estimate.

      Whether the Revelle factor of deep ocean is 10 or significantly different is unknown to me. Estimating that would require fair knowledge on the chemistry (specifically pH and buffering) of deep ocean.

      An additional question is the speed of transfer of CO2 to deep ocean, which is likely to be slow with an effective time “constant” of hundreds to thousands of years, but again not well known.
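      The Revelle-factor arithmetic in the comment above can be written out explicitly; the factor of 10 is, as stated, an assumption:

```python
# Revelle factor R relates fractional changes at equilibrium:
#   d(pCO2)/pCO2 ~ R * d(DIC)/DIC
# Round numbers as given in the comment above.
R = 10.0                   # assumed Revelle factor of the deep ocean
deep_ocean_C = 40000.0     # GtC in the deep ocean
fossil_C = 4000.0          # GtC, the (high) IPCC figure for all fossil fuels

frac_DIC = fossil_C / deep_ocean_C   # 0.10 fractional increase in deep-ocean DIC
frac_pCO2 = R * frac_DIC             # 1.0 -> atmospheric concentration doubles
```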

      • Pekka

        Let’s address your points one at a time.

        I doubt that we have a disagreement on the “persistence of CO2 in atmosphere over the first 100 or 200 years”. I simply stated that there are no empirical data to substantiate this, as the Tam study concluded (and you have been unable to show me any such empirical data).

        The role of the deep ocean as a sink for future carbon emissions could be immense, and the chemical and biological processes involved are largely unknown or poorly quantified. We apparently agree.

        We disagree on the amount of carbon remaining in all the optimistically estimated fossil fuels inferred to be in place (whether extractable or not).
        I have cited 2010 estimates from the World Energy Council, which tell me that this could reach a maximum atmospheric CO2 level of around 1,000 ppmv – you have cited other estimates, which put this at a higher level.

        On the other points, we apparently agree.

        Max

      • Max,

        My point was not an actual disagreement, but I wanted to emphasize that the amount of carbon that the deep ocean will absorb, given enough time, is poorly known. It’s not as huge as the present amount of carbon would indicate when the Revelle factor is not taken into account, but it’s certainly very large even so. When a low value is used, the share of CO2 remaining in the atmosphere in balance with the oceans may be around 20%; for a larger estimate the atmospheric share would be smaller.

        I would like to find scientific work that has studied this question, but I haven’t seen any. The answer also influences the reasonableness of ideas to store captured CO2 in the deep ocean. This idea was discussed widely at one stage, but not recently, and I have failed in my search for a proper justification for this change. Underground storage alternatives are certainly preferred at first, but the suitability of the deep ocean for storage is an interesting question as well.

  23. Every year, from May to September/October there’s a 5 – 6 ppm drop in atmospheric CO2 (in dry air). The efflux from the atmosphere is very large during this period.

  24. Nick,

    Any argument that relies on the Earth “knowing” anything is a bit suss, IMO.
    If you aren’t convinced by my few lines of argument , maybe you should take a look at:

    http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-ts.pdf

    I’m not saying anything different to what the IPCC have already said. They make the point that to reduce CO2 concentrations, human CO2 emissions have to be reduced by more than half.

    • Atmospheric CO2 concentration will be reduced according to the change of climatic factors. When the cooling really gets going (SST decrease, sea ice increase), CO2 will be reduced. Human CO2 emissions are almost irrelevant.

  25. Edim,

    You’re obviously of the school of thought that carts push horses rather than horses pull carts. Your opinion is noted but I think we’ll have to agree to differ on that point.

    • Yes, we will agree to differ on that point.

      By the way, in your analogy temperature is the horse and CO2 is the cart.

  26. Dr Curry: Between them, Nick Stokes (August 24, 2011 at 6:08 pm) and Fred Moolten (10.39 pm) seem to have answered your question, at least to a ballpark figure. If annual anthropogenic CO2 emissions are sufficient to increase atmospheric content by, say, 3 ppm per year, but the actual increase is, say, 1.5 ppm, then the remainder is being absorbed by sinks at a rate of 1.5 ppm per year. If this process is occurring when CO2 is at around 390 ppm, while the background before notable anthropogenic emissions was about 280 ppm, then it will take (390-280)/1.5 years (so about 73 years) for the anthropogenic addition so far observed to be absorbed by sinks. However, as Fred has pointed out, if emissions ceased now it would take longer than that to revert to the background value, as the decline would likely be exponential rather than linear.
    With the above scenario, if emissions were halved now, the atmospheric level would stabilise at the present value.
    The mean residence time of CO2 in the atmosphere is scientifically interesting, but a separate issue. It may be about 5 years.
    There are plenty of weaknesses in the IPCC account of climate science, but this issue is not favourable ground for climate skeptics to attack the orthodox position.
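    The ballpark above reduces to one line (a deliberately linear simplification, which is exactly the assumption being debated downthread):

```python
# Linear ballpark: time for sinks to absorb the anthropogenic excess,
# using the round numbers given in the comment above.
current_ppm = 390.0
background_ppm = 280.0
sink_rate = 1.5            # ppm/yr currently taken up by sinks

years_linear = (current_ppm - background_ppm) / sink_rate   # ~73 years
```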

    • You continue to make the impossible assumption that sources and sinks aren’t variable. You persist in simple calcs – simple linear concepts – for complex systems. It is all a very silly charade.

      • Have you actually done the convolution calculation yourself?

        The concept that you and others seem to miss is that probability and statistics in the large can often simplify complex systems.

        You continue to make the impossible assumption that sources and sinks aren’t variable.

        Using probability models allows us to model this variability. You seem to be very conflicted in your stance.

      • Chief Hydrologist

        The mad theory that you perpetuate is that data is not needed to define probabilities and statistics.

        I had to look this up.

        ‘In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions.’
        http://en.wikipedia.org/wiki/Convolution

        No, it is just nonsense – to think that we could apply probabilities without experience has no possible meaning. The probability that x varies this way, y that, and z a herd of giraffes – a great use for the infinite improbability generator.

        Why do you think you are entitled to bother me with nonsense?

      • Chief Hydrologist

        Definitely annoyed and incoherent –

        The mad theory that you perpetuate is that data is not needed to define probabilities and statistics…

        I have had enough of your insanity as well.

        It may not be your fault that you haven’t been exposed to the approach. I have noticed that certain branches of engineering and science never introduce the topic of convolutions. I am not exactly sure why this is, but it is certainly short-sighted. Convolutions form the basis for everything from the Central Limit Theorem (applied probability) to linear and nonlinear response functions. The fact that it gets used to explain both deterministic and stochastic behavior makes it a very general approach.

        “It is difficult to exaggerate the importance of convolutions in many branches of mathematics.” — William Feller, “An Introduction to Probability Theory and Its Applications”

        You may want to bone up on the topic. Climate scientists use the approach all the time because they realize that forcing functions are not idealized delta impulse functions and thus they need to convolve the input with the response function to model the result.

      • You might want to be a little less condescending to people like Pekka.

        ‘More specifically any agreement in far tails with results of Archer is likely to be dependent on how Archer did his work rather than on the real world facts, because the empirical support for any conclusions on the far tails is statistically extremely weak. Relevant empirical data exists only from very distant past, and the results on far tails are determined by the methods used to extract information from the far tails. A very small change in the methods makes the outcome totally different.’

        ‘In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of random variables is the convolution of their corresponding probability mass functions or probability density functions respectively.’ A simple concept involving the sum of two or more nominally random variables.

        The key term however is having empirical support for observations of what Pekka calls the far (as opposed to fat) tail. Without observation there is no rational basis for applying probabilities.

        Show me first the variability of the specific CO2 compartments – and then the sum of those in the atmosphere? Rather – don’t bother because I won’t believe you without more detailed data than we have available. We can fit a curve to anything – or make one up out of blue sky – but it is just maths and not science.

      • The Taylor series approximation for exp(-k*t) and a power law like 1/(1+k*t) is exactly the same to first-order.
        That explains why the short residence crowd gets the time constant wrong.

        The next issue is that the convolution of random variables can go to one of two classes of stable distributions as a central limit: one is thin-tailed and the other fat-tailed. The Normal distribution is the thin-tailed stable and it requires thin-tailed distributions for the input variates. The other analytic stable distributions are the Lévy and the Cauchy, which both have fat tails and require fat-tailed inputs. The Cauchy PDF has a power-law tail which follows an inverse-square dependence.
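        The first-order claim is easy to verify numerically: the two decay laws are nearly indistinguishable while k*t is small and diverge strongly in the tail:

```python
import math

# exp(-k*t) ~ 1 - k*t + ...   and   1/(1 + k*t) ~ 1 - k*t + ...
# agree to first order; only the tail behaviour separates them.
k = 0.0058   # per year, the ~120-year half-life rate discussed upthread

for t in (1.0, 10.0, 100.0, 1000.0):
    exp_decay = math.exp(-k * t)        # thin tail
    hyp_decay = 1.0 / (1.0 + k * t)     # fat tail
    # near-identical for k*t << 1; hyp_decay dominates for k*t >> 1
```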

      • Is this a game? Throw in a few inapplicable functions and pretend it is meaningful? Fool the sceptics because they are idiots and won’t know any different?

      • Is this a game? Throw in a few inapplicable functions and pretend it is meaningful? Fool the sceptics because they are idiots and won’t know any different?

        Listen buddy, I am serious as a heart attack about this stuff. I have no idea what your deal is, but you quote something and I can’t tell if you are referring to some imaginary friend. I do the best I can to respond.

      • Passionate about applying functions to a problem with no empirical base. One of them might fit – but how would you know?

        I keep saying that and you refer somewhere else to volcanic emissions – without reference to anything – but they are so small, and the problem remains the old one of too many unknowns in too few equations. We need more knowns, not more functions.

        Now there is no solution in what you say – just another theorist driven mad by the climate wars with a line of enquiry that no one appreciates.

      • Passionate about applying functions to a problem with no empirical base.

        What are you talking about? The empirical base is all around us. We are soaking in it. Egad man, do you not see the record of CO2 increase? Do you just want to close your eyes and stick your fingers in your ears?

        One of them might fit – but how would you know?

        That is the role of a probability analysis, as one tries to understand the situation with limited information.

        I keep saying that and you refer somewhere else to volcanic emissions – without reference to anything – but they are so small, and the problem remains the old one of too many unknowns in too few equations.

        You do not understand how impulse response functions work. I surmised this when you had to look up what a convolution was. The records of a volcanic event can be used to judge the response function independent of the size of the event. Maybe you can also learn something about signal processing.

        Now there is no solution in what you say – just another theorist driven mad by the climate wars with a line of enquiry that no one appreciates.

        It must make you very angry that I am a citizen scientist. My interest in the environment is broad reaching and this is confirmed in the fact that I spent only 17 pages of my 750 page book on energy on the topic of CO2 rise and residence time (I didn’t write anything on AGW temperature rise since I am still learning). You like to cast aspersions my way but all I can say is that I have a serious interest in this subject and am not beholden to anyone’s agenda. I am also a fan of human behavior and I find it interesting to see how wound up some people can get when another person just talks.

      • ‘That is the role of a probability analysis, as one tries to understand the situation with limited information.’

        I think I am getting a clue as to the role of ‘probability analysis’ – it is filling in the gaps with imaginary data and then believing it with absolute certainty.

      • Just getting up to speed, eh? Nowadays weather forecasting is 100% probability analysis. All insurance is probability analysis. All betting is probability analysis. These all use limited information because they are trying to anticipate something that will happen in the future, and no behavior will repeat in exactly the same way.

      • Weather forecasts are based on initialised models – accurate for a week at most. Long range forecasting of rain is sometimes expressed as a broad probability – if we have a La Nina there is a probability of higher than average rainfall. This is based on correlation with persistent patterns of SST. Insurance is based on demographics and the law of large numbers. Gambling is either a zero sum game or the house takes a slice.

        You have a single variable – the concentration of CO2 in the atmosphere – that you think you can divine by means of an ontological argument alone. That is, what you think it looks like can be described by a function of some sort – and it must be right because it is based on what you think it should look like.

        I can call black 17 – but it doesn’t mean I am right. That’s called gambling?

      • OK, so it looks like you do believe in the utility of probability.
        Ontologically speaking, the environment is ruled by disorder. Tell me something that doesn’t show disorder in the environment and we can stop using probabilities.

      • No – weather and climate are not predicted using probability at all. You are conceptually all over the place in trying to defend a nonsensical, impractical, misguided and deeply unscientific approach. You are defending solutions without data.

        The world is ruled by cause and effect – entropy may occur but it tells you nothing of how energy or matter moves through the world. There is nothing random even in the fall of the roulette ball – it is all deterministically chaotic. We define the statistics of events where we cannot identify cause and effect and it works in simple applications – climate is far from simple.

        No – weather and climate are not predicted using probability at all. You are conceptually all over the place in trying to defend a nonsensical, impractical, misguided and deeply unscientific approach. You are defending solutions without data.

        For as long as I can remember, weather forecasts are presented with probabilities. For example, probability of precipitation (POP) is routine. This is all so common sense to me and so I started looking into this and yes, some years ago “The American Meteorological Society endorses probability forecasts and recommends their use be substantially increased.”
        http://www.ametsoc.org/policy/enhancingwxprob_final.html
        AccuWeather has a thing they call AccuPOP and forecasters seem to use ensemble averaging.

        The world is ruled by cause and effect – entropy may occur but it tells you nothing of how energy or matter moves through the world. There is nothing random even in the fall of the roulette ball – it is all deterministically chaotic. We define the statistics of events where we cannot identify cause and effect and it works in simple applications – climate is far from simple.

        You do not seem to know the origins of the master equation, the Fokker-Planck equation, the Navier-Stokes equation and the entire class of problems that introduce the concept of diffusion. All of these at their core are based on probability and on conservation of mass, i.e. through the analysis of divergence and gradients. You seem not to realize that a Monte Carlo simulation needs to draw from a probability density function to even work. I suppose you are going to tell me that no one uses Monte Carlo, egad.

        Once again you are engaging in a moot philosophical discussion. Yes, unless you get to the quantum level, actions are deterministic. Most physicists and scientists realize this and then move on and apply practical ensemble approaches.

        I can engage in my own philosophy and find that many of the most ardent determinists are also religious dogmatists who believe everything is preordained and that there are no shades of gray. I’ve worked with these people and gone to school with them, and I find that there is no reasoning with them. I kind of doubt this applies to you, but you never know.

    • That would imply every other non-human CO2 process is identical from year to year. Hard to believe.

  27. http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf

    The annual CO2 growth rate varies between 0.3 and 3 ppm. In 1998 it was 2.98 ppm and it was the maximum growth (el nino). In 1999 it was only 0.9 ppm. I hope nobody hides the coming decline.

    year ppm/year
    1998 2.98
    1999 0.90
    2000 1.76
    2001 1.57
    2002 2.60
    2003 2.30
    2004 1.55
    2005 2.50
    2006 1.73
    2007 2.24
    2008 1.64
    2009 1.89
    2010 2.42
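    A quick summary of the rates tabulated above:

```python
# Annual Mauna Loa CO2 growth rates (ppm/yr), 1998-2010, from the table above.
growth = [2.98, 0.90, 1.76, 1.57, 2.60, 2.30, 1.55,
          2.50, 1.73, 2.24, 1.64, 1.89, 2.42]

mean_rate = sum(growth) / len(growth)   # ~2.0 ppm/yr on average
swing = max(growth) - min(growth)       # ~2.1 ppm/yr peak-to-trough variability
```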

    • Thanks, Edim, for these figures. The observed pattern seems reasonable: short-term fluctuations (due to regular seasonal and less regular changes in the balance of CO2 sinks and sources) superimposed on a steadier, larger-amplitude background trend (the addition of new CO2). It’s like waves which modify the water level by small amounts in the short term, while the larger, longer-term change is due to the rise or fall of the tide.
      And unless the Mauna Loa data are fudged, there is extra CO2. The important thing is (as Tall Bloke suggests below): does it matter?
      Skeptics and critics of the climate orthodoxy might be better employed in assessing whether the increased CO2 is reflected in worryingly – or even measurably – increased surface temperatures.

  28. Tomas Milanovic

    I cannot believe that people want to tackle this extremely complex and hitherto poorly understood problem with primitive equilibrium models, Henry’s law and “boxes” with time constants.
    Consider just this paper : http://www.ess.uci.edu/~jranders/Paperpdfs/2001ScienceBehrenfeld.pdf.

    We learn there that the NPP (Net Primary Production of oceanic and terrestrial plants) is around 110 billion tons of carbon per year, which is about 14 times the human CO2 emissions.
    We learn further that the NPP is not only highly variable, by amounts roughly equivalent to the CO2 emissions, but also sensitive to oceanic oscillations.
    We have here nothing that “averages out”, and this is no simple physics with Henry’s law.
    About half of it is phytoplankton. This is a huge living carbon storage governed by biology and metabolism, depending on parameters like light availability (e.g. cloudiness), nutrient availability (e.g. oceanic currents), salinity and temperature.
    This absolutely cannot be treated as a “box” with one time constant.
    It is a huge stock-plus-flow, poorly understood system with no conservative parameter, chaotically fluctuating on large time scales (biennial, decadal and more) at magnitudes equivalent to the variable we are talking about.

    Of course one must take this paper with a huge grain of salt. The observations cover only a short time period and the models don’t agree precisely with each other.
    My purpose is merely to tell interested readers that the CO2 fluxes and their dynamics, especially on larger (decadal and multidecadal) time scales, are very far from naive 2- or 3-“independent box” models, each described by one time constant.

    • I agree with Tomas Milanovic’s assessment of the situation.
      http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104528

      In any case, before we get worried about the residence time of co2 in the atmosphere, we need to work out whether changes in co2 levels are worrying anyway.

      The magnitude of the total energy contained in the ocean is such that minor fluctuations in radiative balance between the components of the longwave flux are pretty much irrelevant except on extremely long time scales.

      http://judithcurry.com/2011/08/19/planetary-energy-balance/#comment-104533

    • The problem with your argument is that the biological carbon storage of oceans is actually very small, only of the order of 0.5% of the carbon of the atmosphere and almost negligible compared to the inorganic carbon of surface ocean. Because it’s so small, it cannot contribute to the changes in atmospheric CO2.

      The biologic processes that remove carbon from the surface ocean and move it down to deeper ocean are important, but the uncertainty in the net transfer is not so large that it could change the overall conclusions.

      The biosphere of land areas is much larger and interacts strongly with the carbon in soil. How much these reservoirs may have changed can be estimated, and the limits obtained show that conclusions cannot change much from the mainstream views. At the annual level the variations are large, but over longer periods they cannot be more than a small fraction of the increase in atmospheric carbon.

      • Tomas Milanovic

        Pekka you are being silly now.
        The variation of the yearly fluxes is of the order of the total CO2 emissions and the storage is 15 times that amount.
        You are using in the other direction another silly argument which says that 2 ppm emission per year is so small compared to the carbon in the atmosphere that it can’t possibly matter.
        I am sure that you agree that the second argument is silly so now you only need to realize that the first is equally so.

        I hope you DO realise that a process that removes 15 times as much CO2 from the atmosphere as what you put into it, and is highly variable, will definitely have an influence on the RATE at which the CO2 concentration in the atmosphere changes.
        And as for how variable it is, just read the paper I linked. It is not so long.

      • The continuous exchange has a large volume, but over longer periods it cannot move more than the reservoirs can take. The ocean’s 3 GtC of biological carbon is really small; its total amount is recirculated in less than a month. Thus it cannot have much influence even at the seasonal level, and really no effect on annual or longer periods.

        The variations in the net uptake by the continental biosphere and soil appear to be the main reason for year-to-year variability, but that variability cannot continue so strongly in the same direction for long, because the related reservoirs are not changing that much.

        I may have a different opinion on who is silly and leaves essential factors out of consideration.

      • “Oceans 3 GtC of biological carbon” “It’s total amount is recirculated in less than a month.”

        I would be interested in reading your references if you happen to have them handy.

      • This is interesting. The 105 petagrams is 105 gigatons of carbon, or about 385 gigatons of carbon dioxide equivalent, removed by photosynthesis per year. The oceans release a net 50 gigatons of carbon per year. So this is headed to the downwelling debate issue where it’s the net that counts. :)

      • Plankton savor the
        Sultry ultry-violet.
        Layers of turtles.
        =========

      • The uv and shorter wavelength impact on the deeper ocean is interesting. But that small percentage, tail if you will, is evidently not as popular as other tails.

      • Pekka Pirilä

        You write to Tomas Milanovic:

        The problem with your argument is that the biological carbon storage of oceans is actually very small, only of the order of 0.5% of the carbon of the atmosphere and almost negligible compared to the inorganic carbon of surface ocean. Because it’s so small, it cannot contribute to the changes in atmospheric CO2.

        It is correct that the carbon sink (storage) in the oceans is estimated to be only a fraction of that on land (including vegetation and soils), but the rate of CO2 absorption (which impacts the global carbon balance and hence the changes in atmospheric CO2) is roughly equal.

        See:

        http://www.pnas.org/content/100/17/9647.full

        A rich diversity of marine phytoplankton, found in the upper 100 m of oceans, accounts only for ≈1% of the total photosynthetic biomass, but this virtually invisible forest accounts for nearly 50% of the net primary productivity of the biosphere ( 1).

        http://www.sciencemag.org/content/281/5374/237.full

        Integrating conceptually similar models of the growth of marine and terrestrial primary producers yielded an estimated global net primary production (NPP) of 104.9 petagrams of carbon per year, with roughly equal contributions from land and oceans.

        Max

      • Max,

        Your quotes confirm what I have written.

      • It might confirm what you say. It rather depends on how quickly it is incorporated into the food chain. The carbon is still locked in biomass regardless of where on the food chain it lies.

      • Pekka

        Right.

        It basically confirms that photosynthesis (terrestrial and marine) removes roughly 15 times the annual emissions from humans.

        Max

      • But it also tells us that this cannot affect the CO2 content of the atmosphere nearly as much. In particular, the amount of carbon in marine life cannot vary much, because it’s always so small that there’s almost nothing to vary.

        How can it be that the gross rates are brought up time after time, although only net rates matter, and net rates cannot change without corresponding changes in the amount of stored carbon?

        Understanding this cannot be so difficult.
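Pekka's gross-versus-net point can be illustrated with the rough figures quoted earlier in this thread (3 GtC of biological carbon in the oceans, recirculated in under a month). This is only a back-of-envelope sketch; the ppm conversion (1 ppmv ≈ 2.13 GtC) is my own addition, not a thread figure.

```python
# Rough illustration of the gross-vs-net point, using figures
# quoted earlier in the thread (all values approximate).
marine_biomass_gtc = 3.0   # total carbon in ocean biota (GtC)
turnover_days = 30         # "recirculated in less than a month"

# Gross flux through the marine biosphere per year:
gross_flux = marine_biomass_gtc * 365 / turnover_days
print(f"gross flux ~ {gross_flux:.1f} GtC/yr")   # 36.5

# But the *net* change this reservoir can supply is bounded by its
# size: even emptying it completely shifts atmospheric carbon by
# only 3 GtC, i.e. about 1.4 ppm (assuming 1 ppm CO2 ~ 2.13 GtC).
max_net_ppm = marine_biomass_gtc / 2.13
print(f"max possible net contribution ~ {max_net_ppm:.1f} ppm")  # 1.4
```

The gross flux is large, but the possible net contribution is capped by the reservoir's small size, which is the substance of the disagreement above.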

      • Pekka

        Gross rates?

        Net rates?

        What counts is the rate of CO2 conversion on an annual basis (or over any other multi-seasonal time scale).

        This rate of conversion is estimated to be roughly 15 times the rate of human emissions over the same time frame.

        If this rate of conversion shifts (as a result of higher atmospheric or oceanic CO2 concentrations or any other factor), it can become significant.

        What then happens to the terrestrial plants or phytoplankton is another story. Do the phytoplankton end up in the marine food chain, in some cases re-creating CO2 which is absorbed in the ocean’s buffering process or released to the atmosphere or in other cases going into sea shells which end up eventually sinking to the ocean bottom?

        Who knows?

        I’m afraid that the unknowns here exceed the knowns.

        Max

      • I can only repeat:

        It cannot be so difficult to understand.

      • There may be a climatic impact through UV interaction with stratospheric ozone. There may be a climatic impact through UV interaction with oceanic phytoplankton. If both exist, might not they interact in concert with the UV variability?

      • Hi Pekka,

        You are in my world now. The sequestration of carbon in oceans is by two processes – the so called biological and chemical pumps. http://earthguide.ucsd.edu/virtualmuseum/climatechange1/06_3.shtml

        The chemical pump is rate limited by the supply of calcium from rock weathering – which is influenced by temperature, groundwater moisture, carbonic acid in rain and mechanical breakdown by roots and fungi. There is no compelling reason to suggest that this is static.

        The biological pump is limited by macro and micro nutrients – including (important for carbon sequestration) silicate for diatoms and carbonate for coccolithophores. The ecosystem is light limited – reliant on phytoplankton for primary production as the basis of the food chain. The difference between this soup of micro-organisms on the surface of oceans and the terrestrial plants can be explained in terms of life cycle. Some terrestrial plants can live for 1000 years – when they die, most of the carbon is returned to the atmosphere through the respiration of microscopic heterotrophs. The life cycle of oceanic plankton is a matter of days – whereby some sinks into the depths where it forms thick organic, silicate and carbonate layers at the ocean bottom. The micro-organisms with silicate and carbonate shells sink relatively quickly.

        The abundance of these organisms varies especially as nutrients are returned to the surface in deep ocean upwelling. The major area for this is in the region of the Humboldt Current off the coast of South America.
        Upwelling does change considerably – firstly in the ENSO cycle leading to short term booms and busts in oceanic and terrestrial biology but also in decadal, centennial and millennial timescales that we know of. The well known Pacific Decadal Oscillation involves decadal changes in upwelling in the Pacific north east. The frigid and nutrient rich upwelling is super-saturated with carbon dioxide – but also leads to blooms in carbon fixing organisms.

        All in all – I suspect that an assumption that these systems don’t change over time cannot be substantiated.

      • Rob,

        While I’m certainly not an expert in your field, I’m aware of what you describe, and I have tried to formulate my statements so that they are valid over a wide range of variability and uncertainties of the type you describe.

      • Chief Hydrologist

        Hi Pekka,

        It is a pleasure to hear from you. I hope you are well and happy and take care to stay so.

        Cheers

      • Hi Rob,

        I’ll try to maintain my physical and mental health better next week, which may lead to little or no contribution to these discussions over that period.

        I hope that you’ll also take care.

      • Chief,
        I believe that when you read up on what limnologists are doing you will find a new dimension to the carbon sink question.

      • Dang, Chief, I wanted to put my question here about the possible interactive climate effect of UV on oceanic phytoplankton and stratospheric ozone.
        ========

      • kim, Right on.

      • Chief Hydrologist

        Hi Kim,

        The micro-organisms release dimethyl sulphate into the atmosphere – and so provide their own sunscreen and are thus impervious to UV. Then it becomes very cloudy and rains a lot, blocking IR and causing the stratospheric ozone to cool, in turn inducing a runaway cooling effect with NAO and PDO feedbacks and the inception of the next glacial – due in about a week. Don’t forget to rug up.

        Cheers

      • The micro-organisms release dimethyl sulphate

        Do you mean Dimethyl Sulphide?

        BTW, this has a very short residence time.

      • Pekka
        That is a large assumption you are making.

      • Thank you for your reply above

        When I read the 15x figure, I knew it would cause angst

      • Depends on the type of phytoplankton.

        “Until now, it was thought that all the photosynthetic algae and bacteria living in the ocean drew carbon dioxide out of the air and used it to build sugars and other carbon-rich molecules to use as fuel. But two new studies by researchers at Stanford and the Carnegie Institution show that Synechococcus, a type of cyanobacteria (formerly called blue-green algae) that dominates much of the world’s oceans, has evolved a mechanism that short-circuits photosynthetic carbon-dioxide fixation while still producing energy. The alternate approach is found in regions of the ocean where some of the ingredients necessary for traditional photosynthesis are in short supply.

        “The amount of carbon dioxide being drawn down by the phytoplankton in nutrient-poor oceans might turn out to be significantly lower than we thought,” said Shaun Bailey, a postdoctoral researcher working in the Carnegie Institution’s Department of Plant Biology with Arthur Grossman, a staff scientist at the institution and a professor, by courtesy, in Stanford’s Biology Department.”

        http://news.stanford.edu/news/2008/april2/plant-040208.html

    • Thanks Tom, a good starting point for actually discussing the issues, except Chief has already tried to bring in the stocks-and-flows model with variable rates, but it appears that some do not want to discuss these issues. One you left off your list is the acidic effect of releasing nutrients that tend to increase the rate-limiting variable, for a net effect of increased flux, which would indicate that the static constant in the fat-tailed Bern model tends to overestimate the residence time. And being fat-tailed means that small changes can greatly increase or decrease the residence time. Nor do some seem to appreciate that such stiff equations, in a model well past our ability to assume or measure, mean that such claims are speculative, neither verified nor validated.

  29. Here is Railsback’s take on residence time. His take on the issue is certainly interesting.

    http://www.gly.uga.edu/railsback/Fundamentals/AtmosphereCompV.jpg

    The range of 2-10 years is reasonable considering the unknowns involved.

  30. This thread is better described as one discussing the current understanding of CO2 lifetime in the atmosphere. Salby’s paper, while dismissed by many, has not been actually read. Until it is available, it is not reasonable to claim this is resolved.

  31. Even Phil Jones of CRUgate was forced to admit that there has been no significant global warming since 1995. After all of the shenanigans, involving data corruption and data gone missing, the global warming house of cards collapsed in the UK.

    We then learned that the raw data for New Zealand had been manipulated; and, NASA’s data is the next CRUgate: satellite data shows that all of the land-based data is corrupted by the Urban Heat Island effect.

    Manipulation of the data is so bad that the recent discovery concerning a weather station in the Antarctic, where the temperature readings were actually changed from minus signs to plus signs to show global warming, almost comes as no surprise.

    And then, there was a peer-reviewed study showing the ‘tarmac effect’ of land-based data in France where only thermometers at airports–in the winter–showed any warming over the last 50 years. Since then, the problem of data corruption due to continual snow removal during the winter at airports where thermometers are located–while all of the surrounding countryside is blanketed in snow–has been shown to extend far beyond the example in France (e.g., Russia, Alaska).

    In reality, there essentially has been no significant global warming in the US since the 1940s. The only warming that can be ferreted out of the temperature records is in the coldest and most inhospitable regions on Earth, such as in the dry air of the Arctic or Siberia, where going from -50 °C to -40 °C at one small spot on the globe is extrapolated across tens of thousands of miles and then branded as global warming.

    Warming before 1940 accounts for 70% of the warming that took place after the Little Ice Age ended in 1850. However, only 15% of the greenhouse gases that global warming alarmists ascribe to human emissions came before 1940. Obviously, the cause of global warming both before and after 1940 is the same: solar activity during that period was inordinately high. It’s the sun, stupid. Now we are in a period where the sun is anomalously quiet; and, now we are in a period of global cooling and have been for almost a decade.

    And what about the measurement of atmospheric CO2? We learned that the CO2 readings are based on measurements taken on the site of an active volcano (Mauna Loa) and have been completely fabricated out of whole cloth by a father and son team who have turned data manipulation into a cottage industry for years. (e.g., “Time to Revisit Falsified Science of CO2.” by Dr. Timothy Ball)

    • Wagathon,
      While there is good reason to question the amounts of CO2 in the atmosphere and the accuracy with which they are measured, and even to question the consensus view regarding long-term CO2 residence times, I think attributing motives to the Mauna Loa group is out of place.
      Additionally, CO2 is measured at several points and these results seem to concur with Mauna Loa.
      Skeptics do not need to argue against the GHG/Tyndall theory to point out that the idea of a climate crisis caused by CO2 fails.
      The lack of crisis – the lack of any meaningful trend lines in climate events – makes that argument.
      The juicy bits – climategate, Mann’s ‘fudge factor’ hockey stick, the lack of OA, the obvious political bias of many AGW promoters, the utter lack of any actual mitigation policy/technology, the profiteering, etc. etc. etc. – are all icing on the cake.
      If Salby’s paper, when finally available for actual review, holds up, then great.
      But that will not change the basic point, no matter whether Salby’s paper survives or not: the world is not facing a climate crisis, and additionally the AGW community has offered nothing in the realm of reality to mitigate this crisis in the first place.

      • hunter

        Thanks for a very concise summary of the situation.

        Max

      • Do you appreciate by how many parts per million the atmospheric CO2 levels at Mauna Loa can vary–in a single day?

      • Wagathon,
        Yes, but if it trends up or down the daily fluctuations are not important.
        Last time I checked, the other CO2 stations around the world corroborate the trend, but that has been awhile.
        Now that is not directly addressing the change in CO2 measurement regimes from the 19th century to the present. That may be worth revisiting.
        And the persistent underlying problem of data source credibility that you bring up is also important.
        But think on this: even if the climatocracy is getting it wrong, and even more are indulging in the noble cause corruption that climategate showed, what have they actually demonstrated?
        Nothing that is actually out of historical ranges of climate.
        If there is systemic book cooking in the AGW promotion industry, it will be found out over time. Climategate certainly gives us some pretty strong hints. But be patient. It will come out in the end.
        But even before then, AGW still fails based on simply critically reviewing what is claimed.

        Facts are facts. “Carbon dioxide is 0.000383 of our atmosphere by volume (0.0383 percent). Only 2.75 percent of atmospheric CO2 is anthropogenic in origin. The amount we emit is said to be up from 1 percent a decade ago. Despite the increase in emissions, the rate of change of atmospheric carbon dioxide at Mauna Loa remains the same as the long term average (plus 0.45 percent per year). We are responsible for just 0.001 percent of this atmosphere. If the atmosphere was a 100-story building, our anthropogenic CO2 contribution today would be equivalent to the linoleum on the first floor.” ~Joseph D’Aleo

      • Wagathon,
        That is why I think that the carbon cycle story is not well understood yet.
        And I believe that the climate is demonstrating significant capability to handle forcings of different sorts with few problems.
        After all, we have been changing land use, vegetation, massively urbanizing, moving surface water around, etc. for a very long time with a climate that has pretty much ignored us.
        I was at meeting in my lovely drought stricken city last night with some very educated people. Many of whom deeply believe in AGW.
        When the panelists, which included the Chairman of the State water planning board, were asked why this current drought was not being included in the planning, the bottom-line answer was that this famous drought is nowhere near as bad as the one we had ~60 years ago.
        Each and every AGW scare ends with that same whimper. Katrina, Russian heatwave, Pakistan floods, Australian drought and flood, etc.

  32. Hunter

    Well-summarised. Scepticism at its healthy best.

  33. The problem with the prevailing paradigm is something which is unobservable to the mainstream of climate science, because they are focusing on deterministic systems with deterministic forcings. But, nature is not deterministic. We are dealing with a stochastic system here.

    One year, there will be a spate of volcanic eruptions. Other years, there will be massive forest fires. Then others, regions will become deserts, while others will bloom. All kinds of variations can be associated with ocean dynamics. Small random events, perhaps. But over time, they accumulate. And, an accumulation of independent random events begets a random walk, whose dispersion from an initial condition grows with the square root of time.

    The notion of a long residence time means that, over a substantial portion of that timeline, the system behaves essentially as a pure accumulator. So, in addition to accumulating any and all excess emissions (distributing them over all reservoirs), it will accumulate any random events as well, and it will show increasing dispersion from the initial condition approximately increasing as the square root of time, and topping out at a level proportional to the dominant time constant in roughly that timeline.

    The variability, the wrinkles and so forth you see in the data, should increase within an approximately square root envelope from a given initial time when viewed over a timeline associated with the residence time. Moreover, the one-sigma upper bound should be approximately equal to the year-to-year variability multiplied by the time constant. What we actually see is a variation from the long term quadratic factor of at most about +/- 1 ppm. If we assume half of that variation is going into the land and ocean reservoirs as posited by the reigning paradigm, then we might be able to argue that there is +/- 2 ppm variability. A 100 year time constant would then mean that the year-to-year variability is at most on the order of about 0.02 ppm, or 20 ppb. That appears to me to be an absurdly small value.

    What we see is actually a tight regulation of CO2 levels in the high frequency regime. This is characteristic of a high bandwidth (short residence time) system. In such a tightly regulated system, any large excursion necessarily comes about from a very strong input, or a change in the balance of forces which establish the equilibrium.

    This, IMHO, is what is happening. It could be deep ocean upwelling from the distant past which is driving an increase in atmospheric partial pressure. Or, it could be a temperature dependent swing due to the temperature rise since the LIA. This latter possibility is consistent with the observed interannual variation in CO2 due to global temperature variation if the dominant time constant is on the order of perhaps ~30 years.

    • I can actually understand what you are saying. Have you considered simulating the envelope via a model and then plotting the result?

      • That’s great! You may be the first one.

        However, in recalculating the effect, I must note that I was wrong to say “A 100 year time constant would then mean that the year-to-year variability is at most on the order of about 0.02 ppm”. The proper factor is the square root of the time constant, so this would increase to 0.2 ppm. Perhaps that is not quite so absurd. This, I will have to think about…
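For readers who want to try the simulation suggested above, here is a minimal Monte-Carlo sketch. It is my own toy construction, not Bart's actual model: a first-order system with time constant `tau`, driven by unit-variance white noise. For times short relative to `tau` the spread grows roughly like the square root of elapsed time; for a short `tau` it saturates instead.

```python
import math
import random

def dispersion(tau, years, runs=2000, seed=1):
    """Sample standard deviation of x after `years` steps of
    x[k+1] = a*x[k] + w[k], a = exp(-1/tau), w ~ N(0, 1)."""
    rng = random.Random(seed)
    a = math.exp(-1.0 / tau)
    finals = []
    for _ in range(runs):
        x = 0.0
        for _ in range(years):
            x = a * x + rng.gauss(0.0, 1.0)
        finals.append(x)
    mean = sum(finals) / runs
    return math.sqrt(sum((v - mean) ** 2 for v in finals) / runs)

# Long time constant: after 10 years the spread is still growing,
# close to the square-root law sqrt(10) ~ 3.16 (exact theory: 3.03).
print(dispersion(tau=100, years=10))
# Short time constant: the spread has already saturated at the
# stationary value 1/sqrt(1 - a^2) ~ 1.43, well below sqrt(10).
print(dispersion(tau=3, years=10))
```

The contrast between the two cases is the observable Bart's argument relies on: a long residence time should show the growing square-root envelope, a short one should show tight regulation.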

  34. Bart,

    “The problem with the prevailing paradigm is something which is unobservable to the mainstream of climate science, because they are focusing on deterministic systems with deterministic forcings. But, nature is not deterministic. We are dealing with a stochastic system here.”

    I think I must be allergic to the word “paradigm”! It usually crops up in these kinds of sentences, which look like the output of those gobbledygook generators, and it brings me out in a hot flush!

    BTW, what the hell does this mean, anyway?

    • I had expected I might need to review Elementary Signals & Systems with some commenters. I have to admit I didn’t prepare to field questions about English 101.

      • Bart,

        The ultimate solution has to involve systemised logistical and optional incremental contingencies using parallel 21st Century modular capability and knowledge-based exploratory innovation. At base level, this just comes down to four-dimensional third-generation concepts…….

        Look, you’ve got me at now!

        For all your talk of “square root envelopes” and “one sigma upper bounds”, what I think you mean in your post is that mainstream science is overconfident in its ability to make accurate predictions. But, because some events are random, such as volcanic eruptions, and therefore associated risks have to be treated statistically, this may well be misplaced.

        It’s a convoluted variant of the “they can’t predict next week’s weather so how can they predict the next century’s climate” type of argument.

        PS I’ve just noticed, in your post, you’ve used the word ‘paradigm’ twice!

      • No. You aren’t following the argument at all. It has nothing to do with prediction. It has to do with how natural processes evolve, and what properties they tend to exhibit.

      • Emission of large amounts of carbon from burning fossil fuel is not a natural process but it does force the natural system to respond. Understanding how it responds is the same thing as prediction.

      • Again, this is beside the point. I am talking about the dispersion properties from natural variations which should be observed if the residence time were long.

    • It’s inflation. Originally it was “a penny for your thoughts.” Then it went up to “just my two cents.” Kuhn and Lakatos raised the ante to twenty cents.

    • Tempterrain

      You question the meaning of the word “paradigm” when discussing climate science. Let me see if I can give you my take on that.

      Wiki has a fair definition of the word “paradigm”, and its meaning in the scientific sense.

      The classical description of how paradigms work in science is given by:
      Kuhn, Thomas S. The Structure of Scientific Revolutions, 3rd Ed. Chicago and London: Univ. of Chicago Press, 1996. ISBN 0-226-45808-3

      A paradigm is neither “good” nor “bad”.

      It can keep people from “re-inventing the wheel” over and over again, but it can also keep people from “thinking outside the box”.

      In the worst case, as described by Kuhn, it can lead to important data points, which lie outside the “paradigm”, being ignored or written off as meaningless “outliers”. This has been described as “paradigm paralysis”.

      The current “paradigm” in climate science is the (so-called) “consensus” view on AGW, as promoted by IPCC.

      As was shown on other threads here, some of this “consensus” may well have been contrived via a corrupted IPCC process.

      If something totally new comes along, which falsifies an existing “paradigm”, this can lead to a “paradigm shift”, whereby the new finding eventually becomes the new “paradigm”.

      A “paradigm shift” usually does not occur smoothly, as individuals who have invested in the old “paradigm” will tend to defend it against new threats from the outside.

      All of this makes sense to me in many fields outside climate science (including business), and I see no reason why it should not apply to climate science equally well.

      Max

  35. Since only about half of our annual CO2 emissions are contributing to the increasing atmospheric CO2 level, where is the other half going?

    If we abruptly stopped emitting CO2 altogether, presumably the half of our emissions that has been accumulating would stop accumulating.

    But would the half that is being removed somehow continue to be removed, or would it equally abruptly stop being removed?

    I don’t see any possible mechanism for the latter, which is surely driven by level of atmospheric CO2 and not rate of our emissions. How could the mechanism removing that half tell that we’d suddenly stopped emitting?

    But if it’s the former, that would imply that when we stopped emitting, CO2 would not remain steady but would decline at 2 ppmv per year, at least in the short term, this being the rate at which the removed half is currently being removed. Or even more if the removed portion is actually 60% rather than 50%, which it may well be.

    That rate of decline would presumably slow down as the level decreased, but would it slow to zero as the traditionally held 280 ppmv level is approached, or would it overshoot?

    I’m not trying to defend any sort of skeptic position here, I’m just doing the obvious math and asking the obvious questions.

    • “But would the half that is being removed somehow continue to be removed, or would it equally abruptly stop being removed?”

      Under the assumption that residence time is long, it would essentially stop abruptly, and very slowly drain away. Under the assumption that the residence time is short, and other processes are responsible for the rise, it would effectively change nothing, or at least very little.

      “But if it’s the former, that would imply that when we stopped emitting, CO2 would not remain steady but would decline at 2 ppmv per year…”

      No, because it would mean that the rise was not a result of human forcing, so the termination of our contribution would change nothing, or at least very little.

      • Under the assumption that residence time is long, it would essentially stop abruptly, and very slowly drain away.

        Since “residence time” is a meaningless concept in other contexts like your bank account, what is different about CO2 that makes “residence time” meaningful in the atmosphere?

        If for the sake of financial equilibrium in your life you balance your monthly expenditures with your monthly income, what is the “residence time” of the money that flows into your account? And is the answer any different if you balance your annual expenditures with your annual income?

        In any cycle that has large flows in and out, the only meaningful numbers for predicting future levels, whether of your bank balance, or of CO2 in the atmosphere, or of heat accumulation in the atmosphere, are net flows.

        For those who believe “residence time” is a well-defined concept independently of the straightforward logic of analyzing net flows in terms of flows in and out, the possibility of CO2 dropping precipitously as a result of abruptly terminating human CO2 emissions will make no sense whatsoever. For me it is residence time that makes no sense.

        No, because it would mean that the rise was not a result of human forcing, so the termination of our contribution would change nothing, or at least very little.

        No, because forcing is a radiation concept. Our understanding of forcing has nothing to do with our understanding of how much CO2 is entering and staying in the air.

        We know that we’re currently adding a little over 9 GtC (gigatons of carbon) to the atmosphere each year, based on fuel records, gas flares in oil fields, cement production, etc.

        And we’re monitoring the CO2 in the atmosphere with excruciating care at Mauna Loa and observing that it is rising by only 4.9 GtC a year. So the rate of accumulation is only 4.9/9 ~ 55% of the rate of emission.

        The other 45% has to be going somewhere. Unless it’s going into outer space, it has to be going into the oceans and land including vegetation and other consumers of CO2. If we reduce our 100% of emissions to 0%, is that 45% that’s currently going into the surface and vegetation going to stop instantly too? How could that happen? Would the planet notice we’d suddenly stopped?

        I don’t see how it could.
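The arithmetic in the comment above can be written out explicitly. The GtC figures are the ones quoted there; the ppmv conversion (1 ppmv ≈ 2.13 GtC) is my own approximate addition, not a thread figure.

```python
# Vaughan's arithmetic, spelled out (figures from the comment above).
emitted = 9.0        # GtC/yr, from fuel records, gas flares, cement
accumulating = 4.9   # GtC/yr staying in the air (Mauna Loa trend)
removed = emitted - accumulating

print(f"removed by sinks: {removed:.1f} GtC/yr")            # 4.1
print(f"airborne fraction: {accumulating / emitted:.0%}")   # 54%

# If the sinks kept working at today's rate after emissions stopped,
# the level would initially fall by about:
print(f"initial decline: {removed / 2.13:.1f} ppmv/yr")     # 1.9
```

This is exactly the "2 ppmv per year" decline posed in the earlier question; whether the sinks actually would keep working at that rate is the point under debate.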

      • “…what is the “residence time” of the money that flows into your account?”

        The time it takes for banking fees to drain it.

        “No, because forcing is a radiation concept.”

        I am speaking of “forcing” in a general sense, in this case, as the CO2 input from human activity.

        “The other 45% has to be going somewhere.”

        It is going into the oceans and land. Some gets re-emitted into the atmosphere in a cycle, some gets sequestered (at least semi-) permanently. The question before us is, how quickly is the sequestration process occurring?

        The current mainstream thinking is, very slowly. If the answer is, in fact, quite rapidly, then this demands another mechanism for rapid replenishment into the system to replace, or even surpass, that which is being taken out.

        “If we reduce our 100% of emissions to 0%, is that 45% that’s currently going into the surface and vegetation going to stop instantly too?”

        You have to look at the entire set of reservoirs, atmosphere, oceans, and land. That 100% is divided amongst them. If there is very little sequestration and replenishment going on, as the reigning paradigm holds, then it will just stay in those repositories.

        Under the reigning paradigm, vegetation has negligible net effect, because rotting vegetation is releasing CO2 as quickly as growing vegetation soaks it up. (However, even if that is true to a significant extent, the flows are so large that, even a fairly small deviation would upset all the calculations.)

        Again, under the reigning paradigm, the main repository is the oceans. This reservoir soaks up CO2 in proportion to the partial pressure in the air. If that partial pressure stops increasing, then the accumulated CO2 in the oceans stands pat in equilibrium.

        I hope that clears things up a bit. Keep in mind that in explaining the reigning paradigm to you, I am not endorsing it.

      • The current mainstream thinking is, very slowly.

        That would be the part of the mainstream that can’t subtract 5 GtC from 9 GtC. If you have a big bucket like the atmosphere and you pour 9 GtC into it and 5 GtC remains, 4 GtC has left the bucket.

        Of course by some definitions 4 GtC/yr is “very slowly,” in case that’s what you meant. However any bucket with a 4 GtC/yr leak in it will continue to leak 4 GtC/yr when you suddenly stop pouring in 9 GtC/yr. Instead of the level in the bucket continuing to rise at 5 GtC/yr, it will immediately start falling at 4 GtC/yr.

        I have been trying to visualize a scenario in which what you describe happens. The only way I can do it is if I imagine some mechanism kicking in to replenish the 4 GtC after we stop our 9 GtC. Such a mechanism might be operated by invisible pink unicorns, not sure what else.

        If all the world except me is right on this one, I have very suddenly suffered an unexpected aphasia and had better get someone to drive me to the hospital since I wouldn’t be safe on the road. My brain must be severely damaged!

        I just can’t imagine a rational world in which people can think the way you describe.

      • You’re still not seeing it. Think of it this way. You’ve got three buckets labelled Earth, Wind, and Water, respectively. The Earth bucket is relatively small in diameter, but the Wind and Water buckets are roughly the same diameter in a square root of 55/45 ratio. They’re all connected together with a pipe near the bottom.

        You start pouring water into the Wind bucket. Because of the pipes between them, they all start gaining water. You stop and look, and see that each one holds water at the same height, but you’ve only got roughly 55% of what you poured in in the Wind bucket. But, the volume in all three buckets is the same as the total you poured in.

        That’s how it all works. A more apt analogy would have holes in the Earth and Water buckets, which drain water slowly in the “long residence time” case, and quickly in the “short residence time” case. And, there is an additional external water feed, which adds additional water quickly or slowly, respectively, as needed to maintain the buckets at a particular level.

        er… And, there is an additional external water feed, which adds additional water slowly or quickly, respectively, as needed to maintain the buckets at a particular level.
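For what it's worth, here is a minimal numerical sketch of the "sink proportional to the excess over a baseline" picture that the bucket analogies above are arguing about. Whether the sink really responds this way is exactly the point in dispute; all the numbers are illustrative, using 1 ppmv ≈ 2.13 GtC.

```python
# Toy one-reservoir model: the sink removes carbon in proportion
# to the excess over a pre-industrial baseline.
baseline = 280 * 2.13          # GtC equivalent of 280 ppmv
level = 390 * 2.13             # GtC equivalent of 390 ppmv
k = 4.1 / (level - baseline)   # tuned so the sink removes 4.1 GtC/yr today

emissions = 0.0                # emissions stop abruptly
for year in range(1, 4):
    level += emissions - k * (level - baseline)
    print(f"year {year}: {level / 2.13:.1f} ppmv")
# prints ~388.1, 386.2, 384.3 ppmv: the level starts falling at
# ~1.9 ppmv/yr, and the decline slows as the excess shrinks.
```

Under this picture the decline does begin immediately when emissions stop (as Vaughan argues), but it decays away toward the baseline rather than continuing at a constant rate.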

        It’s not necessary to speculate on all alternatives, because we know something about what’s going on.

        The atmosphere is almost instantly in balance with the surface ocean. From the point of view of temporal development it’s thus better to look at the combination of the atmosphere and the surface ocean. Unfortunately that’s not well defined, as the boundary between the surface ocean and the rest of the ocean is not precise. Taking the Revelle factor into account, the surface ocean may rapidly take up 10-15% of the variation. Thus the other uptake processes take perhaps 40%, and this part would continue for a while at the same speed even if emissions stopped totally. Reducing emissions to 40% of the present level should keep the atmospheric concentration constant as long as the other processes of uptake continue to be as effective as they are now.

        Even emissions at the level of 40% of the present would gradually fill the ultimate potential of uptake. The biosphere cannot grow forever, and even the soil has its limits, although they are highly dependent on the state of the biosphere. The same applies to the deep ocean (the surface ocean was assumed to stay in balance with the atmosphere and wouldn’t change in this scenario).

      • “Residence time” is not a meaningless concept in financial contexts. Days of inventory on hand or days working capital are useful metrics, for example. But those figures are most useful when they change; an adverse change can be an early warning of fiscal problems.

        I agree that the absolute estimate of average residence time for CO2 is not very useful…unless it’s changing.

    • Vaughan Pratt

      Let me give you my answer to your question, which is somewhat different from Bart’s.

      Currently, the equivalent of around 4 ppmv of CO2 enters the atmosphere each year from human emissions.

      IF (the big word) we assume that human emissions are the only factor causing change in a global carbon cycle, which is otherwise at equilibrium, the atmospheric concentration should increase by 4 ppmv/year.

      However, it does not.

      It only increases at around 2 ppmv/year.

      Residence time estimates range all over the map (from 5 years to 400 years – see earlier post), but one set of studies puts the half-life of CO2 in our climate system at 80 to 120 years.

      IF (the big word again) 120 years is the half-life of CO2 in our climate system, then the “decay rate” is 0.58% of the atmospheric concentration, which turns out to be around 2 ppmv/year.

      IOW, the current observations would tend to validate a CO2 “half-life” in our atmosphere of 120 years, IF (the big word) humans are the principal cause for the increase.

      IF one assumes a “half-life” of 120 years, and no further additions into the system, it is easy to calculate how long it would take until CO2 levels came back down to pre-industrial values. But this is a purely hypothetical calculation, which has absolutely no real importance, primarily because OTHER FACTORS have not been considered.

      Max
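Max’s arithmetic above can be checked in a few lines. This is a minimal sketch of his stated assumptions only (a 120-year half-life applied to the whole ~390 ppmv atmospheric concentration), not a claim about the real carbon cycle:

```python
import math

# Hypothetical inputs taken from the comment above, not measured constants
half_life_years = 120.0
concentration_ppmv = 390.0  # approximate atmospheric CO2 concentration

# First-order decay: rate constant k = ln(2) / half-life
k = math.log(2) / half_life_years             # fraction removed per year
decay_ppmv_per_year = k * concentration_ppmv

print(f"decay rate: {k * 100:.2f}% per year")                 # 0.58% per year
print(f"initial removal: {decay_ppmv_per_year:.1f} ppmv/yr")  # 2.3 ppmv/yr
```

With these inputs the decay works out to about 0.58% per year, i.e. roughly 2 ppmv/yr, matching the observed rise that Max cites.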

      • Max, as you point out we know reasonably precisely how much CO2 we’re responsible for emitting into the atmosphere, and even more precisely how much is staying.

        In any other system of checks and balances where we know the flows in and out, we know very accurately what would happen as a result of a large abrupt change to any flow.

        The mere fact that we are clueless as to the “residence time” of a quantity whose fluxes we know accurately should be one hint that it is a meaningless quantity. Another hint should be that it is an undefined quantity.

        If you define it in terms of how long it takes the level of CO2 to change in response to a change in emissions, you cannot then turn around and claim that knowledge of residence time allows you to calculate how long it will take CO2 to decline, because that’s how you defined residence time in the first place! It would be circular logic.

        The only concept worth understanding here is how fast the CO2 level would change in response to a significant change in emissions. There is no such thing as residence time other than that concept. That concept is fundamental to standard banking practice as well as standard in any analysis of network flows.

        No one talks about “residence time” of the money in their bank account because it’s meaningless unless you define it in terms of response to change in flows. But then that’s the concept you should be using, not “residence time.”

      • Vaughan

        I can agree that there are so many very basic unknowns surrounding our planet’s carbon balance that trying to guess at “residence time” of the small piece emitted by humans seems like looking for a needle in the haystack. [The discussions on this sort of remind me of the heated debates regarding how many angels can dance on the head of a pin.]

        I only wrote that IF we assume that human emissions and natural decay are the ONLY variables, then the current observation would lead to a suggested half-life of CO2 in our climate system of around 120 years.

        But don’t forget the IF.

        That’s the BIG word here.

        Max.

      • This “residence time” stuff is proof positive that if you talk about something as though it has a real existence long enough, regardless of whether there’s any logical sense to it, people start to believe in it.

        That’s how religions get started: if everyone around you is talking about God as a real entity, with us made in His image, it’s hard to resist the impression that God is a real being like us. Only when you try to reason logically about God do you run into worries about whether he’s black or white, etc.

        Augustus De Morgan was far more logical than his colleagues at Cambridge University, who insisted on adherence to theological dogma and gave him a hard time about his atheism. He was therefore very glad to escape to London, where he joined the faculty of the newly started London University, now University College London.

        When it comes to the doctrine of residence time, I’m a devout atheist. The concept makes no sense to me, other than to the extent that it can be defined in elementary bookkeeping terms. Paraphrasing Bertrand Russell, “I have no need of that axiom.”

      • The only need I can think of for a “residence time” is estimating how effective biofuels may be. For that the 5 to 15 year estimate is fine, indicating that older-growth trees used as fuel would not be as effective at reducing carbon as rapid-growth trees or plants. Fifteen years with an uncertainty fudge implies that twenty-year-old growth is more effective left growing, or converted to long-term carbon storage commodities.

      • Max, your reasoning is right, except that you apply the decay to the total CO2 in the atmosphere, i.e. as if CO2 levels could go to zero. The real excess is 100 ppmv above equilibrium, which was 290 ppmv. That gives a half-life of about 36 years at the current sink rate.

        The huge tails considered in different models (Bern, Archer) are for enormous emissions, from 3000 GtC to 5000 GtC, far beyond the few hundred GtC humans have burned in the past 160 years. In that case the deep oceans are also affected, which is hardly the case so far. If you have several decay rates (as is the case), the combined decay rate is faster than the fastest single one, until saturation sets in, which is the case for the ocean’s mixed layer at 10% of the atmospheric increase, but not remotely (yet) for the deep oceans and long-term sequestering by vegetation.
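Ferdinand’s 36-year figure, and his point about several decay rates acting in parallel, can be sketched numerically. The two sink rate constants below are illustrative values only, not measurements:

```python
import math

# Ferdinand's figures (assumptions from the comment, not measurements)
excess_ppmv = 100.0        # excess above an assumed 290 ppmv equilibrium
sink_ppmv_per_year = 2.0   # current net uptake

# First-order decay of the excess: k = sink / excess
k = sink_ppmv_per_year / excess_ppmv
half_life = math.log(2) / k
print(f"half-life of the excess: {half_life:.0f} years")  # ~35 years

# Parallel first-order sinks combine by adding their rate constants,
# so the combined half-life is shorter than the fastest sink's alone.
k_ocean, k_vegetation = 0.015, 0.005        # illustrative values only
combined_half_life = math.log(2) / (k_ocean + k_vegetation)
fastest_alone = math.log(2) / k_ocean
print(combined_half_life < fastest_alone)   # True
```

Because parallel first-order sinks add their rate constants, the combined half-life is always shorter than that of the fastest sink acting alone, which is the point being made against the long-tail models at modest emission totals.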

    • Vaughan,

      I think you’ve understood the problem pretty well. I wouldn’t say there is any danger, or even the remotest possibility, of ever getting below 280 ppmv in the foreseeable future. It’s much more likely that some damage will have occurred to the Earth’s ecosystem, the spring has been overstretched, so to speak, and so the CO2 level wouldn’t go back to the pre-industrial level in the same timescale as it’s risen.

      • Assuming perfect elasticity, I’d envisage an exponential decay back to preindustrial CO2 (280 ppmv if that’s the value) if CO2 emissions suddenly dropped to zero, with the initial rate of decay being 9 – 5 = 4 GtC/yr and slowing at some unknown rate but enough not to go below 280.

        And yes, if some elastic limit or yield point had been reached and plastic flow of some kind had occurred, then the decay would be back to some higher value, but still starting at 4 GtC/yr.

        I suppose if there were such a thing as “complete” plastic flow it would result in virtually no decay at all, the system would just remain at the level we’d brought it to. The 4 GtC/yr decrease would still happen in the beginning, but decay to a zero rate of decrease extremely quickly.

        But even with perfect elasticity the response can depend on inertia, with different components having different inertias and hence different response times to sudden changes.
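Vaughan’s perfect-elasticity case can be sketched as a simple exponential relaxation. The numbers (390 ppmv today, a 4 GtC/yr initial net sink, ~2.13 GtC per ppmv) are approximate values taken from the surrounding comments, and the single time constant is itself an assumption:

```python
import math

# Illustrative values from the surrounding comments; the single
# exponential time constant is the "perfect elasticity" assumption.
PREINDUSTRIAL = 280.0    # ppmv
current = 390.0          # ppmv, approximate
initial_sink_gtc = 4.0   # GtC/yr net uptake the moment emissions stop
GTC_PER_PPMV = 2.13      # roughly 2.13 GtC of atmospheric carbon per ppmv

initial_sink_ppmv = initial_sink_gtc / GTC_PER_PPMV
tau = (current - PREINDUSTRIAL) / initial_sink_ppmv  # e-folding time, years
print(f"e-folding time: {tau:.0f} years")

def level(t):
    """CO2 level (ppmv) t years after a hypothetical halt in emissions."""
    return PREINDUSTRIAL + (current - PREINDUSTRIAL) * math.exp(-t / tau)
```

Under these assumptions the e-folding time comes out near 60 years; an elastic limit or differing component inertias, as discussed above, would stretch or break this single-exponential picture.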

    • Vaughan, the idea that the 2ppm/yr decline would continue if emission of CO2 stopped is incorrect. The 2ppm/yr is only occurring on condition of a rising amount in the atmosphere. It is maintaining an equilibrium between the ocean (mostly, biosphere partly) and atmosphere and their ratio of CO2. This equilibrium has a fast time-scale (5 years) that maintains it, so it is why the ocean is responding so quickly. You can see that if the CO2 emission stopped, the ocean surface would already be close to equilibrium and not absorb any more. The only way to get additional absorption by the ocean is to bring up water to the surface that has less CO2 in it, which has the time scale of the ocean circulation, maybe decades to centuries.

      • No disagreement there, that could well happen. The important point is to have numbers for these various fluxes and not try to turn them into some notion of “residence time.”

        Incidentally where did you get the 5 year response time for the ocean? This seems to assume a good understanding of what happens all the way to the ocean bottom. I wasn’t aware we had such an understanding. For all I know the ocean time constants could be 20 years or more, I don’t know any way of inferring 5 years.

      • I have described my understanding of the situation in a message of this thread

        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104904

      • The 5 years is the atmospheric response time to the ocean. I believe it is derived as the CO2 in the atmosphere divided by the downward (or upward) sea-surface flux of CO2. The ocean response is harder to determine because it involves its mixing rate and circulation. This means that in 5 years the CO2 has been able to directly interact and equilibrate with the ocean surface. I believe the actual chemical equilibration is faster at that surface, so the 5 years is the time scale for replacing atmospheric CO2 with ocean CO2 as it comes to the new equilibrium.
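The ~5-year figure described above is a turnover time: reservoir size divided by gross exchange flux. The reservoir and flux numbers below are rough textbook-scale values, not figures from this thread; they show how a turnover time of a few years arises, and why it says nothing about how fast an excess decays:

```python
# Rough textbook-scale reservoir and flux numbers (assumptions), used
# only to show how a turnover time of a few years arises.
atmosphere_gtc = 750.0    # approximate atmospheric carbon reservoir, GtC
ocean_exchange = 90.0     # GtC/yr gross air-sea exchange (one direction)
land_exchange = 60.0      # GtC/yr gross exchange with the land biosphere

turnover_ocean_only = atmosphere_gtc / ocean_exchange
turnover_all = atmosphere_gtc / (ocean_exchange + land_exchange)
print(f"ocean exchange alone: {turnover_ocean_only:.1f} yr")  # 8.3 yr
print(f"all gross exchanges: {turnover_all:.1f} yr")          # 5.0 yr
```

This is the standard distinction between turnover time (fast, set by gross exchange) and adjustment time (slow, set by net uptake) that much of this thread is circling around.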

      • I think you would find that ocean upwelling is much richer in CO2 than surface water and has a much lower pH – it is a result of both respiration by organisms at depth and volcanic venting.

      • If that were generally true of upwelling, it would remove one of the hopes for long-term sinks to absorb this new CO2, so it only makes things worse.

      • Why would you doubt and not simply look it up?

        ‘Our technology uses kinetic wave energy to bring up higher-nutrient deep water. In the presence of sunlight, and assuming appropriate ocean environmental conditions, the enhanced nutrients generate blooms of phytoplankton which absorb dissolved CO2 and generate oxygen through the process of photosynthesis. When the phytoplankton are consumed by higher trophic levels such as zooplankton and fish, or when the phytoplankton die, some of the absorbed CO2 as well as other biochemical contents sink. Some of this is remineralized and suspended in mid ocean depths, some sinks to the ocean floor, and some is sent back up to the surface by natural upwelling events (currents, storm-generated upwelling, heating/cooling cycles such as El Nino, etc.). This “biological pump” is the principle physical process responsible for the higher concentrations of nutrients, and CO2, which are found beneath the upper sunlit zone (typically 50 to 80 meters) of the ocean. Within the upper ocean’s sunlit zone, however, the nutrients are quickly consumed, with the result that phytoplankton blooms diminish until upwelling brings up more nutrients.

        Until recently, conventional wisdom regarding limits to phytoplankton productivity in the upper sunlit zone of the ocean cited the Redfield Ratio as the limiting factor to how much net benefit could accrue from wave-driven ocean pumps. The Redfield Ratio limits the amount of carbon that each phosphate atom can recycle. For the average of all the ocean is it 106 carbon atoms for every phosphate atom. If CO2 recycling efficiency is limited by phosphate, and deeper water contained proportional concentrations of nitrate, phosphate and dissolved CO2, then net additional absorption from upwelling of phosphate would be balanced by the higher concentrations of CO2 brought upward – at best a zero sum game.’ http://www.atmocean.com/sequestration.htm

        The ‘Atmocean’ geo-engineering proposal involves encouraging nitrogen limited micro-organisms and thus increasing the net carbon sink. These nitrogen fixing organisms are typically very toxic – so I would have my doubts.

        Someone hoping CO2 deficient sub-surface water would rise and scrub the atmosphere?

      • The question is whether the average CO2 in the deep ocean that will eventually surface exceeds the CO2 in contact with a high-CO2 atmosphere, especially as CO2 doubles. I assumed it wasn’t going to match that surface layer CO2, in which case it would be a further sink, but with a long time scale.

      • The CO2 is supersaturated when it upwells – it bubbles out of solution as the pressure and temperature decline. The surface in oceans is nowhere near saturation at most times – although biological activity at times will increase carbon dioxide levels at some locations.

      • You are giving a mechanism by which the ocean returns carbon to the atmosphere, but it is just part of the cycle, not a net source or sink, since the ocean ultimately got this carbon from the atmosphere.

      • No, I was addressing your misapprehension of aspects of the carbon cycle – but if you want to move the goalposts to something else – feel free.

        The ocean is a large net sink – through both the biological carbonate and organic C pathways. Some of it is returned to the surface in upwelling.

        You had this simple concept whereby deep ocean water was not exposed to high atmospheric concentrations of CO2 and therefore had lower concentrations. I was simply very politely correcting you without making much of a fuss about it – but if you don’t want correct information and simply want to argue from some preconceived position and then to subtly disparage me – then I cannot be bothered with you either.

  36. How about the chicken or the egg question…..
    Does warming raise atmospheric CO2 levels – by what rate?
    Does raising atmospheric CO2 cause warming – by what rate?

    How about some laboratory experiments to settle this?

    • BLouis79

      Does warming raise atmospheric CO2 levels – by what rate?
      Does raising atmospheric CO2 cause warming – by what rate?

      How about some laboratory experiments to settle this?

      Experimental data show that sea water can dissolve more CO2 at lower than at higher temperature. This gives solubility at various salinities and temperatures:
      http://www-naweb.iaea.org/napc/ih/documents/global_cycle/vol%20I/cht_i_09.pdf

      The amount of CO2 dissolved is also related to the atmospheric concentration or partial pressure of CO2. Some more stuff on this:
      http://cdiac.ornl.gov/oceans/co2rprt.html#co2sysinsea
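The first point, that colder sea water dissolves more CO2, follows from Henry’s law. The constants below are approximate literature values for CO2 in fresh water (kH ≈ 0.034 mol/(L·atm) at 25 °C, van ’t Hoff temperature dependence ≈ 2400 K); salinity corrections are ignored in this sketch:

```python
import math

# Approximate literature constants for CO2 in fresh water (assumptions):
# Henry's constant ~0.034 mol/(L*atm) at 298 K, van 't Hoff factor ~2400 K.
KH_298 = 0.034
VANT_HOFF_K = 2400.0

def henry_constant(temp_c):
    """Approximate Henry's-law constant for CO2 at temp_c degrees Celsius."""
    t_kelvin = temp_c + 273.15
    return KH_298 * math.exp(VANT_HOFF_K * (1.0 / t_kelvin - 1.0 / 298.15))

# Dissolved CO2 at a fixed partial pressure is proportional to the Henry
# constant, so colder water holds more.
ratio = henry_constant(5.0) / henry_constant(25.0)
print(f"solubility at 5 C vs 25 C: {ratio:.2f}x")
```

At a fixed partial pressure, water at 5 °C holds roughly 1.8 times as much dissolved CO2 as water at 25 °C under these constants, which is the sense in which warming oceans outgas and cooling oceans absorb.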

      AFAIK, there are no empirical data based on reproducible experimentation or physical observations providing evidence that higher atmospheric CO2 concentrations will lead to higher temperature; these estimates are based on model simulations backed largely by theoretical deliberations supported by the GH theory.

      Max

      • Regardless of what’s causing the temperature to rise, we know that it’s risen about half a degree in the past four decades. We can therefore ask whether the oceans emitted additional CO2 as a result of that warming, without having to bring up the scarlet letter A for anthropogenic.

        The estimate of 9 GtC of human emission of CO2 includes things like cement production (about 4.5% of the 9 GtC) and gas flaring (about two-thirds of a percent). However it does not include increased CO2 emission from the oceans.

        I’d be inclined to put that on the ledger as a negative component of the CO2 nature is taking down from the atmosphere. That is, subtracting the known increase of 5 GtC/yr from the known human emissions of 9 GtC/yr gives 4 GtC/yr as downtake, but if a warmer ocean is emitting say 1 GtC/yr more than at its 1970 temperature then the downtake, net of that extra ocean emission, is really 5 GtC/yr. But since ocean temperature changes only slowly, for prediction purposes only the 4 GtC/yr figure is relevant to the question of what would happen if the 9 GtC of our annual emissions dropped suddenly to zero. We’re actively producing the 9 GtC in the sense that it would drop to close to zero if aliens moved us all to a zoo, but we’re not actively causing the ocean to emit extra CO2 in that sense.
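The ledger described above can be written out explicitly. The 1 GtC/yr of extra ocean emission is the commenter’s illustrative guess, not a measured number:

```python
# Bookkeeping from the comment above; the 1 GtC/yr extra ocean emission
# is an illustrative guess, not a measured number.
human_emissions = 9.0     # GtC/yr: fuels, cement, gas flaring
observed_increase = 5.0   # GtC/yr staying in the atmosphere

net_downtake = human_emissions - observed_increase           # 4 GtC/yr
extra_ocean_emission = 1.0                                   # hypothetical
downtake_net_of_ocean = net_downtake + extra_ocean_emission  # 5 GtC/yr

print(f"net natural uptake: {net_downtake:.0f} GtC/yr")
print(f"uptake before the warm-ocean offset: {downtake_net_of_ocean:.0f} GtC/yr")
```

Only the 4 GtC/yr net figure matters for the thought experiment of emissions suddenly dropping to zero, since the warm-ocean term changes only slowly.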

      • but not the last decade

  37. The eco-whacko CO2 fearmongering by the secular, socialist neo-communist bureaucracy has finally ‘jumped the shark’ with the UN-approved EPA science authoritarians’ proposed ‘poisonous ‘ CO2 controls.

  38. The concept of residence time is perhaps the strangest in this entire circus-like debate. It involves the following:
    Instant cessation of human emissions, which is impossible.
    The resulting behavior of all natural sources and sinks for the next 100 to 1000 years, which is unknowable.
    The assumption that the CO2 increase is all anthro, which is controversial.
    A prediction which is untestable.

    Where is the science in this impossible, unknowable, untestable, controversial conjecture? Nowhere, but the propaganda value is high so we read about it all the time. This is the sad state of the science.

    • Talk about exaggerating uncertainty

    • Instant cessation of human emissions, which is impossible.

      It’s certainly impossible for scientists to do the experiment, but that’s true of a lot of climate science.

      It is not impossible however that consuming carbon based fuels more efficiently while also cutting over to non carbon based fuels could reduce CO2 emissions by say 10% or 30% or some number. Calling this impossible is defeatist.

      It is then interesting to know in advance by how much the CO2 level will change.

      The general belief as I understand it is that CO2 will continue to rise at the same rate as when our CO2 emission was at that level previously.

      I would project a quite different outcome: that it would drop considerably faster than expected.

      As I understand things, that puts me in neither camp: I’m neither a skeptic in the usual denier sense nor a “warmist” in the doom-and-gloom sense.

      I do however foresee doom and gloom if the defeatists among us win out.

      • (Just to be clear about “doom and gloom,” I’m only referring to the projected 2 °C rise by 2100 assuming business as usual. I don’t have any strong views as to the benefits and downsides of a warmer planet, which is a lot harder than projecting something simple like temperature.)

      • Peter Davies

        The general state of pessimism that is evident in both camps is symptomatic of existing global recessionist thinking which is affecting consumer behaviour and business confidence world-wide, with the notable exceptions of China and India.

        While I am not over-optimistic that entrenched attitudes can be changed within the next few years, I believe that once the economic cycle turns the corner once more, the pessimists will be overridden by the pragmatic and sensible.

        I would personally prefer to see the gradual replacement of dirty technology with cleaner and more sustainable processes, and a better understanding by western cultures that we need to reduce our environmental footprint.

      • I am not pessimistic. I am happy that the greens are losing their ideological war. As McCauley put it, every political movement ultimately expires from an excess of its own principles. Climate change was that excess.

      • Peter,
        As soon as windmills are seen for the waste of space and resources they are, and food is not being wasted as fuel and the CO2 obsession passes- and that day appears to be coming sooner than later- we can indeed make great progress in improving our stewardship of the planet.

      • Windmills will remain the sad relics of one of the greatest accomplishments of man, the erection of the monumental political and financial structure on the massively inadequate foundation of CO2’s radiative effect on climate. That the financial bubble and the ungrounded power grab were so magnificent is testament to the greatness of man, and to the greatness of his folly.
        =============

      • we don’t need to reduce our environmental footprint. We need to keep our environmental footprint clean. We need to know what is harmful and what is helpful. CO2 happens to be helpful. The best times in earth’s history for plants and animals were during times of higher CO2. Double CO2 and life would get better. Cut it in half and life as we know it would not be sustainable.

      • Herman, you can’t prove that double CO2 will be good any more than anyone can prove double CO2 will be bad. I can make a better case that double CO2 will increase the chance of a new ice age.

      • Vaughan, I did not say that a, for example, 30% reduction over 40 years was impossible, although I certainly think it unwise and unlikely. I am addressing the state of the science, which on this residence issue is mere speculation, spanning a huge and irreducible range of possible values.

        Speculation is important to science because that is where new ideas come from, but speculation per se is not science. Science is the process whereby speculation is turned into knowledge. In this case there is no knowledge to be had, so claims of knowledge by the IPCC and others are a clear case of false confidence. False confidence is the crime against science that pervades CAGW. This so-called residence issue is the worst case of false confidence that I know of, in the general debate. That is my point.

  39. A contributor to my blog has performed an empirical experiment to determine the degree to which back radiation slows the rate of cooling of the ocean surface.

    http://tallbloke.wordpress.com/2011/08/25/konrad-empirical-test-of-ocean-cooling-and-back-radiation-theory

    • The “experiment” didn’t test that, TB. It wasn’t clear to me why the “IR reflector” should have affected cooling rate much or in which direction. I would have predicted little or no effect, and the small differences recorded may simply have represented experimental variability unrelated to the independent variables. Clearly, back radiation was substantial with or without the “reflector”, and without it (e.g., in a very cold room), the cooling would have been faster, although it would have been hard to separate the roles of conduction, radiation, and latent heat transfer.

      There is no easy but decisive home-made experiment for back radiation, although a number of better ones have been discussed in previous threads.

      • For clarity, my point was that without as much back radiation (as in a very cold room), cooling would have been faster, even if one corrected for conduction/convection and latent heat transfer differences.

      • There is no easy but decisive home-made experiment for back radiation, although a number of better ones have been discussed in previous threads.

        That’s great Fred, which of them have been conducted and had their results published?

      • See the most recent SkyDragon thread and the Postma thread, where I believe you’ll find examples. I know Pekka described one device which utilized back radiation for a substantial heating effect (commercial, though, not home made). I think Eli Rabett described others, and even more were described by other participants.

      • Fred, they are both massive threads. Needle in a haystack doesn’t cover it. But surely a more sophisticated experiment has already been done and published by climate scientists?

      • How do experiments on 200 ml Tupperware containers extrapolate to ocean-sized ones?

        One might expect the response times for a 200 ml Tupperware container to be very different from those for a Tupperware container as deep as the ocean. It will take time for the heat to reach the bottom, and that time will depend on how convection works in the ocean, which might be different from an ocean-depth Tupperware container unless it was also ocean-wide and subject to similar tidal and wind forces etc.

      • Hi Vaughan. All valid criticisms, and I’m sure when you or Fred shows us the better financed and more rigorously conducted experiment that must, surely, have been done by the climate professionals, we’ll be able to learn much about ways in which we might improve the experimental setup.

        Just waiting for the link.

      • Those crickets are chirping loudly again tonight.

      • when you or Fred shows us the better financed and more rigorously conducted experiment that must, surely, have been done by the climate professionals, we’ll be able to learn much about ways in which we might improve the experimental setup. Just waiting for the link.

        Here you go (h/t Michael Roderick).

        http://www.nature.com/nature/journal/v453/n7198/pdf/nature07080.pdf

        Domingues et al, “Improved estimates of upper-ocean warming and
        multi-decadal sea-level rise”, Nature 453 1090-1094 (19 June 2008).

        “To estimate ocean heat content and associated thermosteric sea-level
        changes from 1950 to 2003 (see Methods), we use temperature
        data from reversing thermometers (whole period), expendable
        bathy-thermographs (XBTs; since the late 1960s), modern and more
        accurate conductivity–temperature–depth (CTD) measurements
        from research ships (since the 1980s) and Argo floats (mostly from
        2001).”

        No Tupperware containers, I imagine their budget didn’t run to one that big.

      • Vaughan, rehashing XBT data and splicing in ARGO data is not an empirical test. Moreover, Domingues, Levitus, and all the rest have it badly wrong with the pre-ARGO data.

        Read and absorb this when you can find 20 minutes:
        http://tallbloke.wordpress.com/2010/12/20/working-out-where-the-energy-goes-part-2-peter-berenyi/

      • Vaughan,
        The experiment was not designed to replicate the exact conditions of the oceans or isolate the 15 micron LWIR band. The purpose was to see if backscattered LWIR could alter the cooling rate of water that is free to evaporatively cool. The experiment, while simple, was able to show that materials that are free to evaporatively cool cannot be handled with black-body radiation equations, as climate science tries to do.

        The experiment shows a readily detectable difference in the cooling rate of water that cannot cool evaporatively when subjected to differing amounts of backscattered LWIR.

        The experiment shows no detectable difference in the cooling rate of water that can cool evaporatively when subjected to differing amounts of backscattered LWIR.

        I would suggest that while it is highly likely that backscattered LWIR around the 15 micron band does have some effect on the cooling rate of the oceans, the empirical evidence suggests that the effect is far less than climate scientists assume.

      • The role of evaporation in dissipating the thermal energy absorbed from about 330 W/m^2 back radiation plus about 160 W/m^2 absorbed solar radiation is known to be quite a small fraction of the total – about 80 W/m^2 based on empirical data for global evaporation/precipitation rates and the energy involved. That 80 will come from both the solar and backradiated component, but even if it all came from the latter, it would leave most of the back radiation to be absorbed as a contributor to total ocean heat content. In that sense, the described experiment was unnecessary, because we already have quantitative data on the fractional role of evaporation.

        As a test of the ability of backradiated IR to slow cooling, the test is uninformative, because with or without the “reflector”, the water is receiving extensive IR back radiation, and perhaps some “forward” solar contribution from visible light. The reflective material is presumably interfering with some of this while contributing its own IR. The only way to avoid this problem would be to conduct the test in a very cold room with no incident visible light, but then one runs into other problems. The lack of analogy with true ocean conditions of depth, humidity, wind speed, convection, and other variables cited by Vaughan above and others elsewhere also precludes meaningful interpretation.

        Mainly, though, I’m not sure why there is a need for this experiment. The substantial magnitude of backradiated IR absorbed by the oceans is measurable and well established. Its ability to contribute to ocean thermal energy after absorption in the skin layer is simply a function of thermodynamics – surface temperature can be thought of as a valve controlling the rate at which heat from below can escape via the surface. If anyone doubted this, they could probably confirm it with a simple experiment in which water continually heated from below at a defined wattage is covered with a thin film of material whose temperature can be varied. When the film temperature is raised, one can predict that the temperature of the entire water content will rise, but if someone wants to test this, they should proceed.

      • Fred,
        It is easy and it is decisive. The cost is not high. I urge you to try it for yourself. I specifically designed the experiment to be replicable by others. I have run the experiment many times now and I am confident you will get similar results, and that these results are not the result of “experimental variability”. I have found that the mechanism of evaporative cooling for liquid water significantly alters the impact of backscattered LWIR on its rate of cooling.

      • I think Pekka’s observation that the air above the ocean is a lot more humid than under the test conditions, thus allowing greater evaporation has to be taken into account. That will reduce the differential found in the tests. I think that you are likely correct that the radiative flux doesn’t restrict cooling as much as is parameterised in the models, but you need to do more runs with a wetter local atmosphere to see what difference that makes.

      • Yes, I would agree that the air over the oceans is more humid than that used in the test; however, evaporation would need to be very restricted, as in Test B, before backscattered LWIR had much influence. From the continuing water cycle on planet Earth we can see that evaporative cooling is alive and well over the oceans. One of the benefits of small 200ml test containers is that they cannot fit too many of Pekka’s red herrings :)

        That said, the entire experiment runs on batteries (even the new peltier cooled “Sky”) and I live two streets away from Sydney harbour. Moist air and sea water is available.

      • Herrings. Lol.

        Well then, what a cool location you live in! The deck of a yacht in Sydney harbour would be the perfect place to re-run the experiment. you can simulate some ocean wave turbulence by wandering across the deck with your camera. Don’t spill any of that G&T into the test fluid though.

        We want pics! ;)

      • Don’t spill any of that G&T into the test fluid though.

        The Gerlich and Tscheuschner paper had always struck me as more the sort of thing a troll would write than serious theoretical physicists. Thanks to your chance remark here, tallbloke, it occurs to me that this pseudonymous troll might well have chosen names whose initials were those of the beverage sustaining him (or her) while he penned his magnum opus.

        This would explain a lot.

      • The experiment may be a step in the right direction but not decisive. The 10mm styrofoam with foil face has a much higher R value than the Saran wrap. It proves the value of foil faced insulation though. To get the experiment to indicate IR impact only, the R values of both “ceilings” should be the same or the difference in R value calculated. You will probably not see a measurable difference. A better test would be Saran wrap versus mylar foil, about the same R value.

      • My comment at 10:47 PM – comment 105157 – was written after all the above and was intended as a response to the various comments.

        There is no clearly practical way to duplicate ocean absorption and emission of energy via a simply home-made design, but the described experiment, despite its good intentions, gets the quantitation wrong and is subject to too many confounding variables to tell us very much. If it is an attempt to demonstrate that back radiation doesn’t play a major role in contributing to ocean heat content, it is probably futile, because that role is well established by evidence described in part by me above and by many of us in additional detail elsewhere. If it is simply designed to show that latent heat transport is an energy dissipating mechanism, there is no problem with that goal, but the fractional contribution of latent heat transport is already known, and while not trivial, is relatively small compared with upwelling IR, which is the main mechanism by which the combined energy of solar radiation and backradiated IR is emitted to space.

      • Fred, His experiment won’t prove anything other than radiation is one of the ways heat flows, that has pretty much been figured out already. It may, if he gets it right, show that convection and conduction also are ways that heat flows. That is not particularly a new concept either. Since his experiment is at roughly standard temperature and pressure, it will probably show that the latent heat component of the convection initiated by conduction is the primary source of cooling of the water followed by conduction and radiative cooling. No Nobel prize there either. Now if he could prove that there is a clear radiative window from the upper atmosphere to the surface that did not require radiative energy from the upper atmosphere to excite GHG molecules which are statistically more likely to transfer thermal energy to nitrogen and oxygen molecules which are statistically more likely to transfer that thermal energy to other nitrogen and oxygen molecules, then there may be a prize. I think that is the conduction that Dr. Pratt was talking about.

      • The impossibility of a fictional concept contributing to any real world effect has been comprehensively demonstrated by myself and numerous others in numerous posts. The fictional concept arises from making a conceptual distinction between upwelling and downwelling radiation. Radiation in the open atmosphere moves in all directions over relatively short distances (the mean free photon path) that vary from centimetres to kilometres – but on average outward from warmer to cooler.

        The concept of directional measurement is misguided. One can point an instrument down and get a measurement, point it up and get another measurement, point it sideways get another. If, however, there is a vector addition of the paths of IR radiative flux – all of the radiative paths sum to loss of energy from oceans to the atmosphere and thence to space.

        The loss of energy from the ocean in the IR is shown in the temperature observation of the top microns of the ocean. IR radiative flux in and out occurs from the top 10 microns or less. The ‘skin’ temperature is typically cooler than the underlying water showing without any doubt that energy in the IR band is being lost by the oceans to the atmosphere. The oceans cannot be heated by IR radiation from the atmosphere because more IR is being emitted by oceans than is being received.

        The ocean is heated by the Sun and loses heat from the surface to the atmosphere through, for the most part, evaporation and net outward radiation – in about equal parts. For some incomprehensible reason, agreement on even a simple concept of energy dynamics seems impossible. There is little purpose in further discussion as this has already been canvassed extensively from many directions. I will leave it to the reader to look at the logic of the evidence and decide who is right or wrong.

      • Fred and Vaughan’s disdain for actual experiments speaks volumes. They firmly believe that the theoretical basis of their belief in the ability of downwelling radiation to directly heat the oceans trumps any possible physical observation. Fred further makes his judgement of the value of this experiment on the basis of what he sees as the two possible motivations of the experimenter, while discounting the possibility that the experimenter is simply motivated by a desire to uncover the scientific truth about the way nature works.

        Climategate in a nutshell.

        The net radiative flux is smaller than the sum of the convective components by a long way. It’s not 80 W/m^2 vs 320 W/m^2 downwelling IR; it is 80 W/m^2 evapo-transpiration + 20 W/m^2 thermals = 100 W/m^2 vs 66 W/m^2 net radiative flux.

        How much this radiative flux actually slows the cooling of the ocean can’t be determined by a simple experiment such as this, but Konrad’s experiment indicates it doesn’t slow the cooling as much as theory based on Fred’s erroneous calculation says it will.

      • Tall Bloke, I don’t think Vaughan has that big an issue with the experiment, just that the conclusions being drawn are a leap. The small water dishes will show a smaller IR impact than a deeper ocean would because of the radiation window of water. There is a big difference between the spectra of water vapor and liquid water, and both vary with temperature. The ocean temperature varies a lot with depth, so any experiment will have some limitation. Konrad’s experiment has limits in the size of the source and the difference in insulating value of the ceilings. Vaughan’s boxes have the same issues, though of somewhat smaller magnitude. Using real salt windows, which are very small, increases the issue of scale. Eli’s wrapping a light bulb with foil shows that light bulbs get hotter if you wrap them with foil; foil is a good reflector of light in most wavelengths, so it is not a great experiment to show the radiative impact of foil wrapped around the Earth, because the Earth emits different wavelengths. Somewhere with any experiment you have to calculate how big the differences are between the experiment and the real world.

      • Some of the above discussion represents different ways of looking at the same thing, but there are some misconceptions as well. The following, illustrated by the TFK Energy Budget – see, for example Fig. 2b – is one accurate way of looking at the involved processes. I will focus on the oceans, but land data are roughly similar.

        During the day, the ocean is heated (its temperature rises). This comes from absorbed solar radiation (about 170 W/m^2) and a larger quantity of absorbed back radiation from the atmosphere (about 340 W/m^2). This total (about 510 W/m^2) is paralleled by a set of heat loss mechanisms by which the ocean releases heat from its skin layer to the atmosphere and to space. The quantities will vary over the course of a day, but most of the heat loss is via IR radiation (about 400 W/m^2), a smaller fraction by latent heat transfer (about 95 W/m^2), and the rest (about 15 W/m^2) by conduction. It is not unreasonable to subtract the 340 back IR radiation from the 400 upwelling IR radiation to conclude that the net IR flux is upward at about 60 W/m^2, but it is also useful to realize that this is the difference between two large fluxes with different sources and destinations, and that the back radiation is quantitatively the most important heating source during the daytime. It’s also important to realize that each of the heat loss mechanisms releases thermal energy from both the solar and back radiated component and not from either one alone. In other words, the upwelling IR is not a response to the back radiation but to the total absorbed energy, and the same is true for latent heat and for conduction.
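        A back-of-the-envelope check (a sketch in Python, using only the rounded TFK-style W/m^2 averages quoted in the paragraph above, not independent measurements) makes the bookkeeping explicit:

```python
# Rough TFK-style ocean surface energy budget.
# All values in W/m^2, rounded averages as quoted in the comment above.
solar_absorbed = 170.0
back_radiation_absorbed = 340.0
total_absorbed = solar_absorbed + back_radiation_absorbed  # 510 W/m^2

ir_emitted = 400.0
latent_heat = 95.0
conduction = 15.0
total_released = ir_emitted + latent_heat + conduction     # 510 W/m^2

# The "net" IR flux is the difference of two large opposing fluxes.
net_ir_up = ir_emitted - back_radiation_absorbed           # 60 W/m^2 upward

print(total_absorbed, total_released, net_ir_up)
```

        The balance (510 in, 510 out) illustrates the point being made: each loss mechanism dissipates the combined absorbed energy, not the solar or back-radiated component alone.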

        At night, the ocean cools because the solar component is absent, and so the emitted energy exceeds the energy absorbed by back radiation. Here again, it is reasonable to refer to the back radiation as responsible for “reduced cooling”, but this is only true on a statistical level. At the level of individual water molecules, the back radiation is still increasing absorbed thermal energy (each absorbed photon is causing the kinetic energy of water molecules to rise), and the upwelling IR is a separate process by which IR photons are emitted from the surface accompanying a loss of kinetic energy in the water. Thus, the “net” process is informative mathematically, but the separate molecular events paint a clearer picture of what is actually happening.

      • Fred, part of the misunderstanding is the use of the average energy fluxes. You say during the day there is 170 Watts from solar with 340 back radiation. It is more like 340 solar in at the surface plus 140 solar into the atmosphere. In the band between 30N and 30S about 70% of the solar is absorbed, so the individual fluxes per grid cell vary considerably by latitude and by time of day. Most of the warming impact of the atmosphere would be felt in the higher latitudes where the thermal energy is transported by weather patterns. So you are using a macro view to communicate a micro effect.

      • Dallas – Yes, there are variations by latitude. Both absorbed solar and back radiated energy are greater at low latitudes than at high latitudes (the higher the ocean temperature, the greater the upward IR and thus the amount of energy that is redirected downward as back radiation). The 170 W/m^2 solar is an average, as is the 340 W/m^2 back radiation. The ratio between the two will vary by latitude but the total energy absorbed from back radiation is about twice that contributed by the solar component.

      • Regarding time of day differences, solar radiation only contributes to ocean heat content during half a day while back radiation contributes for a full 24 hours (although more during the day than at night). There will therefore be many times during daylight hours, particularly at lower latitudes, where the solar contribution exceeds that of back radiation, and exceeds 340 W/m^2, so that the 170 W/m^2 solar contribution is essentially a 24 hour average as well as a latitudinal average. The ocean ultimately responds to these contributions averaged out over time.

      • Fred,
        are you actually claiming that the phase change of water molecules during evaporation is not the dominant cooling process for the oceans?

      • Konrad, define “dominant.”

        If you’re claiming that, averaged over a day, the ocean is losing heat, then shouldn’t it all freeze after enough days?

        If as one would naturally expect the oceans neither gain nor lose significant heat per day when averaged over say the last million years, then cooling and heating should balance out. In that case one can equally well ask about “dominant heating.”

      • Fred and Vaughan’s disdain for actual experiments speaks volumes.

        Said the man who wrote

        Vaughan, rehashing XBT data and splicing in ARGO data is not an empirical test

        How is that not “disdain for actual experiments”? Are you saying that measuring the temperature of a 200 ml Tupperware container of water is an “empirical test” of the number of years it takes for changes in surface temperature to be felt at a depth of 2 km in the ocean, while actual measurements at depth in the ocean are not?

        Tallbloke, what am I missing here? If there’s an Olympic event for the world’s most whacked-out notions of “empirical test” and “actual experiment,” why are we not seeing it on the sports pages?

      • Vaughan,
        I find your questions of Tallbloke puzzling. If you wanted to test if backscattered LWIR can slow the cooling of water that is free to evaporatively cool how would you do it? Fool around with incomplete ocean temperature measurements from disparate sources involving small quantities of LWIR acting over years? Or would you just directly test the impact of a larger amount of LWIR on a small sample of water that is free to evaporatively cool?

        Climate scientists just assumed that backscattered LWIR could actually slow the cooling of Earth’s oceans. I don’t believe they have done the simplest of checks in the lab.

      • Climate scientists just assumed that backscattered LWIR could actually slow the cooling of Earth’s oceans.

        You have the advantage over me, Konrad, of having read papers by climate scientists who have stated such a thing.

        Planets tend to drift into equilibrium until some asteroid or life form comes along and shakes them out of it. Whatever contribution “backscattered LWIR” might have been making, it would tend to have been balanced by an equal and opposite term or terms as long as the planet remained in equilibrium.

        Had you been speaking of an equilibrium situation, the relevance of your experiment might have been clear. However you’re claiming relevance to cooling, and slowing thereof, which are departures from equilibrium.

        I would be very interested in what perturbations in a 200 ml Tupperware container can tell us about perturbations in an ocean many kilometers in depth. As far as I can tell you have not made this at all clear.

      • Vaughan Pratt says:
        Are you saying that measuring the temperature of a 200 ml Tupperware container of water is an “empirical test” of the number of years it takes for changes in surface temperature to be felt at a depth of 2 km in the ocean, while actual measurements at depth in the ocean are not?

        Hi Vaughan, did you read the article on my blog i pointed you to? The Pre-ARGO measurements and/or the splice between XBT and ARGO are out of whack:
        http://tallbloke.wordpress.com/2010/12/20/working-out-where-the-energy-goes-part-2-peter-berenyi/

      • Both the experiment and the real case of oceans involve several subprocesses. When interpreting the results it’s important to understand the relative strengths of the subprocesses. Evaporation affects all cases where it’s allowed. In some cases it is very strong and dominates all the others; your experiment without a cover is such a case. As the evaporation is much stronger than any other factor, it’s difficult to observe the others at all.

        Your conclusion that evaporation dominates is correct when it dominates. It did in your case, but it doesn’t in the case of the oceans. It’s significant there as well, but it doesn’t dominate. Therefore you fail to see the influence of IR, although your other experiment did show that it exists. Therefore your case, which did allow dominating evaporative cooling, is useless for understanding other situations, which are not dominated by evaporation.

        An open surface does lead to strong evaporation when the air is so dry that the skin temperature of the water is much higher than the wet bulb temperature of the air, as it is in your experiment. On the ocean surface the wet bulb temperature of the lowest air is close to that of the ocean skin. Then the evaporation is much weaker.

        Try to repeat the experiment in a closed space, where the relative humidity is kept at 95%. That would be more relevant.

      • Pekka,
        The experiment remains relevant for any body of liquid water on the planet where evaporative cooling is possible in some measure, which would include almost all of Earth’s oceans. If backscattered LWIR affected liquid water the way climate models have it, then Test A would have shown a faster cooling rate than Test B (which it does), but it would also have shown the same rate of temperature divergence between the two water containers (which it does not).

        To claim that backscattered LWIR can heat liquid water that can evaporatively cool, what is needed is empirical evidence obtained by a controlled lab experiment. I have yet to see such results. At present I have demonstrated that the impact of LWIR on the cooling of water is readily detectable when evaporative cooling is prevented. I have also demonstrated that the effect of LWIR is far less when evaporative cooling is allowed, possibly negligible.

      • I have also demonstrated that the effect of LWIR is far less when evaporative cooling is allowed, possibly negligible.

        And here’s one of the reasons why:

        Belmiloud, Djedjiga, Roland Schermaul, Kevin M. Smith, Nikolai F. Zobov, James W. Brault, Richard C. M. Learner, David A. Newnham, and Jonathan Tennyson, 2000. New Studies of the Visible and Near-Infrared Absorption by Water Vapour and Some Problems with the HITRAN Database. Geophysical Research Letters, Vol. 27, No 22, pp. 3703-3706, November 15, 2000
        http://www.tampa.phys.ucl.ac.uk/djedjiga/GL11096W01.pdf

        Abstract.
        New laboratory measurements and theoretical calculations of integrated line intensities for water vapour bands in the near-infrared and visible (8500-15800 cm−1) are summarised. Band intensities derived from the new measured data show a systematic 6 to 26% increase compared to calculations using the HITRAN-96 database. The recent corrections to the HITRAN database [Giver et al., J. Quant. Spectrosc. Radiat. Transfer, 66, 101-105, 2000] do not remove these discrepancies and the differences change to 6 to 38%. The new data is expected to substantially increase the calculated absorption of solar energy due to water vapour in climate models based on the HITRAN database.

        Conclusions
        Table 1 (final column) also sets out values for the comparison of our “best” total intensities of the water polyads with those given in HITRAN-COR. It should be stressed that the line intensities of our observations and the HITRAN database differ from line to line and that the given values are only valid for room temperature. The measurements at 252K yielded slightly different ratios, but are omitted here for clarity. For a major fraction of the lines, the principal part of the change takes the form of a re-scaling of the data on a polyad-by-polyad basis; we recommend the use of the factors set out in Table 1 as an interim solution. Other databases, such as GEISA [Jacquinet-Husson et al., 1999], are based on the same laboratory data and will therefore require the same correction. Detailed line-by-line data including both experiment and theory will be published [Schermaul et al., 2000].
        There is another lesson to be learned. Making sure the database is valid is a necessary foundation for all modelling of atmospheric radiation transfer, especially so when theory and observation fail to agree.

      • That raises a very good point – one which I have alluded to many times in the past – that if back-radiation reduces the net radiative loss from the surface, there simply has to be more joules available for evaporation.

      • Konrad,

        You are plainly wrong. You have not shown what you claim. There are many effects that limit evaporative cooling; your results apply only when those are unimportant, but they are important for the real oceans. Therefore your results do not apply to real oceans.

        You haven’t even tried to show that your results would apply to anything other than your own setup. Showing that requires careful analysis, but you have no analysis at all.

        Your results are in agreement with well known physics to the accuracy that may be expected. That same well known physics also tells us why they do not apply to oceans.

    • David Springer

      Wow. A lot of factually incorrect information regarding oceanic heat budget. Google ‘oceanic heat budget’. A great many sources will be found that show latent heat loss (evaporation) is the predominant mode of ocean cooling. Typically latent heat loss is about 70% of the total, radiative heat loss is 25%, and conductive is 5%.

      Someone upthread mentioned that joules received from back radiation become available for evaporation and that is exactly right. Evaporation is an awesomely efficient means of getting rid of heat which is why we sweat water instead of sand.

      Over land, where evaporation is restricted by lack of water and where radiative cooling is similarly constrained by back radiation, the only other path is conductive cooling. Conductive cooling is far slower than evaporative cooling, of course; it requires a temperature difference between the surface and the air in contact with the surface. The greater the difference, the greater the conductive cooling rate. So over land back radiation blocks the path of least resistance for heat to escape and forces it through the slower conductive avenue. The surface ends up at a higher temperature, which increases the conductive cooling rate, and equilibrium (or something close) is re-established.

      Bottom line – Konrad is correct and is supported by theory, observation, and experiment. Deal with it.

  40. I think those who dismiss the potential influence of biological processes in the oceans as too small to matter are underestimating their potential impact on atmospheric CO2.

    “A major portion of this primary production is recycled within the food web above the thermocline. The remaining fraction escapes from the upper ocean to the thermocline and below, where most of it is recycled and only a minor fraction is deposited in the sediments. It is the escaping fraction that effectively regulates the concentration of CO2 in the upper ocean. Since this layer is in contact with the atmosphere this quantity has important consequences for the global carbon cycle and climate change. In the absence of ocean primary production, surface total carbon dioxide CO2 would be 20% higher, and at equilibrium with such a surface ocean, the atmosphere would have a CO2 concentration close to double present levels (Sarmiento et al., 1990). Biological processes, therefore, have a profound impact on the carbon cycle yet this impact is very poorly understood and the subject of significant debate (Broecker 1991, Longhurst 1991, Banse 1991, Sarmiento 1991). ”

    CO2 in the Oceans

    • Let me try that link again

      http://www.mbari.org/staff/reiko/co2/primary.htm

    • Great point Steven, and nicely put. Your numbers are impressively large. People who think that the only role the oceans play is simple gas exchange are missing most of the biosphere. Moreover, given that these biological populations should oscillate, as all biological populations do, the idea of a steady state anywhere in the system is wrong headed.

      • More generally, I sometimes use a little rubber ball in presentations. The ratio of the ball’s mass to my (200 pound) mass is roughly that of the atmosphere to the ocean. I toss the ball up and down and make the metaphorical point that trying to understand the behavior of the atmosphere, without understanding the behavior of the ocean, is like trying to understand the behavior of the ball without noticing me. Other parameters besides mass are equally applicable, if not more so.

      • I tend to agree, if you want to understand the climate you really do have to understand the oceans. Not that this is a claim that I do of course. I just point and say “look at that”.

      • I agree that the ocean plays an important role in some aspects of atmospheric physics, especially global energy flows. For local energy flows, which are also of interest, oceans play less of a role in, say, the central US or central Brazil.

        Steady state however is an ok concept when applied appropriately. Obviously one doesn’t say the weather is in a steady state, but one might say that the climate is in a relatively steady state.

        At the molecular level air is not in a steady state because its constituents are moving randomly at 500 m/s. On a larger scale however the air in a sealed bottle can be said to be in a steady state, the motion of its molecules notwithstanding.

      • What aspect of climate do you feel is at relatively steady state? Everything oscillates, across a wide range of scales.

      • Everything oscillates, across a wide range of scales.

        For any m << n, the average temperature over m days oscillates more than over n days. The heart of climate skepticism is the denial of this elementary consequence of the law of large numbers.

        As an example, compare the variation in temperature from winter to summer with that from the average of 1900-1950 to the average of 1950-2000. If these two variations were the same we would not recognize seasonal variation as such, and we would be much more inclined to say that the weather had remained in a relatively steady state during the year.

        Climate skepticism requires its own logic in order to succeed. The logic used in science works to the detriment of climate skepticism. Only by insisting that the logic of climate skepticism is the correct one, and that all climate scientists are illogical except a few such as Richard Lindzen, Fred Singer, Roger Pielke, and Tim Ball, can climate skepticism make a compelling case for its position.

      • I fail to see the point you’re trying to make.
        It seems that, by your logic, it should be perfectly safe to grab hold of power lines – after all, the average voltage is zero.

      • Peter, that’s a really excellent point. (Why can’t Arfur, Sam NC, etc. come up with good points like that?)

        It would appear that life adapts to the prevailing routines. If the temperature fluctuates daily and annually, life figures out how to deal with it.

        If power lines kept falling around us and we kept grabbing hold of them, most of us would be killed. But depending on the voltage, a few might survive and breed survivors. However if the voltage were high enough no one would survive.

        One could take this as a model of the failure of life as we know it to evolve on Venus and Mars: the voltages on the power lines were too high, so to speak.

      • Peter317,

        Birds are safe grabbing power lines.

      • I don’t see the logic of climate skepticism as being very original at all. It is as basic as having a stock broker that constantly tells you to buy futures and you keep losing money. Buy futures in stratospheric cooling. Buy futures in tropospheric hot spots. Buy futures in 0.2C warming per decade. Buy futures in ocean heat content. Your broker may have perfectly good reasons why he thinks these futures will outperform the market and perhaps they eventually will, but unless he starts to show some reliability in making good predictions you are likely to move your money. The question is, how long does your broker have to be wrong before you switch?

      • Steven, a stock can have great fundamentals but still take ten years before it proves itself. Global temperature is subject to relatively unpredictable events on time scales up to 11 years, in particular solar cycles, El Ninos, etc.

        Beyond that I am unaware of any longer-term climate phenomena that we cannot predict on the nose, provided only that we have good estimates of preindustrial CO2 (currently pegged at 275-285 ppmv) and the time required for a change in CO2 to impact global temperature — 10-30 years is a popular range, with the IPCC liking the 20 year figure in its concept of “transient climate response.”

        As long as voters continue to elect people who think climate is what they experienced in the last decade, the world will continue to be run on the premise that climate is unpredictable and therefore not something it pays to worry about.

      • “For any m << n, the average temperature over m days oscillates more than over n days” is simply false. Skepticism has reasons, not its own logic.

      • One of the 2 errors of some scientists when it comes to climate.

        Climate is a system with competing feedbacks and little-known thresholds, subject to abrupt and non-linear change at all timescales. It is dynamically complex – chaotic and non-linear, temporally and spatially. If you don’t understand that in your bones – and in the scientific literature – e.g. http://www.biology.duke.edu/upe302/pdf%20files/jfr_nonlinear.pdf, http://www.nosams.whoi.edu/PDFs/…/tsonis-grl_newtheoryforclimateshifts.pdf – well, you can talk about a sceptic mindset, but you are closed to the superb and surprising reality.

        The other error is clouds – to keep pretending that cloud cover doesn’t change except as global warming feedback is incredible.

      • The other error is clouds – to keep pretending that cloud cover doesn’t change except as global warming feedback is incredible.

        Very compelling point, CH. As I look up at the sky I see huge changes in cloud cover from day to day.

        But suppose all 7 billion of us look up at the sky and average the cloud cover we see. Do you have some reason for supposing that this average fluctuates at all? And if so, by how much, would you say?

      • All climate influencing factors fluctuate and oscillate. It’s unbelievable that someone can think that they don’t!

      • Fluctuations averaged over n years are only 1/sqrt(n) the magnitude of those averaged over 1 year. So while it’s true that “all factors oscillate,” it is not true that they all oscillate to the same degree.

        Small oscillations are easier to manage than large. Admittedly it is inconvenient to have to wait n^2 times as long to reduce an oscillation by a factor of n, but only if the wait time were exponential in the reduction rather than quadratic would it become completely infeasible.
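        For what it’s worth, the 1/sqrt(n) scaling is easy to check numerically, under the simplifying assumption of independent Gaussian fluctuations (real climate series are autocorrelated and average down more slowly):

```python
import random
import statistics

random.seed(42)

def sd_of_mean(n, trials=2000):
    """Standard deviation of the mean of n independent unit-variance draws."""
    means = [statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# For independent samples the ratio should be near sqrt(100) = 10.
ratio = sd_of_mean(1) / sd_of_mean(100)
```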

      • So you use techniques that apply disorder to the system.

      • The papers are about chaos. I really don’t know what you imagine order and disorder is.

        ‘In physics, the terms order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system.
        In condensed matter physics, systems typically are ordered at low temperatures; upon heating, they undergo one or several phase transitions into less ordered states. Examples for such an order-disorder transition are:

        – the melting of ice: solid-liquid transition, loss of crystalline order;
        – the demagnetization of iron by heating above the Curie temperature: ferromagnetic-paramagnetic transition, loss of magnetic order.’

      • Disorder is randomness. A gas is disordered. A gas can be modeled by a probability distribution such as Maxwell-Boltzmann statistics. Put the statements together and you can get the picture.

      • ‘Disorder is randomness. A gas is disordered. A gas can be modeled by a probability distribution such as Maxwell-Boltzmann statistics. Put the statements together and you can get the picture.’

        Too many pointless references and sophistic arguments. Water vapour is more disordered than ice. Maxwell-Boltzmann distributions describe particle speeds in gases. There is no application to climate systems in any of what you say.

        It is a little sad that you are so obsessive – but that is not my problem. Bye.

      • It is a little sad that you are so obsessive – but that is not my problem. Bye.

        You used the definitions of order and disorder from condensed matter physics. Condensed phases are not the only states of matter, which is why I pointed out that gases are also disordered. Why you think that me pointing this out is obsessive, I have no idea.

      • Robert,

        2nd link is bad.

  41. Another point about residence time vs. network flows of the kind found in the carbon cycle, hydrological cycle, etc. just occurred to me.

    In a sufficiently simple cycle the concept of residence time might serve as a proxy for the flows.

    But in a cycle with a number of flows that interact with each other in interesting ways, one may be able to define a notion of residence time in terms of the net effect of those flows, yet be unable to infer those flows from the residence time. If the point of the notion of “residence time” is to give a handle on system behavior, it doesn’t help much for complex systems.

    As a case in point, suppose you have an inflow of 2 gnus/fortnight and two outflows each of 1 gnu/fortnight. This is a system in equilibrium.

    Suppose further that you know that shutting off the inflow automatically shuts off both outflows.

    One can then infer that shutting off the inflow leaves the equilibrium undisturbed.

    But now suppose that shutting off the inflow only shuts off outflow A leaving outflow B running.

    Now the system is out of equilibrium and losing 1 gnu/fortnight.

    The concept of residence time could be defined in terms of the behavior of these flows. However their behavior can’t be inferred from residence time.

    This sort of thing can happen with the carbon cycle because it consists of a number of flows each with its own behavior. CO2 exchange at the ocean-atmosphere interface is governed to a considerable degree by Le Chatelier’s principle, which responds immediately to changes in CO2 level. Vegetation on the other hand consumes CO2 in proportion to its biomass, and while I’m no biologist I would not expect its biomass to respond instantly to a change in CO2.

    Rather than trying to come up with a notion of residence time, it seems to me to be better to understand how the flows interact with each other when trying to predict the result of changing one or more of the flows. One can then define some notion of residence time based on the net outcome if needed.

    What is not possible, except in very simple situations, is to start from residence time and infer the various flows.
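    The gnu example can be put in code. A minimal sketch (the initial stock of 10 gnus is an assumed number; everything else follows the comment): both scenarios start from the same stock and total outflow, hence the same residence time, yet they diverge once the inflow is shut off.

```python
# Reservoir with inflow 2 gnus/fortnight and two outflows of 1 gnu/fortnight.
# Residence time = stock / total outflow is identical in both scenarios,
# but it cannot tell us what happens when the inflow is shut off.

def run(stock, inflow, outflows, fortnights):
    """Step the reservoir forward one fortnight at a time."""
    for _ in range(fortnights):
        stock += inflow - sum(outflows)
    return stock

stock0 = 10.0                 # assumed initial stock (gnus)
residence = stock0 / 2.0      # 5 fortnights, the same in both scenarios

# Scenario A: shutting the inflow also shuts both outflows -> equilibrium.
after_a = run(stock0, 0.0, [0.0, 0.0], fortnights=10)

# Scenario B: shutting the inflow shuts only outflow A -> lose 1 gnu/fortnight.
after_b = run(stock0, 0.0, [1.0], fortnights=10)
```

    After ten fortnights scenario A still holds 10 gnus while scenario B is empty, even though the pre-shutdown residence time was 5 fortnights in both cases.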

    • Le Chatelier describes chemical equilibria.

      There are processes involving both biology and secondary chemical reaction that lead to carbon depositing on the seafloor in both organic and carbonate forms.

      So there is a flow of carbon from the atmosphere to long term sequestration at the bottom of oceans – rather than an atmosphere/ocean equilibrium.

      One more thing – carbon is not generally a limiting nutrient and a biomass response to elevated carbon in the atmosphere may not be huge.

      • Right, Le Chatelier’s principle governs how adding CO2 to the carbonate-laden water dissolves the carbonate to form bicarbonate, a reversible reaction. There are also other homeostatic processes at work, whence my “to a considerable degree.” You’re right about carbon flowing down, but it’s by no means a one-way street.

        One more thing – carbon is not generally a limiting nutrient and a biomass response to elevated carbon in the atmosphere may not be huge.

        In fact elevated CO2 can even reduce net primary production, as I pointed out back in December at

        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-25854

        “Another beneficiary is plants. Warming along with increased precipitation and nitrogen deposition are all beneficial consequences of increased CO2 for plant life. Since plants utilize CO2 in photosynthesis one might also expect plants to benefit directly from the increased CO2. However experiments at Stanford’s Jasper Ridge preserve over the past decade, described in the very interesting paper Grassland Responses to Global Environmental Changes Suppressed by Elevated CO2, indicate that CO2 in conjunction with the above side effects of CO2 suppresses root allocation thereby reducing net primary production. One possible cause is that the minerals in the soil are already maxed out.”

      • Carbon sinks to the ocean floor – some of this returns in ocean upwelling.

        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-104798

      • “One more thing – carbon is not generally a limiting nutrient and a biomass response to elevated carbon in the atmosphere may not be huge.”
        Plants on land are limited by lack of CO2. Plants will grow faster if they have 1000 ppm of CO2. Commercial greenhouses add CO2 so it’s around 1000 ppm. And many studies show enriched CO2 in natural settings increases growth.

        I am aware that most of the ocean lacks needed minerals, and therefore more CO2 would only deplete these minerals to a greater extent. So I understand how increased CO2 levels wouldn’t make much difference for most of the ocean; but in coastal areas where there are abundant nutrients, is there ever a significant lack of CO2 that inhibits growth?

      • Your 7:28 pm comment crossed paths with my 7:27 pm one, which bears on your point.

        And many studies show enriched CO2 in natural settings increase growth.

        Indeed. But what those studies don’t do is duplicate the conditions resulting from global warming, which include higher temperatures, humidity, and other factors. These too benefit plant growth; the surprising result I referred to above, however, is that increased CO2 in conjunction with these other concomitants of global warming turns out to actually reduce net primary production.

      • ‘Simulated global changes, including warming, increased precipitation, and nitrogen deposition, alone and in concert, increased net primary production (NPP) in the third year of ecosystem-scale manipulations in a California annual grassland. Elevated carbon dioxide also increased NPP, but only as a single-factor treatment. Across all multifactor manipulations, elevated carbon dioxide suppressed root allocation, decreasing the positive effects of increased temperature, precipitation, and nitrogen deposition on NPP. The NPP responses to interacting global changes differed greatly from simple combinations of single-factor responses. These findings indicate the importance of a multifactor experimental approach to understanding ecosystem responses to global change.’

        I think you will find that NPP doesn’t decrease in any of the treatments – ‘decreasing the positive effects of…’ As I said – I wouldn’t assume a great change in terrestrial vegetative biomass.

      • I think you will find that NPP doesn’t decrease in any of the treatments

        Agreed. The point as I understand it is that when global warming raises those other factors (warmth, moisture, nitrogen), less CO2 can be even more beneficial. Even if you don’t reduce CO2 there is a net benefit from global warming (your point), but reducing CO2 when that stage is reached further increases that benefit.

        Gotta go, I have to answer a one-word email from Dan Quayle: “What?” ;)

      • Drawing too many conclusions from a single experiment.

      • Good point. Do you have an experiment demonstrating the opposite? I can only work with what has been reported to date, however slim the evidence.

      • The need to examine the strength of evidence remains – including when making generalisations from meso scale biological experiments.

      • Terrestrial plants in the wild typically are nutrient and water limited. That’s why we add nutrients and water to our gardens. Plants decrease stomata size and density in response to elevated carbon dioxide – limiting gas exchange but also water loss. Not necessarily a good thing for ecosystem functions.

      • Plants grow better with less water in higher CO2. How can that not be better?

      • I’m not clear that there is CO2 fertilisation – where other factors are limiting. There is an advantage for the plant in conserving water. On the ecosystem level – what does that imply for terrestrial hydrology? How does this affect recruitment – i.e. seed germination with reduced local moisture? How is surface water affected in, for instance, rainforests – and what effect does that have on dependent ecologies?

        Are we contributing to CO2 in the atmosphere? Well, it is at levels not seen for a long time, and we are emitting 30 billion metric tonnes a year – likely, all things being equal, to increase 10-fold this century.

        Uncertainty around a range of issues means that we are in unknown territory.

      • “Terrestrial plants in the wild typically are nutrient and water limited.”
        A swamp isn’t :)
        Desert areas are water limited unless near a river or other source of water. And desert areas are large portion of the earth’s land area.
        If you are in a rainforest area – tropical or temperate – the shortage of water can be seasonal but most of the time isn’t a factor. In such areas the shortage is sunlight – blocked by other plants. Plants are also engaged in various types of chemical warfare with other plants [and animals/pests] which inhibit plant growth.

        “That’s why we add nutrients and water to our gardens.”
        We also weed gardens.
        And we are generally growing a domesticated plant- genetically selected to increase yield- whether yield is type of flower, or fruit, or foliage. These domesticated plants generally wouldn’t survive well in the wild.
        These plants are also unlikely to be native to the particular region they are being grown in.

        “Plants decrease stomata size and density in response to elevated carbon dioxide – limiting gas exchange but also water loss. Not necessarily a good thing for ecosystem functions.”

        ‘Good for an ecosystem’ is like interfering with another country, or like any governmental economic stimulus program.
        Increasing CO2 levels will have an effect upon the environment, and if any type of change is “not good”, then changing CO2 levels is “not good”.

      • But all change is not bad.

      • It’s quite likely that agricultural plants, many of which are annuals, can be selectively bred to suit higher CO2 levels. That may well have happened unwittingly through the selection of the most successful varieties.

        But what about other plants? The generational cycle of a tree species may be measured in decades or even centuries. Yes, they will evolve to cope with changing CO2 levels, in time, but can they do it fast enough?

        I don’t believe there is any evidence to suggest they can.

      • They respond quickly to CO2 changes by changing stomata size and density – http://www.scientistlive.com/European-Science-News/Biotechnology/How_plants_adapt_to_climate/21329/

        This is well known.

      • It’s not only an issue of whether some plants can adapt, but also that some plants will adapt better than others. That might mean a thriving species will out-compete and eliminate one less able to thrive in the new conditions. Much the same issue exists with CO2 fertilization itself – some plants will fare better than others.

        This isn’t some local change, of course; CO2 is being elevated at an unprecedented rate planet-wide. When you factor in the ocean acidity changes, the plant fertilization changes and of course the radiative effects of the CO2 rise, then life may be in for quite a serious shakeup over the next 100 years.

        Future generations may very well look back and wonder how on earth we allowed such a major perturbation to the carbon cycle take place.

      • CO2 fertilisation is a myth outside of controlled actual greenhouse conditions. Plants reduce stomata size and density when they can, restricting gas exchange and also reducing water loss.

        But you are barking up the wrong tree – so to speak. The contribution of CO2 to ‘recent warming’ is minimal, and the world is cooling for a decade or three at least as the cool Pacific mode intensifies. I would worry more about the hydrological changes from smaller stomata, since CO2 fertilisation is a myth, and ocean pH depends much more on biological activity and deep ocean upwelling than on anything we have seen from atmospheric concentrations of CO2.

        We are not sure how much we are contributing to changes in the carbon cycle – and are not inclined to worry about it too much. There are bigger problems in the world.

        What we need is 3%/year growth in energy and food for the rest of the century – about a 15 fold growth. Figure out how we can do that while reducing emissions and we may have some basis for discussion.

        Your so called science is BS – and then you appeal to the authority of the future? Please.

      • The world is definitely not cooling.

        In fact there’s no slowdown in long-term global warming in recent years if ENSO and the solar cycle are accounted for. UAH, after all, had 2010 as joint warmest on record despite the El Niño being weaker than 1998’s and the recent solar minimum. And the recent La Niña bottomed out very high in UAH; temperature now, during ENSO-neutral conditions, seems to be at levels associated with El Niños in the past!

        The PDO has been dropping for 30 years. Related to this ENSO indexes also show a downward trend since around 1980.

        In my book that’s a cooling impact if anything.

        And yet the world has warmed over that period.
        http://www.woodfortrees.org/plot/jisao-pdo/plot/jisao-pdo/from:1980/trend
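        “Accounting for” ENSO and the solar cycle usually means regressing those factors out of the temperature series. A minimal sketch of the technique on synthetic data (every series and coefficient below is invented for illustration; this is not real temperature, ENSO, or solar data):

```python
import math
import random

# Build a fake monthly "temperature" series: a known linear trend plus
# contributions from fake ENSO and solar indices plus noise, then recover
# the trend by multiple regression on [constant, time, ENSO, solar].
random.seed(1)
n = 360  # months
t = list(range(n))
enso = [math.sin(2 * math.pi * i / 45.0) for i in t]    # invented ENSO index
solar = [math.sin(2 * math.pi * i / 132.0) for i in t]  # invented 11-yr cycle
true_trend = 0.0015  # degC per month
temp = [true_trend * i + 0.3 * enso[i] + 0.1 * solar[i]
        + random.gauss(0, 0.1) for i in t]

def ols(y, cols):
    """Least squares via normal equations and Gaussian elimination."""
    k = len(cols)
    a = [[sum(cols[p][i] * cols[q][i] for i in range(len(y)))
          for q in range(k)] for p in range(k)]
    b = [sum(cols[p][i] * y[i] for i in range(len(y))) for p in range(k)]
    for p in range(k):                       # forward elimination
        for q in range(p + 1, k):
            f = a[q][p] / a[p][p]
            for r in range(k):
                a[q][r] -= f * a[p][r]
            b[q] -= f * b[p]
    x = [0.0] * k
    for p in reversed(range(k)):             # back substitution
        x[p] = (b[p] - sum(a[p][q] * x[q] for q in range(p + 1, k))) / a[p][p]
    return x

coef = ols(temp, [[1.0] * n, t, enso, solar])
fitted_trend = coef[1]   # recovers ~0.0015 degC/month despite ENSO/solar noise
```

        The point of the technique (Foster & Rahmstorf-style) is only that the fitted trend is estimated jointly with the ENSO and solar terms; whether the real-world regressors capture the real variability is exactly what the argument above is about.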

      • lolwot

        In fact there’s no slowdown in longterm global warming in recent years if ENSO and the solar cycle are accounted for.

        Nassim Taleb has an expression for this rationalization, which occurs when a prediction has turned out to be totally false (as was the IPCC forecast of 0.2C warming per decade).

        He refers to it as the “my prediction was correct except for…” rationalization (add in anything convenient after “except for”).

        Max

      • CH,

        “The contribution of CO2 to ‘recent warming’ is minimal…”
        Do you have any credible scientific reference to support this statement?
        And you’re accusing others of BS science?

      • Of course I have peer reviewed scientific references for this. Most ‘recent warming’ happened in 1976/77 and 1997/98. These are ENSO ‘dragon-kings’ as defined by Sornette 2009 – e.g Tsonis et al 2007, Swanson et al 2009.

        But regardless of the reasons for such large variability at those times – just look at the numbers in the temperature record and then at the Wolter MEI – http://www.esrl.noaa.gov/psd/enso/mei/ – to see what was happening with ENSO.

        Then look at e.g. – Wong et al 2006 to see what clouds were doing.

        Or NASA – http://isccp.giss.nasa.gov/projects/browse_fc.html.

        ‘In the first row, the slow increase of global upwelling LW flux at TOA from the 1980’s to the 1990’s, which is found mostly in lower latitudes, is confirmed by the ERBE-CERES records. ‘ Relative cooling in the IR!!!!!

        ‘The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’ Relative warming in the SW!!!!

        ‘The overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period. The most obvious explanation is the associated changes in cloudiness during this period.’

        But it cooled in the IR!!!!!

        Now if you can’t believe NASA/GISS – who are you going to believe.

        It is a nonsense to think that clouds don’t change – especially in relation to SST changes and especially in the tropics and sub-tropics.

        There is much more on a website accessible by clicking my title. The only thing I would add is something on the longer term variability of ENSO – http://www.nonlin-processes-geophys.net/16/453/2009/npg-16-453-2009.html and http://www.clim-past.net/6/525/2010/cp-6-525-2010.pdf

        As opposed to this we have an assertion that the PDO has been ‘dropping for 30 years’ and that ENSO has been trending down ‘since about 1980’. The PDO changes on about 20 to 40 year periods from a cool mode to a warm mode. It is associated with a change in the frequency and intensity of ENSO.

        ‘The PDO was named by Mantua et al (1997), who demonstrated a connection between salmon abundance and SST in the northern Pacific. SST varied over 20 to 30 year cycles in phase with changes in salmon abundance. SST were cooler than average for 20 to 30 years – a cool mode of the PDO, and then warmer than average over 20 to 30 years, a warm mode. A warm mode PDO is associated with reduced abundance of coho and chinook salmon in the Pacific Northwest, while a cool mode PDO is linked to above average abundance of these fish. The biology responds to cold but nutrient rich sub surface water upwelling in the north eastern Pacific. The abundance of salmon was greatest in the period between the mid 1940’s and mid 1970’s, least in the period 1976 to 1998 and has, in recent years, rebounded to values not seen since the 1970s.’ (JISAO – Climate Impacts Group) – http://cses.washington.edu/cig/pnwc/pnwsalmon.shtml

        Verdon and Franks (2006) used ‘proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the PDO and ENSO. During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’

        Verdon, D. and Franks, S. (2006), Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records, Geophysical Research Letters 33: 10.1029/2005GL025052.

        So we are looking at another decade or 3 of intense and frequent La Niña. If the suggestion that solar UV drift is involved in modulating ENSO variability is correct – then we may be looking at increased La Niña frequency in the longer term – a centennial and millennial variability with well known associated cloud feedbacks.

        One potential cause of Pacific Ocean variability is shown by Lockwood et al (2010). ‘During the descent into the recent exceptionally low solar minimum, observations have revealed a larger change in solar UV emissions than seen at the same phase of previous solar cycles. This is particularly true at wavelengths responsible for stratospheric ozone production and heating. This implies that ‘top-down’ solar modulation could be a larger factor in long-term tropospheric change than previously believed, many climate models allowing only for the ‘bottom-up’ effect of the less-variable visible and infrared solar emissions. We present evidence for long-term drift in solar UV irradiance, which is not found in its commonly used proxies.’

        Judith Lean (2008) commented that ‘ongoing studies are beginning to decipher the empirical Sun-climate connections as a combination of responses to direct solar heating of the surface and lower atmosphere, and indirect heating via solar UV irradiance impacts on the ozone layer and middle atmospheric, with subsequent communication to the surface and climate. The associated physical pathways appear to involve the modulation of existing dynamical and circulation atmosphere-ocean couplings, including the ENSO and the Quasi-Biennial Oscillation. Comparisons of the empirical results with model simulations suggest that models are deficient in accounting for these pathways.’

        Lockwood, M., Bell, C., Woollings, T., Harrison, R., Gray. L. and Haigh, J. (2010), Top-down solar modulation of climate: evidence for centennial-scale change, Environ. Res. Lett. 5 (July-September 2010) 034008 doi:10.1088/1748-9326/5/3/034008

        Lean, J., (2008) How Variable Is the Sun, and What Are the Links Between This Variability and Climate?, Search and Discovery Article #110055

        ‘Stay tuned for the next update (by September 10th, probably earlier) to see where the MEI will be heading next. La Niña conditions have at least briefly expired in the MEI sense, making ENSO-neutral conditions the safest bet for the next few months. However, a relapse into La Niña conditions is not at all off the table, based on the reasoning I gave in September 2010 – big La Niña events have a strong tendency to re-emerge after ‘taking time off’ during northern hemispheric summer, as last seen in 2008. I believe the odds for this are still better than 50/50. If history ends up repeating itself, the return of La Niña should happen within about two to three months.’ Claus Wolter

      • look for la nina, its coming, we’ve seen it coming since June.

      • CH, your TOA LW flux looks suspiciously like the surface warming effect seen in the window region. If you look at your Zhang et al. reference, you will see TOA LW is more or less flat. I didn’t check your other figures for you.

      • I guess your link was also to a NASA site, but this one seems to supersede the TOA LW plot, being more recent. They do have contradictory data on this.

        http://isccp.giss.nasa.gov/projects/flux.html

      • Hi Judith,

        Yes – upwelling in the Humboldt Current strongly over the SH winter – and the state of the SOI – seemed to me to suggest a strong possibility.

        La Niña could be seen in the evolving cold tongue in the equatorial Pacific from early August and is now kicking off strongly.

        http://www.osdpd.noaa.gov/data/sst/anomaly/2011/anomnight.8.25.2011.gif

        It started showing up in SST in the central Pacific around the middle of the month.

        http://stateoftheocean.osmc.noaa.gov/all/

        Cheers

      • Chief, both ECMWF and CFS (NOAA) have been predicting this for several months

      • Judith,

        As of the middle of August and for September to October the CFS and the GloSea models indicate neutral to cool conditions while the other models suggested neutral conditions.

        Summary of models:

        http://www.bom.gov.au/climate/ahead/ENSO-summary.shtml

        Unless we have different information – there are links to the sites. I tend to discount ENSO models at any rate.

        Cheers

      • Jim D,

        ‘CH, your TOA LW flux looks suspiciously like the surface warming effect seen in the window region. If you look at your Zhang et al. reference, you will see TOA LW is more or less flat. I didn’t check your other figures for you.’

        I don’t have a Zhang et al reference – I have a Wong et al reference that talks about decadal variability in ERBS TOA flux. More IR is emitted because the planet warmed? But that would only result in equilibrium – not ‘relative’ cooling.

        Jim D | August 27, 2011 at 7:20 pm |
        I guess your link was also to a NASA site, but this one seems to supersede the TOA LW plot, being more recent. They do have contradictory data on this.

        http://isccp.giss.nasa.gov/projects/flux.html

        Oh Jim – what you have is exactly the same ISCCP-FD data expressed as net flux. By convention – showing a positive slope as the planet gaining energy and a negative as the planet losing energy. So a relative warming in SW and a relative cooling in the IR.

      • CH, their net TOA LW has no slope, and certainly nothing like the one in your NASA link that was older. Both refer to Zhang et al.

      • Jim D

        I see Zhang et al was the NASA reference for the ISCCP-FD dataset. Nonetheless – it is the same ISCCP-FD data I referred to. Although net rather than upward flux as I said.

        You have to take into account little things like the scale – try saving the graph and stretching the y axis and you might be able to see a slope. Maybe not.

      • CH, I am looking at the brown net LW TOA line on the second graph in the link you list last above. This looks very flat with both upward and downward excursions in the period. The end value is close to the beginning one (near 5). And we should expect TOA LW to be constant unless it is opposed by a significant solar or albedo change in the net TOA SW.

      • The comparison in the tropics – ignore the AVHRR

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-4-4-1.html

        The values of IR are influenced by cloud of course and in no way should be expected to be constant. Relative warming in the SW and relative cooling in the LW – from 2 platforms in the tropics.

      • CH, You say “Most ‘recent warming’ happened in 1976/77 and 1997/98.”

        No-one else is saying that.

        Maybe you’d like to take look at this graph and explain why you believe this to be the case?

        http://farm6.static.flickr.com/5020/5419352107_01659a31a4.jpg

      • tt,

        Most recent warming happened in 1976/77 and 1997/98 – it is blindingly obvious. How can it get any simpler? Exclude those years and get the residual trend between 1978 and 1996.

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        At least catch up with realclimate – they can give you some better arguments for minimising the problem of non warming.

      • Bob,

        Take another look at the graph. I’m not in disagreement with the guys at Realclimate. My graph looks just the same as theirs. As the article you linked to suggested:

        “anomalous behavior is always in the eye of the beholder.”

        If you look hard enough, and tell yourself often enough that it’s cooling, I suppose it might be possible to fool yourself.

      • Chief,
        You mention that CO2 increases the heat content of the atmosphere.
        This is a summary of heat content of various atmospheric components:
        http://en.wikipedia.org/wiki/Heat_capacity
        I do not see much difference in heat capacity between CO2 and O2 that seems significant.
        What am I missing about what you said?

      • This is a summary of heat content of various atmospheric components:
        http://en.wikipedia.org/wiki/Heat_capacity

        Hunter, heat content is in joules, heat capacity is in joules per degree.

        One can raise the heat content of a substance without raising its heat capacity. Naturally the substance’s temperature increases when you raise its heat content.

        To further confuse matters a distinction is drawn between extensive and intensive heat capacity, the former being the above and the latter being in joules per kilogram per degree. For the atmosphere as a whole one wants the former, for each kilogram of atmosphere one wants the latter.
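        The distinction is easy to make concrete. A minimal sketch (the parcel mass and heat input are assumed numbers for illustration):

```python
# Heat capacity (J/K) vs heat content change (J): adding heat raises the
# temperature of a parcel without changing its heat capacity.
c_air = 1005.0        # specific (intensive) heat capacity of air, J/(kg*K)
m = 2.0               # parcel mass, kg (assumed)
C = c_air * m         # extensive heat capacity of this parcel, J/K

q = 4020.0            # heat added, J (a change in heat content, assumed)
dT = q / C            # resulting temperature rise, K
```

        Here adding 4020 J to a 2 kg parcel raises its temperature by 2 K; the heat capacity C is unchanged throughout.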

      • Robert,

        “The contribution of CO2 to ‘recent warming’ is minimal.” Why? What makes you think there is minimal CO2 contribution to ‘recent warming’? CO2 helps to cool the N2 and O2 (i.e. air) in the atmosphere, and the Earth, by radiating IR energy to space very effectively; it gains energy from collisions with O2 and N2 and from absorption of the Earth’s surface radiation. It cools the atmosphere and cools the Earth. The more ppm CO2 in the atmosphere, the cooler is the atmosphere and the cooler the Earth.

      • I don’t see why IR would cool the atmosphere. When absorbed by CO2 or water vapour, the molecules move to a higher energy state – they vibrate, transferring energy to other molecules in the atmosphere. Occasionally they emit a photon and move to a lower energy state.

        The higher energies are related to heat content – more CO2 in the atmosphere means that more heat is retained in the atmosphere.

      • CO2 helps to cool the N2 and O2 (i.e. air) in the atmosphere and the Earth with very effective IR energy radiated to space

        How can this help to cool? By your logic, the radiation is already on the way out the door, so what would it matter if CO2 was there or not?

        and gain energy from collisions with O2 and N2 and absorption from the Earth surface radiations.

        Huh? The gases are locally isothermal — big deal.

        And then you say that CO2 does absorb radiation on the way out, which contradicts the first point.

      • ” The more ppm CO2 in the atmosphere, the cooler is the atmosphere and the cooler the Earth.”

        It strikes me that you’ve actually understood how the Greenhouse Effect works, and you’ve had a great idea!

        If IR radiation in the outer atmosphere radiates heat out in to space then surely we can argue that it cools the Earth!

        So, the question is: are you really that obtuse? I’d say not. You’ve just spotted an opportunity to come up with another denialist argument! Well done, maybe you’d like to submit it to http://www.skepticalscience.com !

      • Robert,

        ” When absorbed by CO2 or water vapour – the molecules have a higher energy state – they vibrate transferring energy to other molecules in the atmosphere. Occasionally they emit a photon and move to a lower energy state.”
        Yes, we agree on the above.

        “The higher energies are related to heat content – more CO2 in the atmosphere means that more heat is retained in the atmosphere.” CO2 does not retain heat; it immediately releases energy after gaining collisional energy from the N2 and O2. AGWers have shown that CO2 and other GHGs are more efficient IR radiators than other gases.

        WebHubTelescope,
        You did not digest it. Why would the 1st point contradict the 2nd point, or vice versa?

        tempterrain,
        “So, the question is: are you really that obtuse? I’d say not. You’ve just spotted an opportunity to come up with another denialist argument! Well done, maybe you’d like to submit it to http://www.skepticalscience.com
        No, I don’t waste time in other blogs especially when you say “are you really that obtuse”. That makes the blog there low. Dr Curry’s blog is the only site worth my time.

      • still pushing this horsesht then?

      • tempterrain,

        do you deny that GHGs, including CO2, can cool their parcel, which includes N2 and other gases that aren’t GHGs? That this is how the energy is radiated from the planet once it is convected upwards (which is sped up by the GHGs warming the parcel)?

      • There are two opposing effects of GHGs. They increase absorption of photons from the earth, which is a warming effect, and they emit photons to space and towards the earth, which is a cooling effect in the atmosphere. The net effect turns out to be a cooling one in the troposphere, but the emission towards earth is a warming effect on the surface. The tropospheric cooling balances the warming effect of convection, which provides heat from the surface more efficiently than radiation.
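        The surface-warming side of this balance can be illustrated with the standard one-layer grey-atmosphere model – a textbook idealization, not anything derived in this thread; the emissivity and solar flux below are assumed round numbers:

```python
# One-layer grey-atmosphere sketch: the layer absorbs a fraction eps of
# surface longwave and emits both up and down. Surface energy balance gives
#   Ts^4 = S / (sigma * (1 - eps/2))
# so any eps > 0 leaves the surface warmer than the airless case.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 240.0                  # absorbed solar flux, W/m^2 (assumed round number)

def surface_temp(eps):
    """Equilibrium surface temperature for layer emissivity eps in [0, 1]."""
    return (S / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

t_bare = surface_temp(0.0)   # no absorbing layer: about 255 K
t_grey = surface_temp(0.78)  # partially absorbing layer: about 289 K
```

        Note the layer itself sits colder than the surface (Ta^4 = Ts^4/2 when eps = 1), which is consistent with the point that the net radiative effect within the troposphere is a cooling one even while the downward emission keeps the surface warmer.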

      • Jim D,
        “There are two opposing effects of GHGs. They increase absorption of photons from the earth, which is a warming effect, and they emit photons to space and towards the earth, which is a cooling effect in the atmosphere. The net effect turns out to be a cooling one in the troposphere” Basically, no problem but you can express it better, like Alex did.

        “but the emission towards earth is a warming effect on the surface. ”
        This statement does not comply with the Stefan-Boltzmann radiation law.

        “The tropospheric cooling balances the warming effect of convection that provides heat from the surface more efficiently than radiation.”
        Basically, no problem, but you can express it a lot better without the use of “the tropospheric cooling balances”.

      • Sam NC,

        There is no violation of any law when a molecule of GH gas absorbs an IR photon from the ground and then re-radiates it in a random direction.

        If the molecule “knew” which way was up, and which way was down, and therefore “knew” to re-radiate the photon out into space, then you might just have some sort of a point.

        I’m sure the idea of GH gas molecules being aware of what they are supposed to do must violate some law. I’m not sure which one, though. Maybe I’m the first one to state this, and it can from now on be called “Martin’s Law”!

      • tempterrain,

        “There is no violation of any law when a molecule of GH gas absorbs from the ground. and then re-radiates a IR photon, in a random direction. ”

        This is correct. The incorrect part is the claim that lower-energy photons from CO2 heat up the higher-energy, higher-temperature ground surface, which would violate the radiation law.

      • SamNC, the downward photons no more heat the surface up than a blanket (not an electric one) heats you up at night. It is better to say it keeps the surface warm. The heat comes by other routes (solar). I know I said it has a “warming effect”, but more accurately it has a keeping-it-warm effect on the surface which is trickier to phrase well.

      • Jim D,

        No. Downward photons from CO2 can at best reduce the Earth’s surface cooling, not warm it. This is the serious mistake of the AGWers: thinking that the CO2 photons radiated down to the Earth can heat up the Earth.

      • SamNC, good, you got the point I was making about the blanket effect.

      • JimD and Sam NC,

        We are all in agreement then? Blimey that’s unusual!

        I like the distinction between “keeping-it-warm” and ‘warming”. That’s a good way of putting it.

      • HaHaHa,
        No, you’ve forgotten that the CO2 is now at a lower energy state, as it has released IR energy to space and emitted photons in all other directions. The CO2 at a lower energy state increases cooling of the Earth’s surface as well as of the atmosphere. Got it?

      • Sam NC,

        There was I thinking that you might have actually learned something but I do feel you should now be demoted to blogging on Wattsupwiththat or one of those crazy websites like “I love my CO2”!

        It’s quite simple really. CO2 molecules in the atmosphere pick up some IR photons from the ground which would have been heading out into space had they not been intercepted.

        Then they re-emit the photons but only some of them are sent off in the direction of space. Some get re-radiated back towards the ground. The molecules end up in exactly the same condition as they started. But the ground ends up warmer than it would otherwise be which causes it to emit more IR photons than it would do if it were cooler. Does that make sense so far?

        A new balance is therefore set up and the number of IR photons going off into space is the same as before, but with the ground warmer than before. Have I said that before? The ground is warmer than it would be without those friendly GH gas molecules. So some of them are definitely a good thing.

        But I’m sure you know that you can have too much of a good thing. Goldilocks and all good cooks understand that. Not too little. Not too much. Everything has to be just right.

      • temterrain,

        I stopped visiting any climate discussion, including Wattsupwiththat or “I love my CO2″, about 9 months ago. I visit and comment only here, based on my experience and knowledge of thermodynamics and radiation. You may like to know that I am not influenced by any climate websites; it is all my own thinking. I am ready to believe either camp, but so far none is persuasive enough. Yes, I learn a lot here, especially as this is not my field of expertise. Chief Hydrologist is a good example among them.

        “Its quite simple really. CO2 molecules in the atmopshere pick up some IR photons from the ground which would have been heading out into space had they not been intercepted.” OK.

        “Then they re-emit the photons but only some of them are sent off in the direction of space. Some get re-radiated back towards the ground.” OK

        “The molecules end up in exactly the same condition as they started.” This is where I disagree. The molecules have already emitted photon energy and need energy from outside sources (such as collisions with N2 and O2 at a higher energy state, or stronger IR photons from the Earth) to restore the condition they started in. Otherwise they become perpetual photon energy suppliers, which cannot be true in any universe.

        “But the ground ends up warmer than it would otherwise be which causes it to emit more IR photons than it would do if it were cooler. Does that make sense so far?” No. The 1st sentence is confusing especially with those ‘it’s. I am sure Alex would have expressed it a lot better and clearer.

        I think you need a lot more efforts in the above explanations to be persuasive.

      • Sam NC,

        It’s like teaching a slow learning child.

        A GH molecule has energy X and an IR photon has energy Y. The photon is absorbed by the molecule, so now the molecule has energy X+Y. So, for a time the molecule does have some extra energy and therefore the atmosphere will be warmer.

        The photon is emitted from the molecule with energy Y towards the ground, where it is absorbed. The molecule now has energy X, just like it had before, so there is no need for it to take up energy from anywhere else. Now the ground has extra energy Y.

        So this makes the ground warmer than it would be otherwise, until the IR photon is radiated back again.
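        The photon bookkeeping described above is essentially the textbook one-layer grey-atmosphere model, and it can be checked for energy conservation directly. A minimal sketch only: the absorptivity value eps below is an illustrative assumption, not a measurement.

```python
# One-layer "grey atmosphere" sketch: the layer absorbs a fraction eps of
# surface IR and re-emits half upward, half downward. Energy is conserved;
# the surface steady state works out to Ts = Te * (2 / (2 - eps))**0.25.

def surface_temperature(Te, eps):
    """Steady-state surface temperature for a one-layer grey atmosphere."""
    return Te * (2.0 / (2.0 - eps)) ** 0.25

Te = 255.0   # Earth's effective emission temperature, K (standard value)
eps = 0.78   # illustrative IR absorptivity of the layer (assumed, not measured)

print(round(surface_temperature(Te, eps), 1))  # ~288.5 K, near the observed mean
print(surface_temperature(Te, 0.0))            # 255.0 K with no absorbing layer
```

        With eps = 0 the surface sits at the bare emission temperature; raising eps warms the surface without the layer "creating" any energy, which is the point under dispute.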

      • tempterrain,

        IR up, IR down – aren’t you forgetting something?? Conduction from the ground to the air is very slow. Convection wouldn’t start for hours at least, if that is what we depended on to get it started. GHGs are really good at absorbing a bunch of IR and colliding with their parcels, transferring it and speeding up convection by just a whole bunch. Oh wait, that means that energy isn’t available to be radiated anywhere until those bits collide with GHGs again!! That could be halfway to the Strat or more!!

        So not only does half of the emitted energy go up, all of the transferred energy goes up. Kinda cuts down on how much can go down now, doesn’t it??

      • Someone’s casting aspersions on slow-learning children by such comparisons.

        What child thinks convection moves faster than the speed of light?

        Or that any amount of altitude can convert ‘down’ to ‘up’?

        Though I admit some children might have trouble grasping that convection pretty much stops at the tropopause.

        ‘Stop’ is such a hard word.

      • Sam NC, by your reckoning adding CO2 cools both the air and earth. This would be unfortunate because then the IR radiation out to space would be reduced even further than the CO2 reduced it, and the earth could not radiate more energy to return to balance. How do you propose the earth radiates more energy again if it isn’t warming? Your climate system will accumulate energy somewhere, and you have just lost it somehow.

      • This is really a waste of time. Sam NC, Kuhnkat and some others are obviously determined that there shall be no GH effect. It’s like an 11th commandment. Unless, somehow, in some tortuous way, it can be argued that GH gases cool the Earth. Then that would be OK, of course.

        I might leave off this for now and go on to a creationist blog/website, and see if I can persuade those guys that there may be something in Evolutionary theory, despite what they may hear in their Church sermons.

        Can it be any harder than this?

      • temp,

        GHG’s are a MODERATING factor. Without GHG’s the equator would be unliveable during the day!! At night you would want a lot more than a Snuggy to keep warm!!

        Sadly the Climatati decided to make it a one way street that is obviously bunkum.

      • Yes, of course, the whole of the atmosphere has a moderating effect on the Earth’s temperature. The extremes of temperature on the moon are much greater than on Earth. It’s not just GH gases that play a role in this.

        However, on average the Earth is still warmer. The Earth has a GH effect and the Moon doesn’t.

        The average temperature of the moon is -23 deg C while the average temperature of the Earth is +15 deg C.

      • Temp,

        without GHGs the atmosphere would be a lot hotter. The surface gas would be warmed conductively and convect away from the surface, warming the atmosphere. The layer that cooled at night would not cool much further, as it could only conduct in turn. This cycle would continue until the whole atmosphere was close to the peak temperature reached by the surface, unprotected from the sun’s direct rays!!

        Bart R,

        “What child thinks convection moves faster than the speed of light?”

        What has a higher bandwidth, a 300 baud modem or a station wagon full of backup tapes travelling at 70mph (old 9 track reel to reel if you think it will help your case)??

        You children are so silly sometimes!!

      • kk

        Your station wagon has 17 km worth of fuel. When it goes 17 km, it finds itself in a place only 300 baud modems work, and tapes are degaussed.

        But that’s okay, because there’s 600 km ahead of nothing but 300 baud modems packed tighter in proportion to Moore’s Law every generation.

        What part of ‘stop’ is so hard?

      • Sam, if you hug someone who’s wearing a sweater, they feel colder to you than if they’re just wearing a thin shirt.

        Yet they tell you that with the sweater on, they feel warmer.

        Now if you add CO2 to the atmosphere, the Earth looks colder when observed by a satellite.

        But down on the ground it feels warmer.

        If you see an essential difference here that makes this a bad analogy, it would be helpful if you would explain how the analogy breaks down. Preferably in simple terms.

      • Well Bart,

        Since you tried to make fun of an old revered analogy that us old time Computer Geeks used to get a laugh out of, I don’t know whether you understood it or not. Did you do the math on parallel versus serial transmission of data and realize that the station wagon going 70mph can actually transfer data faster than a lightspeed 300 baud modem (providing it doesn’t run out of gas, have a flat or get carjacked…)??

        So, let’s do the baby steps for climate. We are talking about transferring energy. Photons, or waves, transfer miniscule amounts of energy at lightspeed. Convection moves large amounts of molecules containing the stored energy from rather large numbers of photons/waves in water vapor, molecular vibration, kinetic energy, and atomic states. In sum, it is a horse race depending on how much evaporation is happening in the environment etc. The photons/waves rarely make it all the way to space on the first step and often end up in the convective station wagon anyway!!

        Kiehl & Trenberth show about 66 W/m2 of radiation making it to the upper troposphere, while about 102 W/m2 make it by thermals and evapotranspiration. The old tortoise station wagon wins again and the lightspeed Rabbett loses!!

      • I think that CO2 added to the atmosphere by people is as hot as it is ever going to get – it is after all the product of that famous oxidising and exothermic reaction. We set fire to it.

        So it can’t be the case that adding CO2 to the atmosphere causes the planet to look colder from space.

        Sam – I think you are the only sensible person here. They are all thinking in terms of nonsense analogies – sweaters for instance. Although we call them jumpers – and have a joke about what you get when you cross a kangaroo with a sheep. A woolly jumper. (LOL)

        I have just understood what you were saying. N2 and O2 impart energy to CO2 and H2O in collisions – which then occasionally emit a photon to space, cooling the world.

        Well – yes – that seems a reasonable part of the puzzle. Although N2 and O2 don’t actually need CO2 to cool down. Anything above absolute zero will emit radiant energy – so a warmer atmosphere will emit more energy to space, and by quite a lot more than proportionally, per the Stefan-Boltzmann equation. But it seems to me to be the case that the atmosphere in general will be more energetic if there are more molecules that interact with photons in the IR frequencies. Then the planet will be warmer and emit more energy to space, with an exponentially increasing radiative flux with temperature. Cooling off again – because it can’t emit more energy than comes in from the sun.

        There is a puzzle here that I need to think on. If anyone wants to continue this – I suggest that we take it to the bottom of the thread.

        Catch you later Sam.

      • Vaughan,

        Given that simple analogy you wrote, my conclusion is that you did not read what I, Jim D and tempterrain had written. Go and re-read them, and then come back with a better question or a better statement for me to respond to.

        tempterrain,
        I said CO2 cools itself when it releases IR photons in all directions. Where does the energy come from to restore CO2’s original energy state?

        When you are unable to reason, don’t divert to childish statements, which only show you are running away from a good discussion without a good reason. You cannot even reason, and you expect me to learn from you?

        Bart R,
        As usual, your reply is childish. This blog is for serious discussion, and there is no point in replying to childish comments. Come back again when you can write something like a grown-up.

        Jim D,
        When you are unable to think of something better in reply, you divert. Let me ask you one more time: ‘When CO2 radiates its IR energy by releasing photons in all directions, it cools itself. Where does the energy come from to restore CO2 to its original energy state? CO2 would get cooler and cooler by emitting more photons if there were no energy to replenish it and restore its original energy state.’

      • SamNC, I was carrying your idea to its unworkable conclusion. With more CO2, less radiation gets out to space. How should the earth adjust to get more radiation out? There is one obvious answer.
        You should know the answer to your question too. The IR cooling is opposed by convection from the surface which warms and balances it.

      • kk

        Wasn’t trying to make fun of the old revered analogy.

        Was successful at dissing your hopelessly misunderstood use of it.

        But as you’re now getting Perry #226 backwards too, I can leave the old wood panel modem alone and ridicule you on that instead.

        Of all the many versions of the fable, there is one common element: the Hare stops before the race is over.

        Let’s call the point the Hare rested, gave up, or was fooled, the tropopause point.

        From the tropopause (where convection practically stops) to the end of the race, the TOA, is another 600 km.

        Your 17 km is on the scale of a rounding error, and could be described as miniscule.

        Which still doesn’t matter, because heat isn’t transmitted to the vacuum of space by convection, but almost only by radiation (because we mostly ignore the really miniscule magnetic transfer).

        Also, a mere quibble, but on the scale of 102, 66 is hardly miniscule. (It’s over half, in case the math troubles you.)

        Likewise, convection existed before 1750; the net increase in heat carried up by convection (remembering convection also returns air down, and that air returns marginally warmer now than it did before 1750), really would be miniscule. It would also vary with temperature, humidity and other factors, so not exactly a predictable number, other than ‘approaching insignificant’.

        Likewise too, the effects of convection at elevating the solar tide would only amplify the GHE by increasing the net distance to TOA, albeit a minor consideration overall.

        On the other hand, with an increase in concentration from 280 to 390 ppm – since ppm is a volumetric concentration and single photons travel in paths – the mean path length increases by some function of the cube as concentration increases linearly… which it gets to do all the way to the end of the race.

        Unlike convection.

      • BartR,

        “Someone’s casting aspersions on slow-learning children by such comparisons.

        What child thinks convection moves faster than the speed of light?

        Or that any amount of altitude can convert ‘down’ to ‘up’?

        Though I admit some children might have trouble grasping that convection pretty much stops at the tropopause.

        ‘Stop’ is such a hard word.”

        It is amazing how arrogant people get caught out over the simple stuff.

        http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/TFK_bams09.pdf

        Take a look at Trenberth’s cartoon. In it you will see 17 W/m2 labelled THERMALS!! What else would I include this in except convection?? As for your extending this outside the atmosphere, it is silly. Your statement did NOT include any parameters about WHERE radiation is faster than convection. Since convection was in question, it can only reasonably be interpreted as applying where convection actually exists: between the surface and the tropopause. In that region convection transfers more energy than radiation, every day, all day, and nights also.

        Sorry you are such a sore loser over forgetting the basics.

        Now, your last post appears even more scattered than mine often are. I hope you get better!!

  42. Pekka Pirilä, 8/26/11, 10:38 am, CO2 residence time discussion

    PP: You can easily find thousands of references related to the Revelle factor.

    The ones that count are those reported by IPCC in TAR or AR4, or from references cited in those Reports. Nothing else is significant because those reports supply the model for the existence of AGW. If another report agrees with IPCC in some regard, that report must be shown to be an independent source for that regard. Otherwise it is redundant and not determinative. If that other report disagrees, it is merely argumentative, not determinative. The IPCC Reports must be debunked, if at all, on their own merits.

    PP: One example of lecture material explaining it is here.

    The link showed a URL, but unfortunately would not open. I searched for the course number, ES427, along with Revelle at the site eng.warwick.ac.uk, and found course material titled Ocean Chemistry from Dr. G. P. King, rev. 11/1/04. I am familiar with this document, and could find nothing of value in it but overly restrictive assumptions (e.g., surface layer equilibrium again) and an approximation for a thin, diffuse layer that disappeared in his derivations.

    King’s development makes d[CO2]_ml (a differential molecular CO2 concentration in the mixed layer) vanishingly small. That is a bootstrap fallacy, also known as the fallacy of begging the question. If the Revelle factor exists, then the atmosphere becomes a buffer holding excess CO2 (and the sea a buffer against it, not, as assumed by IPCC and King, a buffer holding only the ACO2 species), the differential from equilibrium in the mixed layer would be negligible, AND Henry’s Law coefficient would have a novel sensitivity to the state of the surface layer. Thus King’s derivation assumes what it wants to demonstrate: the existence of the Revelle Factor.

    What appears to be reality is that the Revelle Factor is a fiction, that Henry’s Law holds. That is, CO2 readily dissolves in a surface layer not in equilibrium, and Henry’s coefficient is unknown. In fact IPCC and its authorities rely on CO2 dissolving more readily as wind speed increases, which causes the surface layer to become more turbulent. Because the surface layer is not in equilibrium, it can and must act as a buffer holding excess CO2. IPCC errs by moving that buffer of excess CO2 from the surface layer to the atmosphere.

    PP: You didn’t only criticize the Georgia Tech course based on the irrelevant comment that Beer-Lambert law is valid also for multichromatic radiation, when the absorption coefficients happen to be the same, but you implied clearly that this trivial extension would change essentially the outcome. Furthermore you claimed that changes to the presentation of Beer-Lambert law would be done to justify somehow suspect conclusion on the behavior in the atmosphere. This proves that your point was not only irrelevant sophistry, but you really tried to perpetuate wrong conclusions.

    Your characterization of my critique is not accurate, and the implication you allege puts words in my mouth. You have a professional duty as a scientist to quote me exactly in both regards, and to do so as a professional courtesy. I dispute the correctness of either, and the merit of your subsequent conclusions.

    PP: If you are now willing to admit that the Beer-Lambert law does not at all work for the total IR radiation in atmosphere, that’s fine. Is that your present view?

    Allow me to explain an experiment I performed. It is (1) too complex to present fully here; (2) not yet ready to publish on my blog, in part due to lack of motivation, because (3) (a) radiative transfer is irrelevant to radiative forcing (because RT, as precise as it is, is too sensitive to instantaneous atmospheric conditions, which are unknown, compounded by it being nonlinear) and (b) global warming is predictable from the Sun with a simple transfer function, making the GHE and radiative forcing irrelevant at the outset.

    The experiment comprised computing radiative forcing using David Archer’s online MODTRAN routine across its full dynamic range of CO2, from 1 to one million parts per million. The computation for the Standard Atmosphere, s(C), is shown logarithmically at this link.

    http://www.rocketscientistsjournal.com/_res/F22|4BEERS_slog.jpg

    The curve s(C)_est(σ) is the best-fit approximation to s(C) using four Beer functions of the form Beer_j(α, Δ) = Δ·(1 − exp(−αC)), where C is the concentration. In the graph shown, σ = 0.53 Wm^-2, computed for 10 integer values of C per decade for a total of 60 points. The parameter Δ is the saturation level for each Beer function. Repeating the experiment for the tropical atmosphere produced similar results.

    To the extent that Archer’s MODTRAN program is representative, and within the accuracy reported, the following conclusions hold. Atmospheric absorption is well-represented by a set of four Beer’s Law equations of approximately equal radiative forcing, with approximately evenly spaced absorptivity coefficients. This is approximately equivalent to dividing spectral absorption strength into four 1.5 decade wide bands and computing an empirical absorption coefficient for each band. It is equivalent to approximating the absorption coefficient as a function of wavenumber by a new set of just four absorption coefficients, one for each of four discrete levels. MODTRAN has no significant region of zero absorption, that is, no observable window. The deepest Beer functions may be conjectures.

    To answer the explicit question posed, the Beer-Lambert Law survives. It holds for the atmosphere as modeled by the radiative transfer algorithm. Radiative forcing as estimated by radiative transfer is approximately the sum of four Beer-Lambert Law equations. The first two Beer-Lambert Law equations are sufficient for estimating the climate sensitivity for as much as double the present-day level of CO2 concentration.

    This model for RT holds promise for yielding a computationally efficient estimate of atmospheric forcing, including producing a practical average over various models for lapse rates, CO2 concentration variability, and diurnal and seasonal effects, among others.
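    The four-Beer-function decomposition can be written down directly. A sketch only: the (Δ, α) pairs below are illustrative placeholders with roughly evenly spaced absorptivities, not the values fitted to the MODTRAN output.

```python
import math

# Radiative forcing approximated as a sum of four Beer-Lambert saturation
# terms, s(C) = sum_j Delta_j * (1 - exp(-alpha_j * C)). The (Delta, alpha)
# pairs are illustrative placeholders, not the values fitted to MODTRAN.
PARAMS = [(8.0, 3e-1), (8.0, 1e-2), (8.0, 3e-4), (8.0, 1e-5)]

def beer_sum(C, params=PARAMS):
    """Approximate forcing (W/m^2) at CO2 concentration C (ppm)."""
    return sum(D * (1.0 - math.exp(-a * C)) for D, a in params)

# Each term saturates at its Delta, so s(C) is bounded by sum(Delta) = 32;
# at low C the strong band dominates, at high C only the weak bands still grow.
assert beer_sum(0.0) == 0.0
assert beer_sum(100.0) < beer_sum(200.0) < 32.0
```

    The design mirrors the description above: each band gets one empirical absorption coefficient, and the total is a simple sum of saturating terms.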

    PP: The controversy on the Bern model is very similar in its nature. You try to discredit it based on false arguments.

    Scurrilous nonsense. Similar to what, pray tell? Your naked accusation of false arguments has no merit. Next time you bother anyone with your opinion about the validity of the Bern equation, you should first explain how the four reservoirs, a_0 to a_3, in that equation could be feasible in nature.
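    For readers unfamiliar with the equation under dispute: a Bern-type model writes the airborne fraction of a CO2 pulse as a(t) = a_0 + Σ a_i·exp(−t/τ_i), with a_0 the fraction that never decays on the model’s timescale. A sketch using the coefficients commonly quoted for the Bern2.5CC model in AR4; treat them as illustrative here.

```python
import math

# Bern-type impulse response: the fraction of a CO2 pulse still airborne
# after t years is a(t) = a0 + sum_i a_i * exp(-t / tau_i). Coefficients
# are those commonly quoted for Bern2.5CC in IPCC AR4 (illustrative here).
A0 = 0.217
TERMS = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def airborne_fraction(t):
    """Fraction of an emitted CO2 pulse remaining in the atmosphere at year t."""
    return A0 + sum(a * math.exp(-t / tau) for a, tau in TERMS)

print(round(airborne_fraction(0.0), 3))   # 1.0: the whole pulse at t = 0
print(round(airborne_fraction(100.0), 2)) # a sizeable fraction still airborne
```

    The a_0 term is what produces the “long tail lasting hundreds of millennia” language quoted at the top of the thread, and it corresponds exactly to the reservoir structure being challenged here.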

  43. Another relatively inert gas to look at for residence time behavior is Nitrous Oxide (N2O). This has a residence time estimated anywhere from 5 to 200 years.
    http://www.gly.uga.edu/railsback/Fundamentals/AtmosphereCompV.jpg

    It also has a similar hockey stick increase to atmospheric CO2 rise, but occurring earlier IMO.
    http://www.lenntech.com/images/pastn2o.gif

    This comparison may allow us to resolve some arguments and to reinforce others. For one, I contend that this large uncertainty over the N2O residence times suggests that it also has a fat tail. As I said elsewhere in this thread, fat-tail distributions often don’t have a statistical mean and so a range of numbers are typically given.
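    The claim that fat-tailed distributions can lack a mean is easy to illustrate with a Pareto tail (used here purely as an example; no claim that N2O residence times actually follow this density):

```python
import math

# For a Pareto-tail density p(t) = alpha * t**(-alpha - 1) on t >= 1, the
# mean integral E[t] = int_1^inf t * p(t) dt converges only for alpha > 1.
# Truncating the integral at T shows the divergence directly.

def truncated_mean_integral(alpha, T):
    """int_1^T t * p(t) dt for a Pareto(alpha) tail density."""
    if alpha == 1.0:
        return math.log(T)  # logarithmic divergence at alpha = 1
    return alpha / (alpha - 1.0) * (1.0 - T ** (1.0 - alpha))

# alpha = 2: converges to alpha/(alpha-1) = 2 as T grows.
# alpha = 1: keeps growing with T, so no finite mean exists.
print(round(truncated_mean_integral(2.0, 1e6), 3))
print(round(truncated_mean_integral(1.0, 1e6), 1))
print(round(truncated_mean_integral(1.0, 1e12), 1))
```

    This is why, for such distributions, a range of numbers is reported rather than a single mean residence time.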

    As a control we can also look at Methane (CH4). This gas also shows a significant relative increase yet it is much more reactive and thus has a much shorter residence time.
    All these follow either the industrial revolution or the ramp up of large-scale cattle farming or other agricultural practices.

    • Webby,

      “Another relatively inert gas to look at for residence time behavior is Nitrous Oxide (N2O).”

      Could you please give us a definition of relatively inert????

      • The gross dividing line is whether the molecule is endothermic or exothermic on transformation. If it’s reactive, it won’t stay around for long. The inert gases are all the column 8 elements. N2O and CO2 are relatively inert because they are endothermic. Methane is exothermic, so it has a shorter residence time.

      • Webby,

        “The gross dividing line is whether the molecule is endothermic or exothermic on transformation.”

        I see. So since it is relatively inert due to being endothermic, i.e. needing energy to react, it doesn’t react with much. So it wouldn’t react much with exothermic species, or due to the sun shining and providing the energy, or…??

        You did say relatively less reactive, so, even if it reacts a lot it would still be reacting less than something that reacts even more, relatively speaking.

      • The chemists’ term for this is “completely oxidized”. It makes more sense, because endothermic/exothermic requires stating relative to which products.

      • True. Since there are many possible product paths you can aggregate these into the endothermic paths and the exothermic paths, and you can estimate which overall path is more probable. For example, methane with oxygen is the obvious, very common exothermic pathway:
        CH4 + 2*O2 -> CO2 + 2*H2O

        The carbon in this case is oxidized making the reverse very unlikely to occur spontaneously.


  44. Vaughan Pratt

    Further up this thread you stated:

    It is not impossible however that consuming carbon based fuels more efficiently while also cutting over to non carbon based fuels could reduce CO2 emissions by say 10% or 30% or some number. Calling this impossible is defeatist.

    It is then interesting to know in advance by how much the CO2 level will change.

    That’s a relatively easy one, Vaughan, because we are talking about a specifically quantified estimate of the reduction in CO2 added into the system.

    Humans now emit 34 GtCO2/year into the system, so a 30% reduction would mean a reduction of 10.2 Gt CO2/year.

    Let’s say the reduction started in 2020.

    And let’s calculate the net reduction in atmospheric CO2 by year 2100.

    10.2 * (2100 – 2020) = 816 GtCO2
    Half of this “stays in atmosphere” = 408 GtCO2
    (see many previous posts regarding this observation)
    Mass of atmosphere = 5,140,000 Gt
    Net reduction in atmosphere = 408 * 1,000,000 / 5,140,000 = 79.4 ppm (by mass)
    Converted to a volume fraction: 79.4 * 29 / 44 = 52.3 ppmv

    Business as usual case
    IPCC “Scenario B1”
    Atmospheric CO2 concentration by year 2100 = 580 ppmv

    If CO2 emissions cut back 30% below today’s value in 2020,
    Atmospheric CO2 concentration by year 2100 = 580 – 52.3 = 527.7 ppmv
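    The accumulation arithmetic above can be checked step by step (all constants as quoted in the comment):

```python
# Check the accumulation arithmetic, using the figures quoted above:
# a 30% cut of 34 GtCO2/yr starting in 2020, with half of the avoided
# emissions counted as "staying in the atmosphere".
cut_per_yr = 0.30 * 34.0              # 10.2 GtCO2/yr avoided
avoided = cut_per_yr * (2100 - 2020)  # 816 GtCO2 over 80 years
airborne = avoided / 2.0              # 408 GtCO2
atm_mass_gt = 5_140_000.0             # mass of the atmosphere, Gt

ppm_mass = airborne * 1e6 / atm_mass_gt  # ~79.4 ppm by mass
ppmv = ppm_mass * 29.0 / 44.0            # ~52.3 ppmv (mass -> volume, M_air/M_CO2)
print(round(avoided), round(ppmv, 1), round(580.0 - ppmv, 1))  # 816 52.3 527.7
```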

    Using the logarithmic relation between CO2 and temperature and the 2xCO2 climate sensitivity assumed by the IPCC models (3.2°C mean value), we have:

    Business as Usual (IPCC Scenario B1):
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 580 ppmv = CO2 concentration in 2100 with “business as usual”
    C2/C1 = 1.4872
    ln(C2/C1) = 0.3969
    ln 2 = 0.6931
    dT (2xCO2 per IPCC models) = 3.2°C
    dT (2011-2100, BaU) = 3.2 * (0.3969 / 0.6931) = 1.8°C
    (checks with IPCC AR4 WG1 projection)

    With 30% reduction of emissions:
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 527.7 ppmv = CO2 concentration in 2100 with 30% cutback
    C2/C1 = 1.3530
    ln(C2/C1) = 0.3023
    ln 2 = 0.6931
    dT (2xCO2 per IPCC models) = 3.2°C
    dT (2011-2100, 30% CB) = 3.2 * (0.3023 / 0.6931) = 1.4°C

    Net reduction in global warming by 2100 resulting from 30% emission reduction in 2020 = 1.8 – 1.4 = 0.4°C

    This would be the estimate for the net reduction in warming by 2100 resulting from a 30% cutback in emissions by 2020, using IPCC model-based assumptions.

    But let’s do a quick reality check on these assumptions using the actual past record of global temperature (HadCRUT3) and atmospheric CO2, as used by IPCC.
    The HadCRUT3 record started in 1850
    Since then temperature has risen by ~0.7°C

    IPCC AR4 WG1 report tells us that CO2 level was ~290 ppmv in 1850

    IPCC also tells us a) that all natural forcing (solar, etc.) caused around 7% of the warming since pre-industrial times and b) that all other anthropogenic forcing components other than CO2 (aerosols, other GHGs, etc.) cancelled one another out over this period

    So the increase in CO2 from 1850 to today caused 0.93 * 0.7 = 0.65°C increase in temperature

    Using the same logarithmic calculation, we can use the actually observed CO2 and temperature data from 1850 to today to project the future “business as usual” and “30% emission cutback” cases:

    C0 = 290 ppmv = CO2 concentration in 1850
    C1 = 390 ppmv = CO2 concentration in 2011
    C1/C0 = 1.3448
    ln(C1/C0) = 0.2963
    dT (1850-2011) = 0.65°C (attributed to increased CO2)

    Business as Usual (IPCC Scenario B1):
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 580 ppmv = CO2 concentration in 2100 with “business as usual”
    C2/C1 = 1.4872
    ln(C2/C1) = 0.3969
    dT (2011-2100, BaU) = 0.65 * (0.3969 / 0.2963) = 0.9°C

    With 30% reduction of emissions:
    C1 = 390 ppmv = CO2 concentration in 2011
    C2 = 527.7 ppmv = CO2 concentration in 2100 with “30% cutback”
    C2/C1 = 1.3530
    ln(C2/C1) = 0.3023
    dT (2011-2100, 30% CB) = 0.65 * (0.3023 / 0.2963) = 0.7°C

    Net reduction in global warming by 2100 resulting from 30% emission reduction in 2020 = 0.9 – 0.7 = 0.2°C
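    Both projections apply the same logarithmic scaling, which is easy to verify (sensitivities and concentrations as given above):

```python
import math

def warming(c_start, c_end, dT_ref, ratio_ref):
    """Warming for CO2 going c_start -> c_end, scaled logarithmically from
    a reference warming dT_ref over a reference concentration ratio."""
    return dT_ref * math.log(c_end / c_start) / math.log(ratio_ref)

# Case 1: IPCC model sensitivity, 3.2 C per doubling (ratio_ref = 2)
bau_model = warming(390.0, 580.0, 3.2, 2.0)   # ~1.8 C
cut_model = warming(390.0, 527.7, 3.2, 2.0)   # ~1.4 C

# Case 2: empirical scaling, 0.65 C over the 1850 -> 2011 ratio 390/290
bau_obs = warming(390.0, 580.0, 0.65, 390.0 / 290.0)   # ~0.9 C
cut_obs = warming(390.0, 527.7, 0.65, 390.0 / 290.0)   # ~0.7 C

# Avoided warming under each assumption
print(round(bau_model - cut_model, 1))  # 0.4
print(round(bau_obs - cut_obs, 1))      # 0.2
```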

    So we have two estimates to answer your question.

    First, if we accept the IPCC model-based 2xCO2 climate sensitivity estimate of 3.2°C, we have a net reduction in global warming from today to 2100 by reducing worldwide emissions by 30% of 0.4°C

    If instead we use the actually observed change in CO2 and temperature since 1850 as the basis, we have a net reduction in global warming from today to 2100 by reducing worldwide emissions by 30% of 0.2°C

    Maybe someone here wants to challenge the calculation here with a better one, but that is the savings in global warming I can see from a 30% reduction in worldwide CO2 emissions.

    Max

    • This is a relative reduction in accumulation assuming a model in which there are no permanent sinks (i.e., infinite residence time). Factoring in residence time makes the calculations far more intricate.

      Unfortunately, I do not see any way at this time to resolve the question of whether residence time is short or long apart from the cum hoc ergo propter hoc arguments of the advocates of a long residence time. I thought I had found one in my dispersion argument advanced previously but, as I stated at August 26, 2011 at 3:03 am, that argument is not so clear cut as I thought it was.

      • I should say, I do not see any way at this time to convincingly resolve the question of whether residence time is short or long.

        On the one side, I have the cum hoc ergo propter hoc arguments of the advocates, but what they take as being a remarkable coincidence otherwise is really not so remarkable. Low frequency time series can either be increasing or decreasing at a given time. If they are both increasing, you can always determine an affine relationship which appears superficially to relate them.

        Some would argue that, yes but we have theories which also predict this behavior. But, these are really only hypotheses, which cannot be proved in a closed loop fashion in the lab. In ages past, eminent knowledgeable people had hypotheses on other matters which also matched observed behavior, from leeches to epicycles to many others, which proved to be wrong. We like to imagine ourselves as somehow better than they, more advanced, more knowledgeable, and better able to discriminate. But, this is mere hubris. Those were brilliant people also, in their time, and they were projecting their hypotheses based on the best information they had up to that time. The only thing we have to differentiate ourselves with them is the accumulated tenets of the scientific method, which abjures leaps of logic and calling it fact before the evidence has clearly established it as truth.

        On the other side, I see unexplained variations in the affine relationship, though these could be due to simple measurement error or small unaccounted effects. I see a very low frequency transfer function between the variables, and I know that such low bandwidth regulatory systems tend not to be very robust – s**t happens to them. And, I see a great many founding assumptions, from the reliability of the proxy record to the assumed quantification of all sources and sinks, which all have to be correct for the prevailing paradigm to hold water (or, CO2 in this case). I am keenly aware of the geometric dilution of likelihood when you have many non-verifiable links in a chain of logic.

        I do not trust the ice core measurements, and they are integral to the entire narrative. There are a great many non-verifiable assumptions involved regarding the properties of entrapment over thousands, or even mere hundreds, of years. They assume particular amplitude characteristics. They assume a particular time lag. They assume frequency independence of both these items. IMO, we have only the data from 1958 to work with, and this is relatively a very short time.

        There is a marked deceleration in CO2 accumulation happening right now. It will be very interesting to see if the evident ~60 year cycle in the global temperature metric continues, and we find ourselves in a cooling spell for at least the next 20 or so years, and if the CO2 measurements start to decrease with a characteristic time lag, as would be expected if the system behaves in what I can only describe inchoately at this time as a natural way.

      • Bart

        This is a relative reduction in accumulation assuming a model in which there are no permanent sinks (i.e., infinite residence time). Factoring in residence time makes the calculations far more intricate.

        This is correct. The calculation is, by definition, a simplification.

        But it tells us that reducing human CO2 emissions by 30% (in 2020) would reduce the IPCC “model-guess-timated” level of 580 ppmv by around 52 ppmv to 527 ppmv, which in turn would theoretically reduce global warming by 2100 by 0.2C to 0.4C, depending on the 2xCO2 climate sensitivity assumed.

        This was in response to a question by Vaughan Pratt asking for an order-of-magnitude impact of cutting back CO2 emissions by 10% or 30% (10% would obviously have less impact).

        “Residence time” is “factored in” in the first method of calculation in the sense that the amount of the human emission “remaining” in the atmosphere is estimated to be around 50%, as has been observed (with the IPCC assumption that only human emissions are changing our planet’s natural carbon cycle equilibrium, itself a very doubtful assumption).

        In the second method I just used the IPCC model-based exponential rate of increase of atmospheric CO2 (scenario B1), which is basically a continuation of the CAGR observed over the past 5 or 50 years (~0.44%/year), IOW “business as usual”. The two methods give the same answer.

        So the answer basically shows an upper-limit impact on our climate of cutting back CO2 emissions and confirms that we cannot, in actual fact, change our planet’s climate perceptibly, no matter how much money we throw at it – which is what Vaughan apparently wanted to know.

        Max

        PS I would agree with you that there is no way “to convincingly resolve the question of whether [CO2] residence time is short or long,” and IPCC seems to concur in its statement that the residence time is between 5 and 200 years (even though the IPCC model projections are apparently all based on a residence time of 400 years).

      • “even though the IPCC model projections are apparently all based on a residence time of 400 years”

        Good point. Once again, a reminder of the paucity of the argument that AGW is “science” dictated by “well known physics”. Well known physics give one a model. Numerical predictions from that model rely on parameterizations of it. And, the parameters are not “well known”.

      • “Paucity” is not the word I was looking for there. I was reaching for “flimsiness”.

    • Norm Kalmanovitch

      The only challenge to your calculation is the 5.35ln(2)=3.71 Watts/m^2 used as the forcing parameter in the IPCC climate models for a doubling of CO2.
      This parameter is based on an observed warming of 0.6°C resulting from an observed increase in CO2 of 100 ppmv over the past century.
      The long-term global temperature record as demonstrated by the IPCC in their first 1990 Report shows that most of this temperature increase is due to the natural recovery from the Little Ice Age of approximately half a degree C per century.
      Apparently the IPCC forgot to subtract this 0.5°C of natural, non-CO2-induced warming from the measured 0.6°C warming, resulting in a forcing parameter that is essentially six times too forceful, since the 100 ppmv increase in CO2 can justifiably be the cause of only 0.1°C of the observed 0.6°C warming.
      This would reduce your calculated values to just one sixth of what you present, and while this is much closer to reality, the inconvenient truth is that most of this 100 ppmv increase in CO2 has been naturally sourced, so these calculations have nothing to do with CO2 emissions from fossil fuels.
      In spite of this challenge I commend you for your common-sense calculations that show the fallacy of the whole climate change issue.

      • Norm

        The only challenge to your calculation is the 5.35ln(2)=3.71 Watts/m^2 used as the forcing parameter in the IPCC climate models for a doubling of CO2.

        Agree.

        I have used the (questionable) IPCC assumptions a) that only 7% of the warming was caused by natural factors (solar) and b) that the remainder was caused by CO2 (with all other anthropogenic factors cancelling one another out).

        If one corrects for this, the impact of reducing CO2 emissions by 30% becomes even more insignificant than I stated in my post to Vaughan Pratt.

        Max

      • Norm Kalmanovitch

        Max

        Your argument is more than good enough so I withdraw my challenge because there is no use debating how insignificant the effect from increasing CO2 concentration is when the concentration keeps rising at 2ppmv/year yet the Earth stopped warming after 1998 and has been cooling since 2002

        Norm

    • The scenario you’ve used for business as usual, B1, is the “low growth” scenario that reaches 580ppm by 2100. In contrast the moderate growth A1B scenario reaches 700ppm by 2100, and A1T is over 900ppm.

    • 10.2 * (2100 – 2020) = 816 GtCO2
      Half of this “stays in atmosphere” = 408 GtCO2
      (see many previous posts regarding this observation)

      Max, I was fine with your arithmetic up to this point. Let me explain why this step gives me pause.

      Suppose your money manager has recommended that you adopt a life style such that your expenses are half your income, allowing you to accumulate the other half in your bank balance.

      After several years of figuring out how to do this, you finally succeed.

      Now suppose you lose your job.

      If your expenses continue to be half your income you will starve to death in a week due to spending zero on food.

      Do you see my point? Just because something has been a factor of a half during a relatively steady state does not mean that it will remain a half if you suddenly change things.

  45. Vaughan Pratt

    BTW that calculation comes out the same if one uses the more complicated formula based on the exponential rate of CO2 concentration, as assumed by IPCC:

    BaU = IPCC Scenario B1
    C2 = 580 ppmv (2100)
    C1 = 390 ppmv (2011)

    Exponential (compounded annual) rate of increase
    CAGR (2011-2100) = 0.447%

    Cintermediate = CO2 concentration in 2020:
    = 390 * (1.0447)^(2020-2011) = 406 ppmv

    Increase after 2020 with no cutback = 580 – 406 = 174 ppmv

    Increase after 2020 reduced by 30%: = 0.3 * 174 = 52 ppmv reduction

    C2 (CO2 concentration by 2100 with 30% reduction after 2020):
    = 580 – 52 = 528 ppmv (same as with simpler calculation)
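    The compounded-rate version can be sketched as well; deriving the growth rate directly from the scenario endpoints sidesteps the 1.0447-vs-1.00447 typo corrected in the follow-up comment (all numbers are as assumed in the comment):

```python
# IPCC scenario B1 endpoints as quoted in the comment; the growth rate is
# derived from them rather than hard-coded.
C1, C2 = 390.0, 580.0                          # ppmv in 2011 and 2100
cagr = (C2 / C1) ** (1.0 / (2100 - 2011)) - 1  # ~0.447% per year

C_2020 = C1 * (1 + cagr) ** (2020 - 2011)      # concentration in 2020
increase_after_2020 = C2 - C_2020
reduction = 0.3 * increase_after_2020          # 30% cutback applied after 2020

print(round(C_2020))           # 406 ppmv
print(round(C2 - reduction))   # 528 ppmv, matching the simpler calculation
```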

    Max

  46. Vaughan

    Typo correction: That should read

    = 390 * (1.00447)^(2020-2011) = 406 ppmv

  47. The so-called “fat tail” is a product of unproven models.

    To me one set of measurements is worth 1,000 models.

    Endangerment Finding Proposal
    Lastly; numerous measurements of atmospheric CO2 resident lifetime, using many different methods, show that the atmospheric CO2 lifetime is near 5-6 years, not the 100-year life as stated by the Administrator (FN 18, P 18895), which would be required for anthropogenic CO2 to be accumulated in the earth’s atmosphere under the IPCC and CCSP models. Hence, the Administrator is scientifically incorrect in relying upon IPCC and CCSP — the measured lifetimes of atmospheric CO2 prove that the rise in atmospheric CO2 cannot be the unambiguous result of human emissions.

    http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html

    • CO2 is largely inert unless it happens across a short pathway to a sequestering site. So if you have a fast rate and a slow rate operational, the mathematical averaging of this will give a fat tail.

      Oxygen molecules have a long residence time (several thousand years) because even though there are fast pathways to sequestering, it is obviously crowded out by the large concentration of oxygen that exists in the atmosphere already. So O2 is more thin-tailed than fat-tailed.

      Gases like N2 and very inert ones like Helium or Argon have long residence times but are not fat-tailed either.

      I just don’t understand this need to believe in an unproven model of CO2 having a residence time of a few years.

      BTW, if you want to associate that link, which is my work, with anything that IPCC gets funded to do, you would be sadly mistaken. I work my analysis after-hours with funding supported by the sweat on my brow. I don’t have an Administrator, whatever that person is supposed to do.
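      The fast-pathway/slow-pathway picture above can be illustrated with a toy Monte Carlo; the rates and the 80/20 split below are invented purely for illustration, not fitted values:

```python
import random
import statistics

# Toy Monte Carlo: each molecule either sees a fast sink (5-year timescale)
# or a slow one (200-year timescale). Rates and weights are illustrative.
random.seed(0)
fast_rate, slow_rate = 1 / 5.0, 1 / 200.0   # 1/yr
p_fast = 0.8                                # fraction seeing the fast pathway

times = [random.expovariate(fast_rate if random.random() < p_fast else slow_rate)
         for _ in range(100_000)]
print(round(statistics.median(times), 1))   # stays near the fast timescale
print(round(statistics.mean(times), 1))     # dragged far out by the slow tail
```

      The median sits close to the short "exchange" timescale while the mean is an order of magnitude larger, which is the sense in which averaging a fast and a slow rate fattens the tail.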

    • netdr,

      “To me one set of measurements is worth 1,000 models.”
      Very true. Why are people here wasting time arguing? Typical laboratory experiments should be able to determine the rough residence times for varying CO2 concentrations.

    • To me one set of measurements is worth 1,000 models.

      To me one model is worth 1,000 models. This is as true of climate models as of runway models. You’d be spending all your time breaking up fights between them.

  48. O.K. Here are a couple of my fits. What I did was fit the online New Zealand 14CO2 pre/post-bomb trace to a first-order decay. This gives a rate constant between 0.022 and 0.056 year-1.
    Then I fitted a simple model to the Keeling [CO2] record, converted into GtC.
    The world began in 1964, when the atmospheric CO2 was 680 GtC, and so in 1965, with human emissions of 3.13 GtC, it went up. However, there was an efflux of CO2 into the aquatic phase and into the biota, with a rate constant of either 0.056 (I) or 0.022 (II) year-1.
    The fits state that the exchangeable reservoirs are either 43 (I) or 47 (II) times bigger than the atmosphere and have rates of either 0.00134 (I) or 0.0005 (II) year-1.
    I modeled 680 GtC in 1964, then added human emissions; I then removed either 2.5 to 5% per year and added the new amount of human emissions. I did this for all the years. The carbon I removed from the atmosphere I added to a second reservoir, and fitted the best size/rate for this second reservoir (the sum of all the reservoirs: chemical, aquatic, biotic, etc.).
    So, what to take from the figures? Firstly, the fact is that the decay of 14CO2 is first-order and has a half-life of between 13 and 26 years.
    Something like 2.5 to 5% of the total atmospheric mass of carbon leaves the atmosphere each year (14CO2). It partitions into a second reservoir which is >40 and <50 times the size. You can try to model for an infinite sink, but it is horrid; somewhere between 40 and 50 it is.
    Here are the pictures.

    http://i179.photobucket.com/albums/w318/DocMartyn/NewCO2streadystatewith14CO2.jpg

    http://i179.photobucket.com/albums/w318/DocMartyn/NewCO2streadystatewith14CO2II.jpg
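    The accounting described above can be sketched as a toy two-box model. The rate constant, the reservoir ratio, and the linear emissions ramp below are illustrative stand-ins, not the fitted values from the comment:

```python
# Toy two-box model: an atmosphere exchanging with one lumped reservoir
# ~45x its size, chosen to be in steady state in 1964. All parameters
# here are illustrative, not DocMartyn's fitted numbers.
k_out = 0.04            # 1/yr, atmosphere -> reservoir (between 0.022 and 0.056)
ratio = 45.0            # reservoir/atmosphere size at equilibrium
k_in = k_out / ratio    # return rate chosen so 1964 is a steady state

atm = 680.0             # GtC in the atmosphere, 1964
res = ratio * atm       # lumped exchangeable reservoir (ocean, biota, ...)
for year in range(1964, 2012):
    emissions = 3.13 + 0.08 * (year - 1964)   # GtC/yr, crude assumed ramp
    net_uptake = k_out * atm - k_in * res     # net flow out of the atmosphere
    atm += emissions - net_uptake
    res += net_uptake

print(round(atm))   # ends well above 680 GtC but well below 680 + all emissions
```

    The structure (first-order exchange with a finite reservoir tens of times the atmosphere's size) is the point; the printed number depends entirely on the assumed parameters.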

  49. The residence time of CO2?

    Bwahahaha.

    May as well ask the length of the coasts of countries in America’s sphere of political influence.

    A complex function of variables we ill understand and may not be able to identify with ambiguous parameters and armwaved definitions meant to be used in calculations with almost equally nebulous foundations in reasoning, plunked down in the middle of an overheated, politicized, scandal-prone, diatribe-plagued mayhem.

    It practically demands its own name. I suggest “Salbyist Fallacy”. Putting such nonsense in a textbook, of all things. Tch.

    • Bwahahaha.

      May as well ask the length of the coasts of countries in America’s sphere of political influence.

      I assume you made this analogy because you think it is an irrelevant question to ask.

      Well, disorder exists in the environment and it makes a lot of sense to understand it as much as we can. Take a look at the recent Gulf oil spill. I suppose people let out a “Bwahahaha” when concerns were raised about the residence times of the spilled oil and of dispersants such as Corexit. And the same for the Fukushima accident. Yeah, guffaw about how long the radiation would hang around and how far it would disperse.
      In other words, both for understanding and for its practical implications, these are important questions to ask.

      I started looking at these questions not because of climate science but because I got interested in energy. We use up all of our concentrated forms of energy first and then are left with the dispersed variations. For the latter, we have to understand all these points and I really don’t think it is a laughing matter.

      • WHT

        Your assumption is incorrect.

        I make this analogy because the very basis of such analyses is fundamentally so flawed as to be not even wrong.

        We seek answers and forecasts, certainties and solidity, we want to know and some of us perhaps even feel entitled to sure, straightforward answers that fit into neat formulae.

        Well, boo hoo.

        The world is not always built that way.

        We have plenty of smart people — very smart people — forgetting to consider the parameters and domains of the question, else they’d realize the complexity involved and be guided by the precepts of Chaos Theory, and admit that there are many (near) unknowables, and of the knowable ranges of the problem space, there are spatiotemporal interdependencies that tell us CO2 residence time in the atmosphere varies (sometimes profoundly) with factors throughout the biosphere that themselves are variable depending on CO2 levels in unpredictable ways on some spans.

        Which is why Salby’s recent claims ought be treated as no more than one semiplausible scenario among many, and a minor scenario at that.

        Which is why residence time of CO2 ought be treated with less certainty, and all figures between two weeks and 400 years might be considered equally valid within their own scenarios.

      • Which is why residence time of CO2 ought be treated with less certainty, and all figures between two weeks and 400 years might be considered equally valid within their own scenarios.

        Which is exactly why Jaynes and other physicists/information scientists suggest that we model according to our levels of ignorance. If we are uncertain about some value, then we can apply concepts like the maximum entropy principle to constrain the model parameters to what we have more certainty in. The fact that we have uncertainty from two weeks to 400 years suggests that indeed a fat-tail model of the CO2 impulse response function is more appropriate. That is the best approach for reasoning under uncertainty and dealing with limited knowledge.

        E.T. Jaynes, “Probability Theory: The Logic of Science”
        http://omega.albany.edu:8008/JaynesBook.html
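        The maximum-entropy point can be made concrete: averaging first-order decays exp(-kt) over an exponential (maximum-entropy) distribution of rate constants with mean k0 gives the hyperbolic, fat-tailed response 1/(1 + k0·t). A numerical sketch (the value of k0 is purely illustrative):

```python
import math

# Average exp(-k*t) over the max-entropy density p(k) = (1/k0) exp(-k/k0).
# Analytically this Laplace transform equals 1/(1 + k0*t); the midpoint
# rule below confirms it numerically.
k0 = 0.1   # mean rate constant, 1/yr (illustrative)

def averaged_decay(t, dk=1e-4, kmax=5.0):
    """Midpoint-rule average of exp(-k t) weighted by p(k)."""
    total, k = 0.0, dk / 2
    while k < kmax:
        total += math.exp(-k * t) * (1.0 / k0) * math.exp(-k / k0) * dk
        k += dk
    return total

for t in (10, 50, 200):
    print(t, round(averaged_decay(t), 4), round(1 / (1 + k0 * t), 4))
```

        No single exponential with any rate constant matches that 1/(1 + k0·t) curve at both early and late times, which is the sense in which rate dispersion alone produces a fat tail.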

      • A “fat tail” is merely a consequence of parameters chosen for the model. It is that very parameterization which is at issue. Those who assert a “fat tail” response are not drawing on any deep physics which demands a particular response. They are merely expressing their own preference.

      • Fat-tail is a property of the data. The end.

      • Which data?

      • Unless you guys are using the same definition of “residence time,” you’re just talking past each other.

        Unfortunately there is no standard definition of “residence time,” any more than there’s a definition of “God.” There will be arguments about residence time as long as there are arguments about whether God exists.

      • Of course there is a standard definition. You release a single CO2 molecule into the atmosphere and measure the time it takes before it is sequestered out of the atmosphere. This of course is a statistical average of all the possible release points under the nominal operating environment.

        The problem then is not in the definition but in the rather indirect method we have to measure the effect. All of these techniques are indirect because we can only infer what is happening and the only direct evidence that we are ever going to get is if we suddenly stopped emitting fossil-fuel CO2 and measure what happens. Unfortunately this ain’t gonna happen anytime soon.

      • WHT

        There appear to me to be two or more issues with your definition.

        I suggest it is more useful and meaningful to speak of CO2 residence time as the time it takes before your single CO2 molecule (1)(including from methane and CO emissions) is (2)permanently sequestered out of the (3)system.

        This of course is an ensemble of statistically distinct stochastic pathways of all likely (chaotic) outcomes.

        For (silly) example: suppose all other things being equal, and ACE were variant dependent on public perception of CO2 emission cost through primarily sequestration, and CO2 residence time were determined to be a very costly 400 years. The most aggressive drops in ACE would result, thereby effectively reducing the CO2 residence time (possibly even making it effectively negative if sequestration were more efficient than emission).

        Doesn’t the example scenario reflect the elements of the problem space better than your average fat-tailed atmospherics?

        I see Chief frothing about essentially the same issue, only with better diction and more credibility than I, but it appears Vaughan Pratt has us all pegged with his talking past characterization.

        Also there are the measurement issues as you say.

      • I am just using the 7 year exchange rate “residence time” that the short-timers are advocating and showing how that number becomes a fat-tail distribution when you physically model what the molecules are actually doing in the system.

        The tails are fatter than expected and thus gives the CO2 a strong inertia even if the source of emissions is stopped.

        I consider the several examples of derivation approaches that I presented as useful exercises that someone like Prof. Curry can use in her classroom. I would do it myself if I was in that field and actually worked as a teacher.

      • WHT

        I think I understand what your claims are, as they coincide with the training I had long ago in the sort of analyses done prior to Chaos Theory, or by those even now who have problems with this type of analysis (either rejecting it for some reason, or just not bothering with it).

        I have to ask, do you understand my claims as I have clumsily expressed them, and reject Chaos Theory as applied to this situation for some objective reason, or have I made too much of a mess of my explanation?

        I get that I haven’t been tactful, sensitive, or nice in how I’ve said it, but I still have no inkling if I’ve gotten the least part of my point across.

      • It is possible that we are at an impasse, and it is not an AGW vs not-AGW impasse, but a chaotic-systems versus disordered-systems one.
        This is not an uncommon disagreement. Murray Gell-Mann, who started the Santa Fe Institute, the preeminent complex-systems think tank, writes about this issue. He likes to use entropy arguments to reason about systems because when the complexity reaches a certain level, the description can tend to collapse into a more concise representation. That is where I am coming from, and since I don’t have the resources or conviction to go the chaos route, I am looking for the statistical arguments and I use the ideas of people like Edwin Jaynes and other probability gurus to see what I can accomplish. The field is wide open IMO.

      • WHT

        Fair enough, though hardly an impasse.

        For some spans, and there’s no telling how long they might last, your methods may well be sufficient, and it’s beyond me to suggest whether your surmises are right or wrong.

        From the arguments I’ve seen, AGW looks like the likeliest of the alternatives proposed and on spans much larger than a century, cAGW too seems dominant, but I admit to some confirmation bias on my part.

        The risk outcomes of AGW and policy recommendations due costs that follow are largely in agreement with the outcomes of the cost of risks due perturbation by CO2 increase case that I favor for its relative simplicity and the satisfactory levels of evidence for mounting this case.

        Otherwise, AGW +/- is nothing to me in itself. It’s simply a more detailed scenario of the risk outcome of perturbations on a chaotic system.

        For CO2 rise as an external forcing, the scientific proofs and evidence are sufficient, the uncertainty low enough and in general in the same direction as the policy actions that best reduce such risk costs, and, a bonus for me, there’s a Fair Market solution that appeals to my Capitalist philosophy.

        All this coincidental alignment of beliefs and values and attitudes of course cautions me to more skeptical appraisal of my own views.

        So I entertain alternate hypotheses and analyses, I think about them, I examine doubts both reasonable and baseless, clearly plausible and transparently manufactured, on both sides +/-AGW.

        And still, the reasonable argument comes down to this, after all the skeptical evenhandedness I can muster:

        We have a bearing, a direction, that is less costly overall and therefore for each individual holds a better chance of utility, and we have a vehicle for moving that direction honestly and fairly.

        We’re madmen, victims or fools, or cheats in the extreme minority seeking advantage by deceptive means, if we hesitate to reduce global CO2 levels until at least we have much better understanding; so-called economic arguments against this simple truth rely on burdening the many with the free ridership of the fossil few; so-called political arguments against this honest conclusion of science employ the lobbying of public relations experts to steal the democratic voice of individual choice in the market.

      • Check. We all have our own curiosities in the science, policy, or eventual outcome.

      • This is a rather extreme, yet inconsistent, proclamation of skepticism. Sure, natural processes are intricate. But, that does not mean that bulk properties and average behavior cannot be described simply. The entire scientific revolution is based on the fact that, quite often, they can.

      • Other Bart and WHT

        My point of view is extreme and inconsistent and skeptical.

        How is it more extreme or inconsistent than expressions which discount entirely the problems of Chaos in this issue, as the majority of approaches referred to in this blog do?

        I find it laudable to pursue study of each of the scenarios, especially the most likely-seeming ones.

        I think gathering data and increasing understanding of the mechanisms involved is absolutely necessary.

        I think most people will be receptive to explanations involving bulk properties that are factual and accurate, and most people can grasp that some uses of averages are meaningful and some aren’t, if their proponents spell out their assumptions and parameters.

        I don’t pretend to know the entire scientific revolution first-hand, but it seems to me that the other Bart overstates the value of faulty analyses.

        We know enough to know we emit CO2 and alter the CO2 emission and absorption of lands and sea on a significant scale, and therefore ought reduce this impact.

        “We know enough to know we emit CO2 and alter the CO2 emission and absorption of lands and sea on a significant scale…”

        How do we know this? How do we know our input isn’t completely insignificant overall?

        “…and therefore ought reduce this impact.”

        Why? Is Nature inherently benevolent?

      • Nature? *snrx*

        We know a little of the nature of the complex — not merely ‘intricate’ (cute and meaningless euphemism that is) as many linear systems are, but spatiotemporal chaotic — problem spaces and the scales of area (global, involving land and sea, above and below tropopause and to the deeps of the oceans, at every latitude from pole to equator) and time (we’ve been emitting increasing GHG levels since the dawn of the Industrial Revolution and altering land and water for at least so long, and we’re changing our intensities increasingly with time), but not enough to propose to control the chaos (http://fisica.unav.es/~hmancini/downloads/Physrep.pdf).

        We know the systems had some ergodic stability on millennial scales within livable ranges, in some cases so finely tuned as to allow evolution of niche species within some habitats unchanged for verifiable epochs prior to the external GHG forcings we introduced (leaving aside our other activities).

        We know something of the behaviors of parts of these systems are affected by GHG’s in general and CO2 forcing in particular with significant differences on the range of CO2 intensity and the spans of time in question, and we know we don’t know everything there is to know about all such behaviors, but we do know our GHG perturbation is unplanned and uncontrolled with regard to the problem space.

        We know with increase in perturbation, there is increase in Risk, and that we can establish significance by some elementary measures:

        1. Optically, we’re adding about as much GHG impact to the atmosphere in the IR range as the effect of using up a standard pencil on an ordinary 8.5″x11″ piece of paper per decade. CO2 is nonsaturating in this range, and non-overlapping with other IR-absorbing GHGs for much of this range. That in and of itself ought be considered ‘enough’ knowledge to determine an increased element of Risk in a complex system we are dependent on. Even if far greater amounts of GHGs are reabsorbed than are emitted, it’s not the amount (which is still significant, within two orders of magnitude), but the persistence and area of the perturbation that categorizes the Risk profile.

        2. Thermodynamically, there is plentiful argument for substantial temperature sensitivity from adding GHGs to the atmosphere at these levels. The argument need not be proven, so long as it is not entirely invalidated, to take this as an increased element of Risk, too. Again, amount is not the only factor determining significance.

        3. Botanically, changes of CO2 above 200 ppm are known to have effects in plants analogous to hormone changes (because they’re plant hormone changes) that are differentiated across plant species and affect growth and reproduction, principal areas of survivability. The same global long-term external forcing for a separate and interconnected complex system, with unknown and mostly unexamined elements of Risk. As there’s been a 44% change in CO2 concentration in a mere quarter millennium, all new factors that might contribute to this growth might be considered significant, ranked by size, which fossil fuel burning and land use are the top contenders for.

        4. Microbiotically, there are similar observed effects of differential impact on soil organisms of small CO2 level changes. Some speak of ‘priming’ the soil to a state of runaway CO2 emission growth under some conditions. While I can’t speak to the likelihood of such a scenario, it sounds like a new element of Risk.

        5. The oceans, by extension, have like issues of Risk generated by increased perturbation of complex systems.

        So, we know what we know at a level that emphatically demands policy response, especially in light of how much we do not know; or that lack of active policy response itself is a policy response that favors increasing Risk and Uncertainty to the many without consent for the benefit of a few without right or precedent or compensation.

        Nature. Indeed.

      • Under your bed is ‘Where the Wild Things Are’.
        ================

  50. Judith,

    How about a post about atmospheric CO2 seasonal variation (NH/SH difference, latitude distribution, causes, global temperature seasonal variation, SST seasonal variation and its latitude distribution)?

    I find these topics very interesting, but have no time to look into it deeper. Maybe a guest post by one of the knowledgeable denizens? It should be a very interesting discussion.

  51. CO2 residence time depends on its temperature, its location in the atmosphere and the weather conditions, ranging from a few hours to a few years. CO2, having a heavier molecular weight, has a tendency to sink into oceans and to the ground so that plants can grow.

    • Not really on the last part: the atmosphere is well mixed (with the exception of water vapor, which can condense) for molecules with molecular weights up to 200 or more, because of diffusion and winds. Your statement is the well-known “CFCs can’t make it to the stratosphere” fallacy.

      • Oh yes, by residence time Eli assumes you mean the time for any single CO2 molecule to remain in the atmosphere

      • Eli,

        I did not say CO2 could not be diffused to the stratosphere by wind, or gain that height from its temperature (or energy). But in general, a gas with heavier molecular weight is denser and tends to sink more readily to ground level. Otherwise you wouldn’t install CO alarms below bed level and smoke alarms at ceilings.

      • you don’t install smoke alarms at ceilings.

        A plume of heated air and smoke will rise because of a real buoyancy effect of the enclosed volume.

        As Eli said, heavier molecules like Radon will tend to sink but there you have differences of almost an order of magnitude in molecular weight.

      • “As Eli said, heavier molecules like Radon will tend to sink but there you have differences of almost an order of magnitude in molecular weight.”

        There you are. What are you trying to argue?

      • Radon has a molecular weight of what like 220?

        But you are the one that brings up the CO detector location theory, saying it should be at the level of your bed. CO has the same molecular weight as N2 which is like 78% of the atmosphere.

        Good gawd man, you put the CO detector near your bed because that is where you are breathing while you are asleep! Pardon the pun, but are you intentionally this dense?

        Or are you an agent provocateur planted here to display how naive and non-knowledgeable the skeptics are?

      • You did not digest what people said, and you distorted it.

      • How true. These people tend to use these non-scientific intuitive ideas to reason about what’s happening. This is how they think “Let’s see CO2 has a molecular weight of 44 and N2 is 28, therefore CO2 will sink. I know this because Helium is very light and a balloon filled with it will rise, so CO2 will rise too!”

      • “These people…”. Who are you, H. Ross Perot?

        Categorizing people into “Us” and “Them” is a strategy for self-affirmation, not success. It may make you feel good, but it is unseemly in public.

  52. It is amazing how many threads are overwhelmed by semantics. I think Judith words some topics just to let people see how confused they are.

    • Dallas,

      List a few so that your statements are more solid not adding to confusion.

      • I haven’t made that many comments on the topic, other than that CO2 residence time is useful in determining which biofuels are better for reducing the rate of CO2 added to the atmosphere. Atmospheric lifetime is a different subject: how long it would take the CO2 level to return to pre-industrial, or to any particular level you select. Residence time affects atmospheric lifetime if you consider biofuel crops that, from planting to harvest, take less time than the average CO2 residence time of about 7 to 10 years, versus using older sources of carbon.

      • Dallas – I think you’ve identified a principal reason for many of the arguments here, and for confusion in the minds of some of the arguers. Two salient examples are the terms “average CO2 residence time” and “cooling”, both of which have been used to mean different things by different people. To try to dispel some of the confusion, I’d like to summarize the main conclusions that are well established. Any further disagreements should probably involve the details, and be most usefully conducted outside the context of this comment.

        1. Average CO2 residence time. If you put a tag on a bunch of CO2 molecules in the atmosphere and asked how long the molecules would remain aloft, you would find that about half of them had disappeared in an interval measured in a small number of years – five years might be about right. This is a “half life” for individual molecules. Where did they go? Into the ocean, or into trees, shrubs, soil, or other terrestrial reservoirs. On the other hand, you could ask a different question that involves starting with a climate where CO2 concentrations are fairly stable and then adding large quantities of CO2 by burning fossil fuels so that the concentration rises. The question is – if you now stop adding any more CO2, how long will it take for the concentration to drop back to around the stable level it had previously? The answer here is a much longer interval. Much of the CO2 will still be around 100 years later, and a complete return to baseline would take a far longer interval, with the actual length a subject of some uncertainty.

        Why the difference in intervals? In the “tagged CO2” scenario, the molecules are disappearing from the atmosphere, but the reason the 5-year interval is much shorter than any estimate of the time needed for an excess to return toward baseline, is that the molecules that disappear into the land or ocean are being replaced by other molecules leaving the land and ocean and entering the atmosphere. Those rates are not very different, and so the actual reduction in the atmospheric level over 5 years of no emissions would be small – simply because most of the molecules that left the atmosphere were replaced by others that entered it. For more of the quantitative details and uncertainties, readers should review the discussion in the above thread, including relevant references.

        2. Cooling. When an object such as the atmosphere or ocean has thermal energy (i.e. any temperature above absolute zero), it will tend to emit that energy by radiation or other means. That emission – the shedding of thermal energy – can be referred to as a “cooling”. However, the fact that thermal energy is being emitted does not necessarily mean that the object is getting colder. In fact, if it is gaining more energy from some other source than it is shedding, it will actually be getting warmer – its temperature will be rising. Conversely, if it is gaining less energy than it is shedding, it will be getting colder – its temperature will be declining.

        When “cooling” is interpreted as getting colder, it may actually be the opposite of what is happening with an object that is “cooling” in the sense of shedding thermal energy. In fact, the more an object’s temperature is raised, the greater its cooling tendency in the “energy-shedding” sense. An example is an increase in atmospheric CO2. This tends to raise atmospheric and surface temperatures. In turn, the rate at which the atmosphere and surface shed energy by radiation and other modalities will tend to rise. Thus, the warming of the atmosphere will result in a greater cooling tendency than if the atmosphere had not been warmed.

        In the case of a CO2 increase, the initial effect is a reduced heat-shedding (a reduced cooling tendency) leading to a temperature elevation. The higher temperature results in an increase in cooling from the initial low level following the CO2 rise, thereby limiting the magnitude of the temperature rise.

        This isn’t meant as a substitute for an accurate understanding of the greenhouse effect, but rather to illustrate the principle that cooling (increased energy shedding) and cooling (declining temperatures) are two different phenomena that don’t necessarily operate in the same direction.

        Because both “residence time” and “cooling” have been discussed in detail elsewhere, I hope any specific questions involving mechanisms will be directed to those comments. This comment is about semantic confusion.

      • Exactly. This is where Sam NC gets confused, but it really does look like he is trying to better understand. I have contributed to his confusion by discussing the significance of different processes at different temperatures, pressures, concentrations and radiation windows. It is hard to pick one point and not have it confused with another.

      • Much of the CO2 will still be around 100 years later, and a complete return to baseline would take a far longer interval, with the actual length a subject of some uncertainty.

        If we use the concept of a “half-life” of 5 years, which is actually the 1/e time (assuming an exponentially damped response), then the amount of CO2 left after 100 years is exp(−100/5) ≈ 2e-9 of the original amount. This is 2 parts in a billion of the original amount.

        The standard practice is that if scientists present a “half-life” or mean time then the damped exponential is the accepted response to use.

        Whether people actually believe in the 2e-9 fraction remaining after 100 years depends on how much they believe that this is statistically possible, given the immense volume of the atmosphere and the random 3D walk of the molecules (combined with convection) within the atmosphere. That is why I think the current theory is for a much fatter-tail in CO2 residence time.
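To make the arithmetic in that comment explicit, here is a minimal sketch of the single-exponential case; the 5-year 1/e time is the figure quoted above, not an endorsed value:

```python
import math

def remaining_fraction(t_years, tau_years=5.0):
    """Fraction of an initial CO2 pulse left after t_years,
    assuming a single damped-exponential response exp(-t/tau)."""
    return math.exp(-t_years / tau_years)

print(remaining_fraction(100))  # exp(-20), about 2 parts in a billion
```

This is the "2e-9 after 100 years" figure that the fat-tail argument says is physically implausible.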

      • The half life for a simple exponential process is log(2) times the time constant. For a sum of exponential responses, it is rather more involved.

        What we are dealing with here is a sum of generally complex exponentials in complex conjugate pairs. In the limiting case of an infinite sum of real exponentials, this becomes the hypothesized “fat tail” response. But, a simple model should consist of three such terms, one necessarily real, and two complex conjugate pairs, generally begetting a bandpass (or, band-amplifying) response. These “eigenvalues” reflect the third order model consisting of “states” representing the Land, Sea, and Atmospheric reservoirs.

        The long term response depends on the longest time constant. What we are arguing here is, what is that longest time constant? Is there any evidence which conclusively bounds it below, above a minimum level which convicts humankind of being responsible for the observed rise in measured atmospheric concentration? My judgment is that the usually cited evidences fail to establish this beyond a reasonable doubt.
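The three-reservoir picture described above can be sketched as a linear system dx/dt = Ax whose eigenvalues set the time constants. The exchange coefficients below are illustrative placeholders chosen only to show the eigenvalue structure, not fitted values:

```python
import numpy as np

# Illustrative exchange coefficients (1/yr) between Atmosphere, Ocean, Land.
# Placeholder numbers only; the point is the structure, not the values.
k_ao, k_oa = 0.2, 0.19   # atmosphere <-> ocean exchange
k_al, k_la = 0.15, 0.14  # atmosphere <-> land exchange

A = np.array([
    [-(k_ao + k_al), k_oa,  k_la ],
    [  k_ao,        -k_oa,  0.0  ],
    [  k_al,         0.0,  -k_la ],
])

# Columns sum to zero, so total carbon is conserved and one eigenvalue is ~0.
# The other eigenvalues set adjustment time constants tau = -1/Re(lambda);
# the longest tau governs the long-term response being argued about here.
eigvals = np.linalg.eigvals(A)
taus = sorted(-1.0 / ev.real for ev in eigvals if abs(ev.real) > 1e-9)
print(taus)
```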

      • In the limiting case of an infinite sum of real exponentials, this becomes the hypothesized “fat tail” response.

        The sum of exponential variates turns into a convolution, which then leads to a centrally limited Normal distribution, which is definitely not fat-tailed.

        My point is that fat-tail responses summed together will lead to a stable fat-tailed CLT distribution, which is a well known effect.

      • We’re on a different page, here. I am talking about solutions to ordinary differential equations which arise from an eigenfunction expansion of the solution of the PDEs governing the process which, in its limit, will become the “fat-tailed” response.

      • We could be on a different page. I know how to deal with ordinary differential equations (such as Fokker-Planck) in the presence of disorder in the diffusion and drift coefficients and these do indeed lead to fat-tail responses. I did some pioneering work in understanding response times in disordered photovoltaic material and modeled the super-long tails in the current with a light impulse. The relative time scales are similar. Ordinarily a good PV material will stop conducting within a few microseconds of having the light turned off. However, disordered material will continue conducting for much longer, with some materials still showing current for seconds after the light is removed. The current is decreased of course, but that is exactly what we are talking about WRT CO2 persistence as well.

        See the link in my handle above if you want to see what I did.

      • “I know how to deal with ordinary differential equations (such as Fokker-Planck)…”

        FP is a PDE. But, I’ll assume you were not paying attention and wrote in haste. On the other point, you know a way to deal with these. There is not one, unique way.

      • Fred,

        1. Residence time is a subjective opinion without experiments.
        2. Cooling – the key point is where the extra heat comes from and the magnitudes involved.

        You said “cooling (increased energy shedding) and cooling (declining temperatures) are two different phenomena that don’t necessarily operate in the same direction.” This is a weird thought that has no thermodynamic and radiation laws in mind.

        Dallas,

        No, you appear to think you understand what Fred is saying, but you don’t. You got confused between energy flow and types of CO2. You need time to study energy and think deeply, or your boss in the biofuels industry will not be happy with you after a few years and you will not go far.

      • Sam NC, some of the issues with definitions are pretty subtle. I understand what Fred is saying, but I have plenty I disagree with Fred on. Fred’s use of “energy shedding” is just a way to differentiate between cooling that reduces the temperature of an object and a situation where the object remains at the same temperature. Everything is “cooling” in the sense of radiating energy. As for types of CO2: some plants may have an affinity for one isotope over another (not my area of interest); in general, CO2 is CO2.

      • “The answer here is a much longer interval. Much of the CO2 will still be around 100 years later, and a complete return to baseline would take a far longer interval, with the actual length a subject of some uncertainty.”

        What is your evidence? What are the error bars? To what extent are you allowing your desired answer to lead your conclusion?

      • The question is – if you now stop adding any more CO2, how long will it take for the concentration to drop back to around the stable level it had previously? The answer here is a much longer interval. Much of the CO2 will still be around 100 years later, and a complete return to baseline would take a far longer interval, with the actual length a subject of some uncertainty.

        Fred, do you believe this because other people say so, or because you understand the relevant mechanisms well enough to infer it on your own?

        If the latter I’d be very interested to see how you prove this. Based on my understanding of the relevant mechanisms it seems obviously false to me.

      • Vaughan – See my comments and those of others upthread. The evidence that much will still be around 100 years later is very strong. It is the length of time for most of the remainder to disappear that is more uncertain, but given observed values for the rates of weathering, plus isotopic studies on the rates at which moieties are transferred from the surface into the deep ocean, an estimate of millennia is not unreasonable.

      • Fred can’t(won’t?) imagine sequestering mechanisms recruited by a higher conc. of CO2.
        ==========

      • Dallas,

      • So you are saying that CO2 has different residence times depending on the type of biofuel being used? Not very convincing to me. Older sources (presumably you mean fossil fuels, or just trees) produce the same CO2, and plants will not differentiate which CO2 to absorb. “Biofuels have no carbon dioxide emissions” is the AGWers’ self-numbing contradiction. They think they have the authority to decree which CO2 molecule is pollution-free.

      • The CO2 doesn’t have different residence times based on the type of fuel consumed. CO2 has an average residence time in the atmosphere before some percentage of it has been reused in some way. You may consider 3 half-lives or 10, but it takes that long for the CO2 in the air to do something that takes it out of the air. Say you pick 15 years as your interpretation of residence time. If you burn a biofuel that took 15 years to grow and use it for energy, it would be carbon neutral. If it took hundreds of years to grow and/or form, like peat, it would be carbon positive: it would increase the level of carbon in the atmosphere. If you used algae, which takes just months to grow, it would be carbon negative, IF it replaced the use of some fuel that was carbon positive. If the use of algae didn’t reduce the use of some other energy source, it would be carbon neutral.

        So residence time is useful for evaluating the things that add or subtract CO2 from the atmosphere.
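Dallas’s bookkeeping can be written as a small rule. A sketch only: the 15-year threshold is the hypothetical figure from the comment, and `displaces_fossil` is its capital IF (the biofuel must replace a carbon-positive source to count as carbon negative):

```python
def carbon_balance(growth_time_years, residence_time_years=15.0,
                   displaces_fossil=True):
    """Classify a fuel per the comment's reasoning: regrowth slower than
    the chosen residence time is carbon positive; faster is carbon
    negative only IF it displaces a carbon-positive source."""
    if growth_time_years > residence_time_years:
        return "carbon positive"   # peat, fossil fuels
    if growth_time_years < residence_time_years and displaces_fossil:
        return "carbon negative"   # e.g. algae replacing fossil fuel
    return "carbon neutral"

print(carbon_balance(15))                            # carbon neutral
print(carbon_balance(0.25))                          # carbon negative
print(carbon_balance(0.25, displaces_fossil=False))  # carbon neutral
print(carbon_balance(500))                           # carbon positive
```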

      • I think there is some valuable insight here. If you think about how the exchange of CO2 between the atmosphere/ocean and biological life-forms are in pretty much steady state, then we can reason about how the diverse bioform constituents vary in their uptake and release of CO2.

        For example if I consider how much carbon gets locked into slow-growing massive deciduous and coniferous forests, my mind immediately thinks about an equivalent residence time for this carbon. This is the amount of time that carbon is stored in place before the forest burns or decomposes. We know that this time can be on the order of hundreds of years in some cases. (Carbon in the middle of a thick redwood ain’t going anywhere very fast.) There is also carbon that cycles yearly through grasses and the like, a much shorter cycle.

        The issue comes up when we look at the ONE residence time given for atmospheric CO2. If this is 5 years, then the probability of a CO2 molecule lasting for one hundred years in the atmosphere is 2 parts in a billion, which is really quite small. However, we know that the proportion of carbon that gets locked into slow-growing trees is NOT 2 parts in a billion in comparison to all the other flora. It is definitely a much larger proportion, and eco-diversity is a known fat-tail effect. Since all these rates are in a mutual steady-state and governed by ecological succession rules over millions of years, something doesn’t sit right.

        One of the rules of stable probability distributions (either thin-tail or fat-tail) is that all of the constituent sub-distributions follow the stable one. Therefore if we have a distribution of biological CO2 reservoir times that is fat-tail which is in steady state with the atmospheric/ocean distributions, then the latter also has to be fat-tail. This then points to the fact that the 5-year half-life is not correct and needs to have a fat-tail to make sense.
        P_flora(t) = P_atm(t)

        I think this is an intriguing idea and one that didn’t cross my mind until I saw Dallas’s comment. Even if what I say above doesn’t pan out, I do agree with Dallas’s carbon balancing argument.

      • Well thanks. It is only a portion of the puzzle and I will gladly leave the rest for others.
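The stability argument above can be illustrated numerically: sums of thin-tailed variates stay thin-tailed, while sums of fat-tailed variates stay fat-tailed. The distributions below are illustrative only, not calibrated to any carbon data:

```python
import random

random.seed(1)
N, n_sum = 100_000, 5

# Sum of exponential variates (thin tail): the tail decays very fast.
thin = [sum(random.expovariate(1.0) for _ in range(n_sum)) for _ in range(N)]

# Sum of Pareto(alpha=1.5) variates (fat tail): the tail stays power-law.
fat = [sum(random.paretovariate(1.5) for _ in range(n_sum)) for _ in range(N)]

threshold = 50.0
thin_tail = sum(x > threshold for x in thin) / N
fat_tail = sum(x > threshold for x in fat) / N
print(thin_tail, fat_tail)  # thin tail vanishes; fat tail persists
```

The analogy to the comment: if the constituent reservoir times are fat-tailed, the steady-state atmospheric distribution cannot be a thin-tailed exponential.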

      • Dallas,

        You cannot just run away with that. Do you understand that fossil fuels absorbed the Sun’s energy and CO2 from the atmosphere and were then buried underneath, without releasing CO2 for millions of years? Do you understand that biofuels absorb CO2 from the atmosphere and the Sun’s energy and then immediately release CO2 back to the atmosphere? Do you know the difference? You should view fossil fuels as carbon negative until they are burnt. Do you know biofuels immediately release CO2 back to the atmosphere? Which is more carbon negative to the atmosphere?

      • Dallas,

        “If you used Algae which takes just months to grow, it would be carbon negative” – No, you are quite wrong. Probably your invention, not even agreed to by some AGWers; carbon neutral at best. If you are burning algae as fuel you produce CO2 the same way as burning fossil fuels or peat. You cannot discriminate CO2 by how long the fuel takes to release CO2 back to the atmosphere and how fast plants absorb it. Fossil fuels are stored Sun energy that did not get immediately consumed after absorbing CO2 from the atmosphere and the Sun’s energy. All plant growth comes from atmospheric CO2, and all animal growth comes from eating plants or animals, CO2 indirectly from the atmosphere. Fossil fuels, stored CO2 with the Sun’s energy buried underneath, are more carbon negative than any of your biofuels, which are immediately consumed and release CO2 back to the atmosphere.

        AGWers like to cheat themselves to distinguish them as carbon neutral.

      • I guess you missed the capital IF.

  53. Re: Fat Tail

    The “fat tail” explanation makes sense if you don’t think about it too much.

    The explanation is that there are 2 processes at work, a fast one and a slow one. So CO2 decays quickly but remains in the atmosphere for a long time.

    So the first process shuts down ? Any subsequent CO2 is only affected by the slow process ?

    The whole theory seems to be based on unverified models. Does it really work the way they theorize ?

    http://www.climate.unibe.ch/~joos/OUTGOING/publications/hooss01cd.pdf

    • So the first process shuts down ? Any subsequent CO2 is only affected by the slow process ?

      That is a good question. There is a whole mathematical discipline devoted to the properties of stable distributions, which is short-hand for the steady-state behavior. This is important for cases where nature applies the same random process repeatedly. If the first process is thin tailed, any subsequent processes applied will keep it thin tailed. However if the individual processes are all fat-tailed, the result will continue to be fat-tailed.

      The first class of stable distributions will tend to narrow, in much the way of the classical central limit theorem. The latter class will not narrow.
      The wiki page is not easy reading but it has some good references:
      http://en.wikipedia.org/wiki/Stable_distribution

      BTW, that paper you link to is a good reference, and it has a good explanation of what a convolution does and the definition of an impulse response function. What is also interesting is that they make no mention of residence time. In fact, no mean value for the residence time of a fat-tailed impulse response exists. It doesn’t make sense because the conventional statistical moments such as mean and variance diverge, and that is why they don’t mention it. What is important is the shape of the impulse response function itself.

      • It is the nature of a chemical equilibrium that there are strong fluxes in both directions to maintain it. Another example is vapor saturation near a water surface. The amount of vapor stays very constant at constant water temperature, but the flux in both directions is quite large compared to the net.

  54. The 5-year residence time is because the atmosphere and ocean are exchanging CO2 all the time. You get it by dividing the atmospheric CO2 content by the ocean-surface CO2 flux. There is about as much CO2 going in each direction, because changes in ocean acidity are quite small. This net flux is small because of a chemical equilibrium at the surface that has to be maintained which constrains the amount of CO2 relative to ocean carbonates. The ocean circulation is slow compared to this time scale, so the atmosphere/ocean system’s carbon content is fairly fixed on a year-to-year basis, except for external sources (i.e. fossil CO2). A major point is that the short time-scale of 5 years is a consequence of the surface chemical equilibrium’s fast action.
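The back-of-envelope division described above, with round stock and flux numbers assumed here for illustration (not values taken from the thread):

```python
# Turnover (residence) time = atmospheric carbon stock / one-way exchange flux.
atmospheric_stock_gtc = 750.0     # assumed: ~GtC of carbon in the atmosphere
one_way_flux_gtc_per_yr = 150.0   # assumed: ~GtC/yr leaving to ocean + land sinks

residence_time_years = atmospheric_stock_gtc / one_way_flux_gtc_per_yr
print(residence_time_years)  # 5.0, the short turnover time under discussion
```

Note this says nothing about how fast an *excess* decays; it only measures the gross exchange rate.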

    • There is about as much CO2 going in each direction, because changes in ocean acidity are quite small.

      Agree, because the directional quantities have to be equal in the steady-state. This is analogous to the solar radiation steady-state. It has nothing to do with the exact mechanism or the speed of that mechanism. Perturbations from steady-state are important though.

      A major point is that the short time-scale of 5 years is a consequence of the surface chemical equilibrium’s fast action.

      Consider a sequence of steps, some with fast reaction times and some with slow. The overall rate is determined by the slowest reaction step in the loop. This is known as a long-pole-in-the-tent effect. So if there was a long diffusional step involved in the CO2 molecule getting from point A to point B, and then the point B to point C step was this fast surface reaction, then the overall length of time would be governed by the first step.

      • Yes, this is more or less the dynamic in question. Now, how do we resolve the “long pole in the tent”?
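The “long pole” effect shows up in even the simplest series kinetics. A sketch with made-up rates: a process A → B → C with a slow first-order step k1 (diffusion) feeding a fast step k2 (surface reaction) progresses at essentially the slow rate:

```python
import math

k1, k2 = 0.01, 1.0  # 1/yr: slow step vs fast step (illustrative values only)

def fraction_reaching_c(t):
    """Fraction converted to C by time t for first-order A -> B -> C kinetics."""
    return 1.0 - (k2 * math.exp(-k1 * t) - k1 * math.exp(-k2 * t)) / (k2 - k1)

# After 100 yr the overall progress is nearly that of the slow step alone:
print(fraction_reaching_c(100.0), 1.0 - math.exp(-k1 * 100.0))
```

So a fast surface equilibrium does not by itself guarantee a fast overall drawdown if a slow transport step sits in the chain.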

  55. Fred Moolten

    Your deliberation on CO2 residence time is interesting, but I think the key take-home points for readers to remember from all the discussion here are

    – in an earlier post you concluded that this was around 100 years,
    – IPCC states that it is between 5 and 200 years, but cannot be defined accurately,
    – the Lam study tells us that models cited by IPCC have used an average of 400 years for climate projections, although there are no empirical data to substantiate this value,
    – data presented by Zeke Hausfather at a Yale climate conference point to a half-life of 80 to 120 years, with a “fat tail”
    – an earlier model study by David Archer comes up with essentially the same conclusion as the ZH data,
    – if these data are correct and the half-life formula holds for the beginning of the curve, we should expect that around 2 ppmv are removed from the climate system per year at today’s concentration of 390 ppmv
    – there is no statistically robust correlation between annual human emissions and annual change of atmospheric CO2 concentration, although averaged over several years, atmospheric concentration increases by roughly 50% of the human emission,
    – this is roughly equivalent to a removal of 2 ppmv per year,
    – the amount of CO2 “remaining” in the atmosphere over a year appears to be more closely correlated to the change in global temperature from the previous year, with a higher percentage “remaining” after warming.

    So, at the end of a long thread we still do not know what the residence time of CO2 in our climate system really is. Since the natural carbon cycle is so much larger than the amount of human emissions, it is very difficult to estimate the residence time of human additions.

    If we forget the hypothesis presented by Professor Salby for now, we can assume that if human CO2 emissions were stopped abruptly today, the level of atmospheric CO2 would start to reduce fairly rapidly at first (at 2 ppmv/year?), but then flatten out before asymptotically reaching a new equilibrium at some point in the distant future.

    What we have also seen is that even if we were to reduce all global CO2 emissions by 30% starting in 2020 (a bit more than would result from completely shutting down the entire US economy) we would only see a net reduction of atmospheric CO2 of around 52 ppmv by year 2100, with a corresponding decrease in global warming of 0.2°C to 0.4°C.

    In other words, the whole discussion of CO2 residence time, while interesting, is a fairly meaningless discussion as far as practical implications on our climate are concerned.

    All we can say with fairly good likelihood is that the IPCC climate projections have used a model-derived figure that is most likely on the high side, even by their own estimates.

    Max

    • The rate of removal doesn’t depend on the full concentration (390 ppm), it depends on the difference from the atmosphere-ocean equilibrium ratio. Once that is reached, the removal rate stays proportional to the atmospheric increase rate to maintain that ratio.

      • JimD

        I would suggest that maybe you do not really know with 100% certainty that:

        The rate of removal doesn’t depend on the full concentration (390 ppm), it depends on the difference from the atmosphere-ocean equilibrium ratio. Once that is reached, the removal rate stays proportional to the atmospheric increase rate to maintain that ratio.

        to start off with, as there may be other mechanisms which could be removing CO2 from the climate system besides the ocean.

        Then I would suggest that the “atmosphere-ocean equilibrium ratio” itself is certainly related to the atmospheric concentration or partial pressure (as well as the temperature of the ocean, of course).

        Suffice it to say that it is logical to assume that the “rate of removal” from the climate system is correlated with the atmospheric concentration, so that if 2 ppmv/year are being “removed” today at 390 ppmv concentration, we can assume that a lesser amount would be removed at 300 ppmv, for example (if not, there goes the “fat tail” argument), and conversely a greater amount would be removed annually if the concentration rose to 500 ppmv.

        Any disagreement?

        Max

      • I would say that the 2ppm/yr rate of removal is proportional to the rate of outside addition which is 4 ppm/yr, so if that halved the removal rate would drop proportionately, and would not exceed the addition rate unless you somehow stopped the emission suddenly in which case it would temporarily exceed it.
        There would not be much removal without addition because the equilibrium that existed in pre-industrial times was very steady. The average decline in the past 100 million years has been 10 ppm per million years, which is a quite slow natural rate of change due to carbon sequestration opposing volcanoes. Even century-scale processes weren’t depleting CO2, probably because the deep ocean also remained in equilibrium.

      • “I would say that the 2ppm/yr rate of removal is proportional to the rate of outside addition which is 4 ppm/yr, so if that halved the removal rate would drop proportionately, and would not exceed the addition rate unless you somehow stopped the emission suddenly in which case it would temporarily exceed it.”
        Generally, yes.
        But you are assuming that only humans are adding CO2. This is possible. But since we have been in a warming period since the end of the last ice age, and since, even without humans, CO2 levels rise after the end of an ice age and during warming periods, we definitely have some increase in CO2 level due to “natural causes”. It’s possible that this natural increase is insignificant, but I don’t think we have enough information to determine whether or not the natural increase [or a natural decrease due to the Little Ice Age] is a significant factor.

      • There is an accepted natural increase from Henry’s Law, which gives about 10 ppm/degree. This is why it increases between the ice ages. There may also be increases in volcanic periods. Apart from volcanoes and fossil fuels, there aren’t any other ways to increase the carbon in the carbon cycle.

      • Not that you can imagine.
        =========
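For scale, a sketch combining the ~10 ppm/°C Henry’s-law sensitivity quoted above with assumed round figures for industrial-era warming and the observed CO2 rise (the 0.8 °C and 110 ppm numbers are assumptions for illustration, not values from the thread):

```python
sensitivity_ppm_per_degc = 10.0   # the figure quoted in the comment
warming_degc = 0.8                # assumed round industrial-era warming
observed_rise_ppm = 110.0         # assumed: ~280 -> ~390 ppm, rounded

natural_outgassing_ppm = sensitivity_ppm_per_degc * warming_degc
print(natural_outgassing_ppm)                      # 8.0 ppm
print(natural_outgassing_ppm / observed_rise_ppm)  # under 10% of the rise
```

On these assumptions, Henry’s-law outgassing accounts for only a small fraction of the observed increase.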

      • There is one source of carbon in the carbon cycle – volcanoes. It is of course mediated through hundreds of different biological compartments including that of fossil fuels.

        On geological timescales the inputs from volcanoes and sequestration though mostly the biological carbonate pump is equal. Over lesser timeframes there is obviously a difference as carbon levels change. The sources of the variations are volcanic and biological – as biology changes with temperature, nutrients, available water etc.

      • The Ba-ba-ba-barbaric Bu-bu-bu-bubbing Bi-bi-bi-biosphere.

        H/t anna v.
        ========

      • “There is an accepted natural increase from Henry’s Law, which gives about 10 ppm/degree. This is why it increases between the ice ages. There may also be increases in volcanic periods. Apart from volcanoes and fossil fuels, there aren’t any other ways to increase the carbon in the carbon cycle.”

        Ok, I won’t argue about this number. It is around what I would have guessed.
        I wouldn’t assume as much as 50 ppm/degree, and could even accept as low as 5 ppm/degree [I assume °C rather than °F].
        There is one important aspect of this, and that is the time delay: CO2 lags by around 800 years, according to ice core records.
        So we have two things: about 10 ppm per 1 °C increase in global temperature, and the time frame over which this 10 ppm occurs.
        There is also the duration of the warmer period. Do we have the same 10 ppm increase if the temperature holds at that level for 100 years as compared to, say, 500 years?
        And of course there is no difference between CO2 from manmade sources and natural sources. Meaning if nature adds, say, 20 gigatonnes in a year, then you will measure an increase of about 10 gigatonnes if measuring total global CO2.
        And finally, does it work the same in terms of cooling: does each lowering of 1 °C decrease global CO2 by 10 ppm?

        Would the relatively short period of the Little Ice Age affect CO2 levels significantly?
        And/or is it possible that the warmer periods before the Little Ice Age, and particularly their contrasts and/or their lag, have a bearing not just on levels of CO2 but on their increasing rate at the present time?

        It seems rather difficult to get any precise number, but is it possible that 10% of the yearly increase in global CO2 is due to natural effects?

        And since most of the exponential increase in human CO2 emissions from fossil fuel use has occurred in the last few decades, wouldn’t much of the increase at 1958 CO2 levels [and previous to this] be due to natural sources? For example, in 1960 perhaps 90% may have been due to natural sources, whereas in 2011 perhaps less than 10% is due to natural sources?

  56. Alexander Harvey

    I, and others before me, have wondered if Murry Salby had popped across to Colin Prentice’s office at Macquarie University to discuss the Carbon Cycle. (TAR Chapter 3: The Carbon Cycle and Atmospheric Carbon Dioxide, Co-ordinating Lead Author I.C. Prentice).

    What I do know is that the Macquarie website is carrying the following commentary by Prof Prentice:

    “How we know the recent rise in atmospheric CO2 is anthropogenic”

    http://www.climatefutures.mq.edu.au/eventsandnews/commentary/

    Alex

    • Damage control. It won’t work. The truth is unstoppable.

    • I lost interest after “the uptake of the land must be approximately…” There is quite a bit of room for error in that approximation. For example, a great deal of biological carbon is sequestered in peat, compost, black carbon, feces, etc. each year. Cite required.

    • Thanks, Alex, interesting.
      I hope this puts Salby to rest. New to me here was how important the biosphere seems to be compared to the ocean. Apparently warmer years mean less uptake by the biosphere too. To me, this has the implication that, for a given CO2 level, vegetation growth is significantly suppressed in warmer years. Or maybe it is more burning or decay of biomass releasing CO2. This needs to be explained, and does it have consequences in a warmer climate?

    • Alexander Harvey

      The “rebuttal” to Salby’s lecture by Professor Colin Prentice is pretty weak and does not bring any new arguments or data.

      Here are my comments for what they are worth.

      The amount of CO2 that is stored in the atmosphere is the most accurately known component of the global CO2 budget, from the global network of high-precision observing stations.

      Strike “most accurately” and replace with “only”.

      Strike “high precision”.

      The rate of CO2 increase varies considerably from year to year. In some years, CO2 in the atmosphere has hardly increased at all. In others, almost all of the CO2 emitted has been retained. It has long been known that this variability is related to climate (Bacastow 1976). Generally, El Niño years are characterized by unusually hot and dry conditions over much of the tropics, and by a high rate of CO2 increase. Conversely, the period immediately after the eruption of Mount Pinatubo in 1991 was characterized by unusually cool conditions and a low rate of CO2 increase.

      Yes. And these years are also characterized by increased global average temperatures, again suggesting that the change in atmospheric CO2 level is related to global temperature (viz. Salby).

      The isotope argument has already been rebutted by Salby (and earlier by Spencer), so does not need repeating.

      Salby incorrectly claims that the increase in CO2 (the first dot point) is due to the land biosphere releasing CO2. There are several reasons why this is wrong, the main ones being:

      Strike “this is wrong” and replace with “we suggest that this might be incorrect”.

      1. The rate of accumulation of CO2 in the atmosphere is less than the rate of emission from fossil fuel burning.

      The first argument is weak. The statement proves nothing. The annual variation in atmospheric CO2 is clearly climate-related (see above) and the longer-term geological record shows that CO2 levels have risen several centuries after temperatures rose, so there is no logical reason to assume that longer-term variations in the modern CO2 record are not somehow related to climate, as well.

      2. Precise measurements of the ratio of atmospheric oxygen to nitrogen (O2/N2) concentration have been made since the 1990s…[these] have been going down, as we would expect from fossil fuel burning, but the rate is not as fast as would be expected from the amount of fossil fuel that is burned. Therefore, the land must be taking up CO2, not releasing it.

      Also a weak “argument from ignorance”; there could be several other reasons why the ratio is changing less rapidly than has been theoretically calculated.

      3. Unlike the stable isotope 13C, the radioactive isotope 14C (“radiocarbon”) has a quite different signature in fossil fuel and in biomass…This decline can only be explained if the CO2 rise was mainly due to fossil fuel burning, and not to the release of modern carbon from the land.

      Another argument from ignorance: “we can only explain it if we assume…”

      4. The slope of the relationship between CO2 concentration and global mean temperature, as seen in the interannual variations, is on the order of 10 ppm per degree Celsius (e.g. Frank et al. 2010, Friedlingstein and Prentice 2010). If the same mechanism were responsible for the 20th century increase (in other words, if about 0.7 degrees of warming had caused the CO2 increase) then the CO2 increase would have been less than 10 ppm. This is nowhere approaching what has been observed.

      This is another weak argument. What has been observed? How can we be sure that there are not several factors contributing to increased CO2 levels?

      Salby has made the mistake of confusing a forcing with a feedback…When looking at trends over several decades…we can see how rising CO2 concentration causes global warming (forcing).

      This is pure double-talk. Simply stating that Salby has made this mistake does not make it so. And simply stating that ”we can see how rising CO2 concentration causes global warming” does not make that so, either.

      Overwhelming evidence in the peer-reviewed literature demonstrates that the warming trend in recent decades has been caused by the increase in CO2.

      Ouch!

      Let’s wait a) for Salby to publish his paper with all supporting data and b) for a real rebuttal to his paper, if and when this should occur.

      Max

      • Also, on this:

        “If the same mechanism were responsible for the 20th century increase (in other words, if about 0.7 degrees of warming had caused the CO2 increase) then the CO2 increase would have been less than 10 ppm. This is nowhere approaching what has been observed.”

        This is assuming that the sensitivity is frequency independent. This is a wholly unwarranted assumption, and would be most unusual if true. As I discussed here, all you need is a fairly shallow roll off in response to reconcile the sensitivities to different spectral components.

    • Alex, thanks for the paper. Every time I read some new information on the subject, I get some more clarity and a possible different perspective. Let me take a combinatorial statistics point-of-view on the residence time issue.

      The total CO2 in the atmosphere is about 3000 Pg.
      From Prentice’s paper he said that 123 Pg of CO2 is exchanged between land and atmosphere each year. The two amounts largely balance if we assume a near steady-state. So that means about 123/3000=0.041 of the fraction of atmospheric CO2 is replaced each year. Naively one would expect that the time constant for replacing about half of the amount is roughly 1/.041 = 24 years. But this assumes that there is no replacement of the initial CO2. In actuality, since the system is in steady-state, the CO2 removed is replaced with fresh CO2, so the older CO2 has to compete with the newer CO2 to sequester out. This is a kind of modified geometric progression for the removal of molecules, whereby the amount removed is scaled properly each year with the addition of fresh CO2.
      Straightforward to do this on a spreadsheet, which results in the curve below.

      http://img692.imageshack.us/img692/9013/co2replacementc.gif

      Note the difference between the two curves. The faster dropping curve is a thin-tail response function, whereas the correct formulation displayed as the upper curve has a much fatter tail. This is a dynamic law of diminishing returns for the replacement of CO2 in the atmosphere. After a while the old CO2 gets more and more diluted so that it becomes harder to remove as it competes for sequestering sites. This results in the fatter tail.

      I knocked this off pretty quickly but it makes some sense and roughly duplicates the IPCC Bern curves.
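
      For what it’s worth, the two curves can be reproduced in a few lines. This is a minimal sketch under the assumptions stated above – a 123/3000 ≈ 0.041 yearly exchange fraction, first-order removal for the thin tail, and dilution-limited (second-order) removal for the fat tail – not the actual spreadsheet:

```python
# Thin tail: each year a fixed fraction k of the *remaining* original CO2
# is removed (exponential decay). Fat tail: removal slows as the original
# CO2 is diluted by replacement, i.e. dx/dt = -k*x^2 in discrete form.
# k is the yearly exchange fraction from the comment (an assumption).
k = 123.0 / 3000.0  # ~0.041 per year

thin = [1.0]  # fraction of original CO2 remaining, no replacement
fat = [1.0]   # fraction remaining when diluted by fresh CO2
for year in range(100):
    thin.append(thin[-1] * (1.0 - k))        # (1 - k)^t exponential
    fat.append(fat[-1] - k * fat[-1] ** 2)   # discrete hyperbolic decline

# After a century the dilution-limited curve retains roughly an order of
# magnitude more of the original CO2 than the exponential curve does.
```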

      • WHT,
        Prentice’s figure is 123 Pg of C exchanged, not CO2. About 440 Pg CO2.

        But I don’t think you can get the with-replacement curve this way. That really depends on a forced shift in equilibrium and its kinetics, rather than the kinetics of the approach to equilibrium.

        Thanks. The differences between the curves still hold with that correction. The fat-tail curve has 260 times the amount of CO2 as the thin-tail curve after 50 years. After 100 years, the factor is like 300,000.

        So it really explains the difference between the way opposing scientists are thinking about what a “residence time” means. In the thin-tail case, the mean residence time is 7 years. In the fat-tail case, the mean residence time diverges very slowly. Over the first 1000 years the estimated mean is 27 years, and after 3000 years the estimated mean is 35 years.

        There is really no equilibrium in this case, just a steady-state with possible corrections. The steady state is perfect exchange and then if we start adding molecules by a forcing function, this shows how long it will take them to get out of the backlog and become sequestered. You still have to do a convolution if this continues.

      • Webby,

        previously I asked for more than arm-waving to show CO2 had a fat-tailed distribution. You simply asserted it did. You later claimed that the fat tail is in the data, again asserting it with no facts.

        What you and your fat-tail are missing is that fat-tail distributions are in the ASSumptions about the data. But please don’t let reality interfere with your party. You are having such a good time of it!!

      • Oh yes I did do a great job with it, thank you.
        You must have steam coming out of your ears to witness some dude come in here who just applies a little logic to the problem and is able to crack a solution.

      • Yes, you are doing a GREAT job blowing that smoke up my dress. I am giggling so hard I can hardly type.

        Now, about that data you allege FITS your fat-tail, wanna let us in on the secret of where to find those papers and measurements??

      • You are forgetting that there are no papers or measurements or any controlled experiment possible because we are soaking in the uncontrolled experiment as it unfolds.

        A similar conundrum occurred for the first atomic bomb test. Someone had a theory and they couldn’t test it until they actually blew up an atomic bomb. I suppose there were accusations made to Oppenheimer and Fermi that they were intentionally hiding all sorts of papers and measurements as well.

      • Webby,

        so you are finally admitting the DATA do NOT have a fat-tail?? Good.

        Now continue with your hypothesis. Just be clear that it MIGHT be applicable, but, there is not evidence supporting it yet.

      • Webby,

        if you ever find a steady state in nature you WILL give us a ringy dingy won’t you??

      • When you see somebody say the equivalent of “Don’t look in the desk, there is nothing there!”, that is the time to look in the desk.

        You certainly find the steady-state condition more often than equilibrium. Call me when you find an equilibrium.

      • Funny, you didn’t mention WHERE OR WHAT you think is steady state. Should I laugh now??

      • If there is a flow of free energy through a system, it can’t be in thermal equilibrium, but it can still be in steady state. So, if the sun is shining on the earth and therefore energy is flowing through the system it can’t be in equilibrium. Yet it can be in a nearly steady-state because the incoming and outgoing radiation balances. I am just going according to the definitions.

      • Webby,

        you are apparently more confused than I. Where did you get the idea I made a claim of Equilibrium?? I would have said the same if you had!!

        To make it quite simple for you. How the heck can you have equilibrium or steady state when the energy input to the system is continuously varying over several differing parameters??? You might have a RELATIVELY stable system, but, not much else.

    • It’s too bad climate scientists don’t reason like physicists, better yet like mathematicians, better yet like logicians.

      Prentice simply cites evidence from the IPCC without addressing the question of whether natural causes can play any role at all. Since it’s pretty obvious from the temperature record of a century ago that natural causes played a significant role, claiming they no longer do seems naive in the extreme.

      The right question to ask would be, what proportion of the 0.5 °C temperature rise from 1970 to 2000 was caused by humans?

      One extreme position would be 0%: humans had no influence.

      One might imagine the other extreme would be 100%: it’s all our fault.

      However the latter presumes that without us the temperature would not have changed. What if without us it had actually gone down 0.5 °C?

      In that case humans would have had to drive it up 1 °C to offset nature’s 0.5 °C and then add another 0.5 °C.

      In that case one would call this 200%.

      Looking at the last century and a half of temperature, and the last half century of CO2, I would estimate that humans are responsible for 2/3 of the rise and nature for the other 1/3.
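
      The arithmetic behind these percentages can be made explicit. A tiny sketch (`human_share` is a hypothetical helper and the numbers are the illustrative ones above, not an attribution result):

```python
def human_share(observed_rise, natural_rise):
    # Fraction of the observed warming attributable to humans, given what
    # nature contributed on its own (negative means natural cooling).
    return (observed_rise - natural_rise) / observed_rise

assert human_share(0.5, 0.0) == 1.0    # 100%: nature flat, all human
assert human_share(0.5, -0.5) == 2.0   # 200%: humans offset 0.5 C of natural cooling
assert abs(human_share(0.5, 0.5 / 3) - 2.0 / 3) < 1e-12  # the 2/3 estimate above
```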

      This estimate will make no one happy, and the people in white smocks with torches that kuhnkat imagined gathering around my house will consist of both skeptics and dogmatists.

  57. Chief Hydrologist

    There are multiple processes – carbon flux that is temperature dependent, rock weathering that is dependent on soil moisture, temperature, the acidity of rain and biological activity in soil, and biological activity in the sea that is nutrient- and light-limited. There is an introduction here – puddle.mit.edu/~mick/Docs/carbon-intro.pdf

    The SeaWiFS project – the instrument very unfortunately died this year after 13 years – gave insight into both terrestrial and marine primary production.

    ‘The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) provides global monthly measurements of both oceanic phytoplankton chlorophyll biomass and light harvesting by land plants. These measurements allowed the comparison of simultaneous ocean and land net primary production (NPP) responses to a major El Niño to La Niña transition. Between September 1997 and August 2000, biospheric NPP varied by 6 petagrams of carbon per year (from 111 to 117 petagrams of carbon per year). Increases in ocean NPP were pronounced in tropical regions where El Niño Southern Oscillation (ENSO) impacts on upwelling and nutrient availability were greatest. Globally, land NPP did not exhibit a clear ENSO response, although regional changes were substantial.’

    http://www.ess.uci.edu/~jranders/Paperpdfs/2001ScienceBehrenfeld.pdf

    The movement of carbon through terrestrial systems is a matter of balance between autotrophs (plants) and heterotrophs (animals) – the SeaWiFS measurements suggest that this was balanced in the 1998/2000 ENSO transition although with regional changes one would expect from the nature of ENSO.

    The large changes in primary production – comparable with anthropogenic emissions – occurred in the oceans. The ocean modulates carbon sequestration through the biological pathways of formation of organic carbon and bicarbonate. The processes are biological and not simple chemical equilibria – scavenging carbon from the water using the energy of sunlight to incorporate carbon into tissues and shell. Life in the oceans is not carbon limited. This then takes carbon out of the system into geological storage by the simple process of deposition on the bottom of oceans.

    There are multiple stores and fluxes of carbon for which we have limited data and which are themselves variable in response to a range of conditions. It is a complex system with multiple interactions and little-understood thresholds for abrupt change. You may insist on the results of Archer’s models or some theoretical narrative or other – but we are entitled to be sceptical given the extent of the unknowns. It seems to me to be inconsistent to have any firm conclusion. But as Fred says – after him the problem is resolved and we may only argue about the detail. How droll. I have to admit that I stopped reading then.

    I also note that Webfuscate made the classic boner of claiming ‘pioneering’ work in something or other, as well as accusing me of plotting murder. Robert took it one step further, accusing me of complicity in mass murder in Norway; Jim D shifted the goalposts at least three times; Numbnut made his usual mind-numbing references to horseshit; tt suggested that what I was saying was bullshit. Where was Joshua? Who cares. I just get more intransigent. What is this pissant sophistry that drives these people – with negligible scientific understanding, modesty and decorum – to make idiots of themselves?

    They have lost, or never had, the scientific high ground – it is all unravelling as the world doesn’t warm. Something that is obvious even to realclimate – http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/ – and yet these sock puppets continue to unctuously preach to the house. They have lost the political argument for cap and trade or taxes and they never had a moral high ground – or a scintilla of decency. I have one question for these people. How does it feel to be a pointless and irrelevant loser?

    • I also note that Webfuscate made the classic boner of claiming ‘pioneering’ work in something or other, as well as accusing me of plotting murder.

      Yes, I know that claiming anything makes one a candidate for the crackpot index. Yet, I have this knack of simplifying relatively complex problems and try to take advantage of it. So take a look at my comment above and see what you think of that.

      • But, where is the evidence that such a response exists in the real world data?

      • Something like this has to happen in the real world, because replacement does exist. This leads to dilution which crowds out the remaining molecules.

        I will give you a great example from the world of oil production. What it shows mimics the oil production profile known as hyperbolic decline. In many cases the operators will use water injection to remove the oil. If it mixes (which is not really good but illustrative) the data can follow this same profile because the water will keep diluting the oil. It’s really two rates working with each other, one is a proportional extraction rate and the other is a dilution rate. This tends to extend and thus fatten the tail. I gave an alternative derivation upthread, but the interesting point is that you can do it on a spreadsheet as well.
        column 1 column2
        +$A1*$B1 +(1-$C$1*$A2)
        copy down the columns, $C$1 is the constant

        You can also look it up, found in the discipline of reservoir engineering. They won’t do a good job of explaining it, but it fits the data.

      • This is of course a wrong concept – simply adding and subtracting for a store and assuming a steady state. There is no solution without a steady state – so despite the overweening arrogance there is no prize.

      • Apparently you don’t do engineering work. The quiescent point of a circuit is the steady state operating point and then you perturb about that point. This is bean-counting stuff IMO.

      • I am busily deriving flood hydrographs in between blogivating. The underlying hydrological equation is the differential equation of storage.

        Outflow = Inflow – d(S)/dt

        This can be solved analytically in a special case and numerically otherwise. It gives the typical form of the downstream hydrograph – a reduced peak and an extended period of discharge.

        Typically you know the inflow, calculate the outflow from weir and pipe formulae, and then solve for d(S)/dt at successive timesteps.

        You are essentially assuming that inflow = outflow in the steady state and storage isn’t changing. This is the case with baseflow in the waterway – a steady flow in between rainfall, more or less. It is then perturbed by a storm. You get an inflow which peaks at the time it takes for a water droplet to get from the most distant point in the catchment and then declines in the same way when rain stops. The levels in the storage – let’s say it is an atmosphere analogy – rise and outflow starts to increase. The peak of the output hydrograph is lower and later than the inflow hydrograph. The outflow can keep rising as the inflow declines and quite commonly even exceeds inflows as storage volumes fall.
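
        A toy numerical version of that routing is easy to write down. This sketch assumes a linear reservoir (outflow Q = S/T rather than a weir or pipe rating curve) and a made-up triangular inflow hydrograph, just to show the reduced, delayed outflow peak:

```python
T = 5.0    # reservoir time constant (assumed, arbitrary units of hours)
dt = 0.1   # timestep

def inflow(t):
    # Triangular storm hydrograph: rises to a peak of 10 at t = 5, back to 0 at t = 10
    return max(0.0, 10.0 - 2.0 * abs(t - 5.0))

S = 0.0                          # storage
times, outflow = [], []
t = 0.0
while t < 30.0:
    Q = S / T                    # linear-reservoir outflow
    S += (inflow(t) - Q) * dt    # dS/dt = Inflow - Outflow
    times.append(t)
    outflow.append(Q)
    t += dt

peak_q = max(outflow)
peak_t = times[outflow.index(peak_q)]
# peak_q stays below the inflow peak of 10, and peak_t falls after t = 5
```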

        Not even close – for CO2 there are multiple storages all changing size continuously with CO2 flowing between based on a range of factors that includes ocean upwelling, biological activity and silicate weathering amongst the most important – for which there is little data. You see I am also trained in environmental science – and don’t really expect simple answers for complex problems.

        But – hey – I am still in touch with my roots. The following comes with a PG15 warning for Robert.

        http://www.youtube.com/watch?v=i68cEsALWt0

        .

        The point is that we are trying to define what the residence time of a CO2 molecule in the atmosphere is. We are not solving the hydrology problem as you may think.

        Instead, we are trying to show why the IPCC Bern graph, Archer’s work and others demonstrate the specific fat-tail behavior that they chart, and to do it in as concise and simple a manner as possible. We want to do this so that lay people can understand what is happening.

        OK, here is another way of phrasing the problem. We know that it is a dilution situation that prevents the CO2 molecule from being sequestered quickly. So we can set up the rate equation as:
        dx/dt = -k*x^2
        This is second order, and it shows how the stronger the dilution, the smaller the rate factor gets (the normalized concentration x is always less than 1). The solution to this is
        x = 1/(1+k*t)
        which is that power-law decline I have shown elsewhere and in the graph I linked to.

        This is pretty standard math and it demonstrates why the residence time, according to the standard definition, shows a fat-tail behavior and that a mean time for this value diverges slowly.

        This is the kind of word-problem you might get in high-school. These kinds of problems have real usefulness because they allow people to think about the problem and apply math to help guide their intuition. It really seems like we should be able to deal with the analysis I present on its own merits.

        So instead of just going off on chaos, complexity and other topics we really ought to just apply some elementary principles, establish a base scenario, and then go from there.
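
        The quoted solution is easy to check in a couple of lines. A minimal sketch (the value k = 1/7 per year is assumed purely for illustration):

```python
k, dt = 1.0 / 7.0, 0.001   # illustrative rate constant and Euler timestep
x, t = 1.0, 0.0            # normalized concentration starts at 1
while t < 50.0:
    x += -k * x * x * dt   # forward Euler on dx/dt = -k*x^2
    t += dt

# Compare against the closed form x(t) = 1/(1 + k*t)
closed = 1.0 / (1.0 + k * 50.0)
assert abs(x - closed) < 1e-3
```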

      • “Something like this has to happen in the real world, because replacement does exist.”

        Ouch! How many times does that sad song have to be replayed in history to announce the eminent demise of a beautiful, but mortally flawed, hypothesis?

      • Imminent, even (bloody spell check).

      • I asked for evidence. Your premises are arbitrary. Without real physical evidence, they mean nothing.

      • “Next consider that the rate is actually a stochastic variate that shows a large natural deviation. It will follow an exponentially damped PDF if the standard deviation is equal to the mean:
        p(r) = (1/R) * exp(-r/R)
        If we then integrate over the marginal :
        F(t) = ∫ F(t | r) * p(r) *dr
        this results in F(t) = 1/(1+R*t)”

        So, to define this approach more cleanly, what you are proposing is an ensemble of rates. Perhaps we can think of it as a bunch of differential volumes of CO2 in the air, each of which will follow a different path to ultimate sequestration. And, those paths are distributed with an exponential PDF. Why? Wouldn’t it more reasonably be, e.g., a Maxwell distribution? Doesn’t the fact that you have, seemingly arbitrarily, chosen a distribution in which most of the probability mass is near zero effectively constrain your tail to be long?

        What you are proposing is not what I thought you were proposing. Now that I think I understand, I think you are trying to apply something which was meant for a different purpose to a system for which it is not applicable.
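
        For what it’s worth, the quoted marginalization itself is sound arithmetic: with F(t | r) = exp(-r*t) and the exponential p(r), the integral does come out to 1/(1 + R*t). A quick numerical check (R = 1/7 per year is an assumed illustrative mean rate) – the dispute here is over the choice of PDF, not this integral:

```python
import math

def marginal_F(t, R, r_max=10.0, n=20000):
    # Midpoint-rule integration of F(t) = integral of exp(-r*t) * (1/R) * exp(-r/R) dr
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += math.exp(-r * t) * (1.0 / R) * math.exp(-r / R) * dr
    return total

R = 1.0 / 7.0  # mean rate corresponding to a 7-year time constant (assumed)
for t in (0.0, 7.0, 70.0):
    assert abs(marginal_F(t, R) - 1.0 / (1.0 + R * t)) < 1e-3
```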

      • So… please justify your PDF. And, please point to some real world evidence having to do with the Atmosphere/Oceans/Land system of CO2 exchanges.

      • So… please justify your PDF.

        Here is another one: Say that the rate equation follows this
        dx/dt = -k*x^2
        then you will get the same profile. The rate changes with dilution of the material as I describe in a comment further down the thread. This is a variation of a second-order kinetic rate law, which will create a fat-tail.

        And, please point to some real world evidence having to do with the Atmosphere/Oceans/Land system of CO2 exchanges.

        You bloody-well know that we are in the middle of gathering this evidence.

        You also should know that as a physics theorist (however marginal I may be as a theorist), I don’t have to present the experimental evidence. Experimentalists are responsible for doing experiments and gathering evidence. That is the tradition and you should know this if you are spouting off like you are a scientist. So it is actually you that should hire someone to disprove this theory, or perhaps you can lift a finger and do it yourself.

        I am a grizzled scientist and engineer and know your style. You will try to debate the science and then when you fail, you point to the fact that the evidence doesn’t exist. Well, duh, we are soaking in the experiment as we speak!

      • “Say that the rate equation follows this
        dx/dt = -k*x^2”

        How about just using something that doesn’t automatically skew your distribution toward small rates, necessarily giving you a “fat tail”?

        “You bloody-well know that we are in the middle of gathering this evidence.”

        No need to get testy. A simple “I don’t have any, but I think it merits consideration” would suffice. It would also be nice, in the meantime, if you wouldn’t issue categorical statements like “We know that it is a dilution situation that prevents the CO2 molecule from being sequestered quickly.” That’s kind of annoying.

      • I am giving you a bunch of different ways of solving this problem and none of them seems to be registering. These are all equivalent: the maximum entropy solution I started with, the incremental dilution spreadsheet solution I gave, and now this rate law. They all give exactly the same result because they are all based on physics.

        How about just using something that doesn’t automatically skew your distribution toward small rates, necessarily giving you a “fat tail”?

        It isn’t skewed toward small rates. When the original molecules get diluted enough, of course the rates will start to get infinitesimal. That is actually what is happening. Don’t you get this?

        If I don’t dilute, I get the exponential solution, which is what the short-residence-time people are advocating. But that is clearly wrong.

      • “It isn’t skewed toward small rates.”

        You do not appear to understand that, with the PDF you have chosen, you have more differential mass per unit interval at low rates than at high rates. This automatically gives you a fat tail. Your fat tail is purely a result of your assumed model. And, you have no empirical backing for assuming it.

        “If I don’t dilute, I get the exponential solution, which is what the short-residence-time people are advocating.”

        Or, if your distribution is narrowly concentrated about a mean value. You see, it’s all dependent on what distribution you select.

        As for “which is what the short-residence-time people are advocating”, you are narrowing the field into a monolith, and setting up a straw man to knock down. Whatever “they” are arguing, I am arguing this: there is a fast process, which is the diffusion of CO2 to the land and ocean reservoirs. The rate here is undoubtedly distributed in a narrow band about a mean value. Then, there is a separate category of processes which represent sequestration of CO2 into land minerals and deep ocean precipitates. This one may be at a significantly slower rate. But, they are two distinct processes, and not describable by a single, continuous PDF.

        But that is clearly wrong.”

        But, you cannot say why it is wrong, because you do not have any actual evidence which says it is wrong.

        This is the point where you no doubt will contemplate interjecting the classic cop-out: “oh, you just don’t understand Science”. That is the usual comeback when advocates of a hypothesis find themselves stymied. It seeks to obtain credit from the truism that science is basically a process of hypothesis and test, without admitting that the proposed mechanism is, in fact, merely an hypothesis.

      • You do not appear to understand that, with the PDF you have chosen, you have more differential mass per unit interval at low rates than at high rates. This automatically gives you a fat tail. Your fat tail is purely a result of your assumed model. And, you have no empirical backing for assuming it.

        I chose an exponentially damped PDF of rates assuming the 7-year time constant, which comes from the exchange rate. This is the maximum entropy PDF given the constraint of knowing only the mean and lacking any other knowledge. The spread of rates is balanced between high rates and low rates. There is more differential “mass” at low rates, but this is balanced by the larger range yet faster suppression of probability caused by the exponential damping at high rates. That is why I use the Maximum Entropy formulation, as it is the least biased expected behavior of a physical process for a mean energy condition.
        The fat-tail comes about not from the exponential alone, as you can see from my graph, but you need the dilution of the material over time to slow down the rate.

        Or, if your distribution is narrowly concentrated about a mean value. You see, it’s all dependent on what distribution you select.

        With dilution it will still give a fat-tail.
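The maximum-entropy rate-spread argument in this exchange can be checked numerically. The sketch below is a hypothetical illustration (not WebHubTelescope's actual model): averaging exponential decays over an exponentially distributed (MaxEnt) spread of rates with a 7-year mean gives the hyperbolic decay 1/(1 + r0·t), which is fat-tailed, whereas a single fixed rate gives a thin-tailed exponential.

```python
import numpy as np

# Illustration of the superstatistics argument above (assumed parameters):
# if the adjustment rate r is not a single value but is exponentially
# distributed (the maximum-entropy choice given only a mean rate
# r0 = 1/7 per year), the ensemble-averaged decay
#   <exp(-r*t)> = 1/(1 + r0*t)
# is hyperbolic, i.e. fat-tailed, rather than a thin-tailed exponential.

r0 = 1.0 / 7.0                        # mean rate, 7-year time constant
rng = np.random.default_rng(0)
rates = rng.exponential(r0, 100_000)  # MaxEnt (exponential) spread of rates

t = np.array([1.0, 10.0, 100.0, 1000.0])
simulated = np.array([np.exp(-rates * ti).mean() for ti in t])
analytic = 1.0 / (1.0 + r0 * t)       # closed form of the rate average

for ti, s, a in zip(t, simulated, analytic):
    print(f"t = {ti:6.0f} yr   mean decay = {s:.4f}   1/(1+r0*t) = {a:.4f}")
```

At t = 1000 years the ensemble average is still near 0.7%, far above the essentially zero value a pure 7-year exponential would give.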

      • Webby,

        “Well, duh, we are soaking in the experiment as we speak!”

        Yup, when folk like you get going without any data yet claim the data FITS your imagination, we need more than just waders. Fortunately I have a couple of friends who work at surf shops, so I can get discounts on wet suits!!

      • Yup, when folk like you get going without any data yet claim the data FITS your imagination, we need more than just waders.

        By implication you are saying that the IPCC Bern model and Archer’s work and others who use fat-tail models are also off in the deep end. That is fine; just consider me a person who is trying to understand how that modeled behavior can come about from some elementary principles.

      • “With dilution it will still give a fat-tail.”

        Ah, no. Work it out

      • “With dilution it will still give a fat-tail.”

        Ah, no. Work it out

        I had it in the wrong order. Yes, dilution gives the overall exponentially damped thin-tail response. Any disorder and randomness in the rates will fatten the tail, which is what I am trying to show.

        Really, the only difference we are having is that you seem to be a purist when it comes to PDFs, whereas I am perfectly happy with a super-statistics approach that tries to explain a wide dynamic range and the extended tail.

        I am slowly converging on to your idea of the importance of the permanent sequestration sites. If the CO2 is locked into a tight biotic carbon cycle with the atmosphere, it never really leaves the system (so to speak) and thus is sinusoidally forcing the excess CO2 to stay in the atmosphere. The permanent sites are the only place for the CO2 to leave the system, and that is what generates the long-tails as shown by the IPCC BERN curves.

        In that case the atmospheric residence time is a bit of a red-herring. The actual “effective” residence time is fat-tailed but the boundary of the system has to be enlarged until the enclosing box reaches the permanent sequestering sites to get the really high numbers.

  58. You mean apart from getting the numbers wrong and having a wrong methodology?

    Would you like me to do the convolution maths for you and work out your crackpot index?

  59. Alan D McIntire

    DocMartyn you had some interesting posts on 14 C and atmosphere residency time. You didn’t give any figures, so I’m plugging in my own calculations which I’m sure are overlooking some significant points.
    You gave a 14C ratio of 100 prior to nuclear testing, 160 immediately after the testing, 130 after 15 years and 115 after 30 years.

    You gave an atmospheric CO2 balance of 320 at the start, 338 after 15 years, and 355 after 30 years. Assuming the original 160% stayed in the atmosphere, the ratio would have been 160*320/355 = 144.225 after 30 years solely due to the increase in atmospheric CO2.
    Likewise, the 130 ppm in 1980 would have decreased to
    130*338/355 = 123.775 by 1995 solely due to an increase in atmospheric CO2.

    Using 1995 CO2 levels, the 14C ratios in 1965, 1980, and 1995 were
    144.225, 123.775, and 115.

    Since the extra 14C in the atmosphere would be expected to
    drop off to a factor of (1 – 1/(e^x)) after period X,
    the drop was 144.225-123.775 = 20.45 after period X, and
    an additional 123.775-115 = 8.775 after period 2X.

    Since 20.45 = 0.6997*(20.45 + 8.775)
    (1- (1/e^x)) = 0.6997 (1-1/( e^2x))
    so X = 0.84587, so the 1/e period of relaxation would be
    (1/0.84587)* 15 years = 18 years, and the new zero level
    of 14 C being approached asymptotically would be
    108.4 of the original 100%.
    8.4/44.225 = 19%, so 19% of the CO2 is cycling through the atmosphere, 81% in the oceans or plants.

    • Alan, have a look at the thread I did on steady states. The 14CO2 from Wellington is amount/vol, not a ratio of 14C/12C, although both plots are around.
      I initially thought it was a ratio, but it isn’t, and it states so on the header.
      When you use a ratio you must remember that the zero-order generation of 14C from 14N also dips when expressed as a 14C/12C ratio as the level of 12C increases.

      • Alan D McIntire

        First, I’ll concede that I only used the 3 points given by you, the minimum needed to fit the equation, so given measurement error, my formula can’t be too accurate- equivalent to extrapolating a trend from two points.

        I’m aware of your first point on vol of C14, not C14/C12 ratio; that’s why I calculated the conversion factor to 144.225 in 1965, 123.775 in 1980, and 115 in 1995 by assuming C14 remained constant, then added the additional C12.

        I also realise the zero order generation of C 14 from N14 would also dip, but the half life of C14 is 5730 years, the drop over 30 years would be
        something like 30/5730 =0.53%, so I ignored that negligible fraction
        Multiply by 0.9947 and the 144.225 corrected 1965 fraction of C14 would be reduced to 143.46, and the 1980 fraction would be reduced to
        123.75*0.99735 = 123.42, leaving my original figures in the same ballpark. They’d be even closer to the same ballpark once the production of additional C14 from cosmic rays over the period 1965 to 1995 is taken into consideration.

      • “I also realise the zero order generation of C 14 from N14 would also dip, but the half life of C14 is 5730 years, the drop over 30 years would be
        something like 30/5730 =0.53%, so I ignored that negligible fraction”

        Not what I meant. Assuming that the steady-state ratio is 1 at (constant 14C)/(360 ppm 12C), as the amount of 12C from human combustion increases the ratio falls, i.e. to 0.92 at (360/390 ppm).
        When doing the ratio you have to realize that the N14→C14 production is essentially zero-order, but the ratio drops as the pool of 12C it is being diluted into expands. The ‘true’ end-point of the 14C is therefore elevated by 1/0.92, or whatever the change in the size of the atmospheric pool is between two points.
        The expanding amount of 12C in the atmosphere dilutes 14C, dropping the 14/12 ratio (which can be calculated from Keeling), and the ‘true’ background level of the 14C/12C is changed by expansion of the 12C pool. This is why the delta14C ratio is such a pain in the bottom for calculations.

      • Alan D McIntire

        That’s what I did. The 1965 C14 ratio was 160, with a 320 ppm CO2 atmosphere. By 1995, the CO2 in the atmosphere had increased to 355 ppm. The original 14C ratio would be reduced from 160% to 160*320/355 = 144.225 due to the additional C12 in the atmosphere.

        That’s why I used 144.225 and not 160 in my formula.
        Incidentally, I went to the trouble of taking the half-life of 5730 years into consideration. I was using the latest date you gave, 1995, for calculations.

        Correction for atmospheric replacement of C14, half-life 5730 years.
        Assuming steady state of C14 production by the sun, the 100% will remain constant; the excess over 100% will drop to a factor of 0.9973856 after 15 years, and to a factor of 0.994778 after 30 years. This applies to the adjusted
        160*320/355 = 144.225. With constant replacement of C14 by the sun, the 100 level will remain constant, and the 44.225 excess will
        be reduced to 44.225*0.994778 = 43.994, so the adjusted 1965 C14 balance is 143.994.

        The adjusted 1980 balance is 123.75. Again the 100 level will remain constant, assuming constant C14 atmospheric production.
        23.75*0.9973856 = 23.688

        The new radiation-corrected balances are 143.994 in 1965, 123.688 in 1980, and 115 in 1995.
        Differences are 143.994 – 123.688 = 20.306 and
        123.688 – 115 = 8.688, for a ratio of 20.306/(20.306+8.688) = 0.70035

        Again plugging into the equation
        (1 – e^-x) = 0.70035 (1 – e^-2x), we get X = 0.848965, and the 1/e period would be 15/0.848965 = 17.669 years.
        1/e^0.848965 = 0.427858.

        The new 14C balance would be a 20.306/(1 – 0.427858) = 35.491 reduction from the amended original 143.994, or 108.503.

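The relaxation-time arithmetic above can be checked with a short script. This is a re-derivation under the same assumptions (the corrected 14C balances of 143.994, 123.688 and 115 at 15-year intervals), not an independent analysis:

```python
import math

# Re-deriving the comment's numbers from the corrected 14C levels
# (143.994 in 1965, 123.688 in 1980, 115 in 1995, 15 years apart).
c1965, c1980, c1995 = 143.994, 123.688, 115.0
step = 15.0                         # years between observations

d1 = c1965 - c1980                  # drop over the first 15-year period
d2 = c1980 - c1995                  # drop over the second 15-year period
r = d1 / (d1 + d2)                  # = 0.70035 in the comment

# If the excess decays as exp(-x) per period, then
#   (1 - e^-x) = r * (1 - e^-2x)  =>  e^-x = 1/r - 1
u = 1.0 / r - 1.0                   # e^-x
x = -math.log(u)                    # decay exponent per 15-year period
efold = step / x                    # 1/e relaxation time in years

# Total drop is a geometric series d1 + d1*u + ... = d1/(1-u)
asymptote = c1965 - d1 / (1.0 - u)  # level approached asymptotically

print(f"x = {x:.6f}, e-folding time = {efold:.2f} yr, asymptote = {asymptote:.2f}")
```

This reproduces the quoted x ≈ 0.849, the ~17.7-year e-folding time, and the ~108.5 asymptote.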

      • Alan D McIntire

        That first line should be 160 * (320/355) to give the effective 1995 percentage of C14 as compared to pre nuclear testing ratio of 100.

      • Sounds about right; my best fits give me something similar. We do not see a change in t1/2, as would be the case if the efflux were becoming saturated.

  60. ‘Bob,

    Take another look at the graph. I’m not in disagreement with guys at Realscience .My graph looks just the same as theirs. As the article you linked to suggested:

    “anomalous behavior is always in the eye of the beholder.”

    I suppose if you look hard enough, and tell yourself often enough that it’s cooling, it might be possible to fool yourself.’

    I suppose you must be referring to me? The graph however is by no means the same as yours. For one thing it shows no warming out to 2020 – although if you were capable of reading the articles referred to, you would see that the periods of the climate shifts were indeterminate but could last decades. A guesstimate is sometimes better than nothing – if you don’t fool yourself that it is more than that.

    I think my exact words were no warming – for a decade or 3 more. Just on the real climate post – which bit of ‘warming interrupted’ don’t you understand? I suppose it is difficult being a total loser arguing BS while everyone else has moved on.

    Bob? Let’s do better than that shall we. Might I suggest Chief Hyperbologist, Chief Hung out to Dry, Chief Hysterical, Chief Bogologist – etc

  61. http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-105893

    Hunter,

    I was just saying that CO2 warms with IR – and then in its higher energy state collides with other molecules to spread the energy around.

    Cheers

  62. There appear to me to be some major misapprehensions as to what is going on in the preceding discussions. There is not one type of process with a continuous distribution going on. There are the distinct processes of diffusion of the added anthropogenic CO2 to the land and ocean reservoirs, and from there, two other processes: a recycling of that diffused CO2 back and forth between all the reservoirs, and permanent sequestration processes, which can be broken down into land and sea sub-processes, which can further be broken down into biological and mineral reactions, and even further from there. Ultimately, what we are interested in, to determine residence time, are the permanent sequestration processes.

    There are also replenishment processes going on, through volcanic activity, upwelling from the deep oceans, et al. Without these, the sequestration processes would eventually drain all the CO2 out of the system, and the Earth would die.

    • Bart

      An excellent summary.

      We tend to forget the continuous distribution processes you mention and, instead, fixate myopically on the human impact, as if this were the only thing that really mattered.

      Max

    • Ultimately, what we are interested in, to determine residence time, are the permanent sequestration processes.

      Thanks for this insight. May need a different name for the permanent sequestration time. The atmospheric residence time name is locked in place because it is used to describe gases like helium which won’t sequester until they slowly drift into outer space. It is one of those ontological problems where we want to classify a phenomenon to make it as general as possible.
      I would think the names chosen depend on the box we draw around the relevant system boundaries.

    • Bart, you miss something else: Equilibrium Thermodynamics vs. Steady State.
      The vast majority of people state, and still state, that in, say, the year 1700 the atmospheric, biotic and oceanic reservoirs were in ‘equilibrium’.
      This is a huge assumption, and probably incorrect.
      In the oceans the biotic processes fix 10 times the total biotic mass every year. A large fraction of this carbon is converted into feces and then falls to the depths. It is attacked by microorganisms all the way down, being converted to CO2 and methane, but nonetheless most of the organic snow generated in the top 25 meters of the oceans makes its way to the bottom. The bottom of the oceans is where the vast majority of the world’s organic matter sits, slowly being degraded to CO2/CH4.
      It is not only possible, but likely, that this biotically driven conversion of CO2 at the surface into organic shit at the bottom of the ocean creates a CO2 potential (at least in the form of CO2 activity if not in absolute concentration), with the surface of the oceans denuded and CO2 from the bottom slowly following the concentration gradient UP.
      If the assumption that the CO2 in the oceans was at equilibrium, as opposed to steady state, is incorrect, then all the simulations which look at the alterations in CO2 due to human fossil fuel burning, based on an equilibrium model, are quite frankly bollocks.
      This is one of the biggest problems in this discussion. If you apply classical equilibrium thermodynamics to a steady state system, you fail.
      They are attempting to estimate the level of Lake Mead, based on rainfall and temperature data, little realizing that there is a river coming in and a river going out.

  63. WebHubTelescope

    You write:

    Archer’s work and others demonstrate the specific fat-tail behavior that they chart, and do it in as concise and simple a manner as possible.

    Yeah. But as has been pointed out Archer’s work is all model-based stuff rather than empirical data based on actual physical observations, and as the Lam study states:

    There exists no observation data to validate this value. In fact, it is not possible to experimentally measure the value of τL (and the claim of its constancy) unless reliable data taken over many centuries (with constant emission rate) are available.
    τL ≈ 400 years is the consensus value of all the published IPCC models.

    IOW the fat-tail is a model-derived assumption, which may or may not actually be real and, as has also been pointed out here, is irrelevant as far as the practical implications for our climate are concerned.

    Max

    • IOW the fat-tail is a model-derived assumption, which may or may not actually be real and, as has also been pointed out here, is irrelevant as far as the practical implications for our climate are concerned.

      The fat-tail is not based on some model but on reality. Let’s do an experiment.
      Put a small amount of easily diffused tracer (like a food coloring dye) in a volume of water. Place a small actuated leak at the bottom of the volume and a small drip at the top of the volume. Let the drip be water at a certain rate, and momentarily open up the leak after the dripped water fully mixes with the volume to exactly compensate for the amount in the drop.

      The incoming drip represents proportional outgassing of CO2 and the leak is proportional sequestering of CO2. The abstraction is that perfect mixing allows the mass balance to conceptually hold and that we can still identify new (water) versus old (dye) CO2 constituents.

      Measure the concentration by using glass to hold the water and use a laser and sensor in some configuration to detect the transmitted light. Calibrate the end points to the best of your ability.

      What you will find is that the concentration of dye over time will follow a fat-tail. It would be simple enough for a high-school physics project. The students will learn something about inferencing, dilution, the law of diminishing returns, and may even want to fit the results to some equation.

      The point is that we all have to chip in and try to understand what is going on at an elementary level. My goal is to find out if there are indeed some simple explanations for the fundamental parts of the puzzle.
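Under the stated perfect-mixing assumption, the drip-and-leak experiment can be sketched in a few lines. This is an idealised simulation with arbitrary parameters, not data from a real apparatus; with perfect mixing the decay is the thin-tailed room-purge exponential, so any fat tail has to come from departures from that idealisation (a spread of rates, imperfect mixing):

```python
import math

# Idealised drip-and-leak tank (assumed parameters): a volume V holds dye
# at concentration c; each step a small volume q of clean water drips in,
# mixes perfectly, and the same q leaks out. Under perfect mixing the dye
# follows the "room purge" exponential c(t) = c0 * exp(-q*t/V).

V = 1.0          # tank volume (arbitrary units)
q = 0.01         # drip/leak volume per step
c = 1.0          # initial dye concentration (normalised)

history = []
for step in range(1000):
    c *= V / (V + q)   # dilute by the incoming drop; the leak of mixed
    history.append(c)  # water leaves the concentration unchanged

expected = math.exp(-q * len(history) / V)   # continuous-limit prediction
print(f"simulated: {history[-1]:.6f}   exponential: {expected:.6f}")
```

The discrete simulation tracks the exponential closely; there is no fat tail in the perfectly mixed case.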

      • WebHubTelescope

        Your example is interesting (whether it proves anything or not).

        Inasmuch as the residence time of CO2 is not based on any empirical observations, I find it hard to believe that the “fat tail” can be quantified based on any empirical observations, either.

        However, the key point still remains that the “fat tail” is irrelevant as far as the practical implications for our climate are concerned.

        What is more of interest, is: what is the initial rate at which atmospheric CO2 is removed from the climate system?

        This initial rate tells us how quickly CO2 levels would start to level off once the removal rate equals the emission rate, assuming, of course, a) that Professor Salby is wrong and b) that human emissions are the only factor which perturbs an otherwise perfect equilibrium condition.

        Max

      • PS BTW the standard “half life” formula gives you the asymptotic relationship you have described in your dye experiment.

      • PS BTW the standard “half life” formula gives you the asymptotic relationship you have described in your dye experiment.

        Congratulations, absolutely correct. That is the basic Room Purge Equation. But what happens if the dye doesn’t mix as well as it should, and incoming water drops have a faster route to the leak, so that mixing is not perfect? Then that initial impulse response will show a fat tail, as the dye gets crowded out.

        But then again this will never show the really long permanent residence times that Bart is talking about. The point is that there are many rates to think about and this is only a first step to educate people (and me) on how to think about the problem.

      • WebHubTelescope

        Using the “half-life” concept and taking the upper end of the 80-120 year “half-life” range suggested by the Zeke Hausfather data, we get an annual CO2 “decay rate” (or rate of removal from the climate system) of 0.58% of the concentration. The concentration is 390 ppmv today, which means we should see a calculated instantaneous removal rate of around 2.2 ppmv per year.

        Our annual emissions today represent almost 2x this rate, but the atmospheric level increases by only a bit less than half of this amount.

        This raises the questions: Is the “missing” half “leaving our climate system” (as the half-life formula would suggest)? If so, where is it going? Would atmospheric CO2 concentrations level off if net additions to the climate system were reduced to around 50% of current levels?

        To me these are much more pertinent questions than worrying about a hypothetical “fat tail”, which no one will ever see in real life.

        [All of the above is with the caveat that Salby is shown to be wrong, which has not yet occurred.]

        Max
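The removal-rate arithmetic above can be verified directly. This sketch assumes the 120-year upper end of the quoted half-life range and today's 390 ppmv concentration:

```python
# Checking the comment's arithmetic (assumed inputs: 120-year half-life,
# the upper end of the quoted range, and 390 ppmv concentration).
half_life = 120.0                             # years
decay_rate = 1.0 - 0.5 ** (1.0 / half_life)   # fraction removed per year
concentration = 390.0                         # ppmv
removal = concentration * decay_rate          # instantaneous removal, ppmv/yr

print(f"annual decay rate = {decay_rate * 100:.2f}%  removal = {removal:.2f} ppmv/yr")
```

This reproduces the ~0.58% per year decay rate and the ~2.2 ppmv/yr instantaneous removal quoted above.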

      • I was thinking about what Bart was saying about the permanent sequestering sites and how they fit in. It seems that one could use a compartment model type of approach for modeling the flow of CO2 throughout the system. I think the climate scientists call this a box model.

        This morning, I built a compartment model as shown in the link below.
        http://img6.imageshack.us/img6/6378/co2compartmentmodel.gif

        This gives an impulse response behavior with the initial conditions of an excess pulse of CO2 in the atmosphere. The steady-state flows between the atmosphere and the dynamic stores are equal, with a time constant of 7 years, but the permanent sequestration has a time constant of 100 years. I can see how the IPCC Bern model comes about now.

        It didn’t register with me that they were using a compartment model until I started participating in this discussion, but then again it’s a slow learning process as I try to understand the dynamics of the carbon cycle.

        One thing missing is the rate from the permanent sites going back into the system. Bart said this is important or else the carbon would completely be removed from the system. I put the flows into the diagram below and made them 10 times as slow. The response does not change at this time frame, but sure enough it will reach an asymptote whereby the CO2 is spread between the compartments instead of going completely into permanent storage.
        http://img40.imageshack.us/img40/1183/co2compartmentmodel2.gif

        Thanks for the insight. I will likely update my full model which includes the convolution of the impulse response with the fossil fuel emissions. That is described in the link in my handle. I had been using a fat-tailed impulse response for the CO2 residence time, but this compartment model will be less of a heuristic because it is at least based on some better numbers, particularly the 7 year time constant.
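A minimal version of the described compartment model can be written down as follows. It is a sketch using the stated assumed rates (a 7-year atmosphere–store exchange time constant and a 100-year permanent-sequestration time constant), not the model behind the linked figures:

```python
# Two-box sketch of the compartment model described above (assumed rates):
# an atmosphere box exchanges with a dynamic store on a 7-year time
# constant, and the store leaks to permanent sequestration on a 100-year
# time constant. An initial pulse in the atmosphere then decays as a sum
# of exponentials, qualitatively like the IPCC Bern response: a fast
# partial drawdown followed by a slow long tail.

k_ex = 1.0 / 7.0       # atmosphere <-> dynamic store exchange rate (1/yr)
k_seq = 1.0 / 100.0    # dynamic store -> permanent sequestration (1/yr)

dt = 0.01              # Euler time step (years)
atm, store = 1.0, 0.0  # impulse: all the excess starts in the atmosphere
trace = {}
for i in range(int(round(500 / dt))):
    flux = k_ex * (atm - store)      # net fast exchange
    d_atm = -flux
    d_store = flux - k_seq * store   # store also leaks to permanent sites
    atm += d_atm * dt
    store += d_store * dt
    t = (i + 1) * dt
    for mark in (10, 100, 500):
        if abs(t - mark) < dt / 2:
            trace[mark] = atm

print(trace)  # atmospheric fraction remaining at t = 10, 100, 500 yr
```

After roughly a decade the fast exchange has split the pulse about evenly between the two boxes; the remainder then drains only on the slow sequestration timescale.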

      • Correction in the figure. I labeled the time axis wrong: instead of seconds, it should be years. Old habits die hard.

      • I hate to butt in where the other (and much smarter) Bart has done so much good, but I’m interested in “b) that human emissions are the only factor which perturbs an otherwise perfect equilibrium condition.”

        I don’t think it strictly true we need assume human activity being the only factor, merely new or dominant. That is, the straw that broke the camel’s back, or the big man on campus.

        Likewise, equilibrium condition seems premature, given our poor understanding of the ergodic processes making up the system over the past 800,000++ years. (230 +/-50 ppm, roughly cycling every 100k years)

        And narrowing our view to emissions only seems too limited. Humans do a lot more than spew crap into the carbon cycle that might alter the way the cycle works.

  64. Convective overshooting might be something for those arguing convection to consider.

    “Since the tropospheric air flux derived from CALIOP observations during Northern Hemisphere winter is 5–20 times larger than the slow ascent by radiative heating usually assumed, the observations suggest that convective overshooting is a major contributor to troposphere-to-stratosphere transport with concomitant implications for the Tropical Tropopause Layer top height, chemistry and thermal structure.”

    http://www.atmos-chem-phys-discuss.net/11/163/2011/acpd-11-163-2011-print.pdf

    • Interesting paper – convective jets high into the stratosphere.

      • I looked at this expecting to see some estimations of how this may affect energy transport to the stratosphere and really haven’t come up with much. I did find an interesting question in my readings. If storms control how much water is in the stratosphere, and the amount of water in the stratosphere has some significant control over the climate as per Solomon et al 2010, which one comes first? The decrease in stratospheric water by 10% after 2000, and the lack of warming attributed to the decrease in water per the Solomon paper, would indicate climate is not controlling storms but that storms may help control climate.

      • To be clear I should replace climate with temperature in my above comment.

      • As always – wherever you look there is more complexity.

      • Seems to fit well with Willis Eschenbach’s thermostat hypothesis.

      • Not really. I think Willis is stating warming will create more storms which would cause convective cooling. I think the mainstream view is warming will cause more storms which will add water to the stratosphere causing more warming. The decrease in stratospheric water content would indicate fewer storms despite the warming if in fact it is convective overshooting that controls the water content. But this is all as clear as mud to me.

      • Here’s what really controls the water content near the tropopause:
        http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png

        The amount of water vapour in the stratosphere above will wimble about, but the basic relationship is clear. The current level will hold up for a while due to oceanic heat release following the solar surplus of the late C20th. After 2015 I’d expect to see it falling further.

        Don’t put your old warm winter coats on ebay yet.

      • There is literature out there to support a connection between solar activity and thunderstorms. I wasn’t planning on getting rid of my coats anyway :).

      • You have to keep the stratospheric water vapor thing in perspective. Strat water vapor increased from about 4 ppmv in 1983 to a whopping about 4.1 ppmv in 2008 over Boulder Colorado, with that wicked peak of about 4.5 ppmv in 1998 plunging to about 4.1 ppmv. This resulted in an infinite increase from zero W/m^2 forcing to about 0.8 W/m^2 of forcing. The error bars are a touch large for my liking, but plenty of uncertainty for a research grant.

    • Yep, they would have a pretty big impact on radiative cooling don’t ya think. The stratospheric water vapor is just a bonus.

    In JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 110, D14105, doi:10.1029/2005JD005888, 2005, Mark Jacobson has a couple of estimates in Correction to “Control of fossil-fuel particulate black carbon and organic matter, possibly the most effective method of slowing global warming” that people may find relevant to this thread. Quoting from the article,

    “In the work of Jacobson [2002], it was assumed that the atmospheric lifetime of CO2 against all loss processes combined was between 50 and 200 years. This range is commonly used in the literature. However, the upper lifetime does not appear to be physical, even within the range of reasonable uncertainty, and the lower lifetime appears to be too high to explain the rate of change of the observed mixing ratio of CO2. On the basis of Figure 1 and uncertainties associated with it, it is assumed here that the lifetime of CO2 ranges from 30 to 95 years, although a more likely upper limit may be 50 or 60 years.”

    I.e. Mark finds 200 years implausible (“not physical”), and wants to reduce it to 95 years as an absolute upper bound, while considering it more likely that the range should be around 30-50 or 30-60 years.

    In view of the 220 GtC fluxes involved, these estimates don’t seem particularly out of line to me. While I wouldn’t have been willing to go out on a limb and say 200 years was impossible, that’s only for lack of expertise in such things, which Mark has plenty of, having been in the business a long time. Currently he’s Professor of Civil and Environmental Engineering at Stanford, Director of the Atmosphere/Energy Program there, a Senior Fellow at the Woods Institute for the Environment, and a Senior Fellow at the recently founded Precourt Institute for Energy. Not that I always agree with him, but there’s probably no one in the world that I always agree with, even when they know more than I do.

    • Sadly, for all that Jacobson is one of the world experts in black carbon and aerosols, he has no background in carbon cycle dynamics. He basically did an Excel spreadsheet calculation, with no underlying physical basis, to come up with a lifetime. See Archer et al. 2009 (http://geosci.uchicago.edu/~archer/reprints/archer.2009.ann_rev_tail.pdf, pg. 119): Archer specifically calls Jacobson out on this one, pointing out that Jacobson’s approach is only appropriate for gases which are expected to decay according to linear kinetics… which is not true of CO2.

      • Well, I’m considerably less of an expert even than Jacobson, let alone Archer and his ten coauthors on that paper, so it’s not really my place to call the outcome here.

        However climate scientists claim to be allowing skeptics to have their say, so even though I’m not a skeptic on most aspects of climate science, I am on this one. Here’s my reasoning, I’m happy to have it shot down by the experts.

        But before I do let me first state my problem with the reasoning of Archer et al, and of everyone they cite.

        1. They point to the PETM as an example of what we can expect when it’s no such thing. The problem I see with taking the PETM as a precedent is that both its onset and decay took many thousands of years!

        Even though the resolution of the evidence for the detailed progression of the event is no better than 800 years (corresponding to a granularity of 1 cm in the strata), this was sufficient to see that the event came and went in a time frame of many thousands of years. You cannot draw an analogy between modern warming and the PETM because modern warming is happening hundreds of times faster than the PETM did.

        2. Those predicting a long residence time seem to be assuming that the CO2 “slug” (the result of a spike in CO2) has had millennia to embed itself in the planet. But that’s not the scenario that’s going to happen. Instead we will transition off fossil fuel within the century, either because we exhaust it or because it ceases to be as economical as the alternative energy sources. In any conceivable scenario the slug will not have had time to embed.

        Now for my reasoning, by all means shoot it down if it seems ridiculous.

        Consider a dry porous rock. If you wet it for one second it dries much faster than if you wet it for an hour. This is because water that had only a second to penetrate the rock is not going to end up far from the surface, and can easily escape. If it had an hour to get inside the rock is now going to take an hour, or perhaps two, to get back out.

        The general principle here is that rates of flow into and out of a natural system should be more or less commensurate. It surely will not take ten hours to get a porous rock reasonably dry after one hour of immersion.

        Same thing with reversible chemical reactions. If you unbalance the equilibrium of a reaction, it will take its time responding. If you reverse the imbalance by the same amount, the reaction will run back to its starting point at about the same speed as for the excursion.

        If you don’t believe this, I would love to see a physical demonstration of a counterexample that bears any plausible relationship to how CO2 is absorbed and emitted in liquids and suitably permeable solids.

        The reason we can expect to see CO2 fall quickly even without assistance from vegetation is because it rose quickly.

        Any assistance from vegetation will speed this up even more.

        So, what’s wrong with that reasoning? And if there’s something wrong with it, where is the proof in Archer et al or any other paper that CO2 injected quickly will take orders of magnitude longer to come back out? All they prove is that if you keep emitting CO2 for millennia it will take millennia to come back out when you eventually stop doing so.

        That time frame just isn’t going to happen.

        I just don’t buy it.

      • Doc, you had me worried with the sweater hug analogy, but you are back in true form with the rock wetting.

      • I don’t think the “embedding slug” makes much sense in this situation: the CO2 is going into the atmosphere and getting well-mixed around the planet, whether it is emitted in one slug, or slowly over time. The only difference between emitting it slowly over time and in one slug is the peak concentration: if you emit it in a slug the concentration will go very high, and then drop relatively quickly. If you emit it gradually, the concentration will go up very slowly. But the concentration in 1000 years is going to be almost entirely a function of the cumulative emissions over the previous 1000 years without much sensitivity to _when_ the CO2 was emitted (assuming that they aren’t all emitted in the last 100 years).

        The modeling simulations in Archer et al (and also in the National Academies Stabilization Targets report: http://www.nap.edu/catalog.php?record_id=12877) have been done for both slugs and gradual releases.

        Think of it more like one bucket with a very small hole in the bottom, on top of a second, 5 gallon bucket: if I pour 10 gallons of water into the top bucket, the second bucket will fill, and the top bucket will be left with 5 gallons in it. If I pour the 10 gallons quickly, then the top bucket will temporarily have 10 gallons, which will slowly drain into the lower bucket until there are only 5 gallons left. If I pour the 10 gallons slowly, then the top bucket will never exceed 5 gallons, but it will still get there. Not a perfect analogy, because the size of the second bucket is really a function of the total water in the system (e.g., for any given quantity of CO2, there is a total quantity of CO2 that will end up in the ocean after the ocean and atmosphere have finally reached equilibrium).

        (the problem with your dry porous rock example is that there is an infinite atmosphere for the water to evaporate into. Also, that you aren’t keeping the amount of water you’ve added constant in your two examples)
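        The bucket picture can be put into a few lines of code (a toy sketch; the hole-size rate constant and the 5-gallon equilibrium split are illustrative assumptions, not anything from the comment):

```python
# Toy two-bucket model: the top bucket drains through a small hole until
# only its "equilibrium" share (5 of the 10 gallons) remains.
def drain(top, lower, rate=0.05, equilibrium=5.0, steps=200):
    """Each step, flow is proportional to the excess above equilibrium."""
    for _ in range(steps):
        flow = rate * max(top - equilibrium, 0.0)
        top -= flow
        lower += flow
    return top, lower

# Fast pour: top bucket starts at 10 gallons and decays toward 5;
# a slow pour would approach the same 5/5 split without the overshoot.
top, lower = drain(top=10.0, lower=0.0)
print(round(top, 2), round(lower, 2))  # -> 5.0 5.0
```

        The point of the analogy survives in the model: the endpoint depends only on the total amount poured, not on how fast it was poured.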

      • In mathematical terms I think what you are saying is that if you have a convolution of two functions, one with a slow response and the other a fast response, the slow response will eventually win out. Convolution has the nice property in that it will conserve mass of the two functions.

        \int_0^t e^{-a(t-x)} e^{-bx}\,dx = -\frac{e^{-at}-e^{-bt}}{a-b}
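        The closed form can be sanity-checked numerically (a quick sketch; a and b are arbitrary fast and slow rate constants):

```python
import math

def conv_two_exponentials(a, b, t, n=100_000):
    """Midpoint-rule evaluation of the convolution integral above."""
    dx = t / n
    return sum(math.exp(-a * (t - (i + 0.5) * dx)) * math.exp(-b * (i + 0.5) * dx)
               for i in range(n)) * dx

a, b, t = 2.0, 0.1, 5.0                 # fast response a, slow response b
numeric = conv_two_exponentials(a, b, t)
closed = -(math.exp(-a * t) - math.exp(-b * t)) / (a - b)
print(abs(numeric - closed) < 1e-4)     # -> True
```

        At large t the slow e^{-bt} term dominates the closed form, which is exactly the “slow response eventually wins” point.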

  66. TimTheToolMan

    Perhaps you’re asking the wrong question, Judith.

    If anthropogenic emissions of CO2 were to stop entirely tomorrow, would the levels of CO2 continue to rise or would they fall? At what rate would they rise or fall?

    I’m thinking Salby would punt for “continue to rise” whereas I’m thinking AGWers might punt for “would fall until temperature/CO2 equilibrium were re-established (temperature would be rising during this time, CO2 would fall and they’d meet in the middle) and then CO2 levels would rise again along with the temperature”

    • TimTheToolMan

      If anthropogenic emissions of CO2 were to stop entirely tomorrow, would the levels of CO2 continue to rise or would they fall? At what rate would they rise or fall?

      I do not believe that anyone (Salby included) would conclude that “they would rise more rapidly” as a result of stopping human emissions.

      He might conclude that “they would rise more slowly” or that “they would fall ever so slightly” (since the net impact of human CO2 emissions is very small in the overall carbon cycle, so stopping them would also have a very small effect).

      But even adding water into a bucket with an eye dropper at a rate slightly higher than the rate of evaporation from the bucket would theoretically cause the level to rise, and stopping the eye dropper would cause it to gradually lower based on the evaporation rate.

      IPCC models (AR4 WG1) have assumed that if human GHG (and aerosol) emissions had stopped in 2000, the concentration would be kept constant at 2000 levels, yet a further warming of about 0.1C per decade would be expected for the first two decades, with warming of 0.6C expected from 2000 to 2100. This is based on the “pipeline” hypothesis of Hansen et al., which, in turn, is based on GISS models and circular logic, but has not been validated by empirical data from actual physical observations (but that’s another story).

      To your last question: No one knows at “what rate would they rise or fall”.

      IPCC does not give us much help. On one hand it tells us that the CO2 residence time in our climate system is “5 to 200 years”, and on the other the IPCC models apparently have all used a residence time of 400 years in the projections (Lam, 2011).

      If one accepts the half-life data of Zeke Hausfather (which checks roughly with the Archer model study), one arrives at a half-life of CO2 in the climate system of 80-120 years. If one takes the upper value, this means that the instantaneous decay rate would equal 0.58% of the concentration in the first year and continue at 0.58% of the annual concentration thereafter.

      This equals a reduction of around 2 ppmv in the first year and slightly less each year thereafter.
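      The arithmetic behind the 0.58% figure is just the half-life-to-rate conversion (a sketch; like the comment, it applies the rate to the full 390 ppmv):

```python
import math

half_life = 120.0                    # years, the upper "ZH" estimate
k = math.log(2) / half_life          # instantaneous decay rate, per year
print(round(100 * k, 2))             # -> 0.58 (percent of concentration per year)

concentration = 390.0                # ppmv today
print(round(k * concentration, 1))   # -> 2.3, i.e. the "around 2 ppmv" removed in year one
```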

      We are now at 390 ppmv in the atmosphere, humans are emitting an equivalent of around 4 ppmv per year and the atmospheric concentration is increasing on average by around 2 ppmv/year.

      So you can make the calculation (for what it’s worth), but I believe that it is all much more complicated than that, and no one knows the answer to your question.

      Max.

  67. Konrad | August 29, 2011 at 5:34 am |
    Vaughan,
    I find your questions of Tallbloke puzzling. If you wanted to test if backscattered LWIR can slow the cooling of water that is free to evaporatively cool how would you do it? Fool around with incomplete ocean temperature measurements from disparate sources involving small quantities of LWIR acting over years? Or would you just directly test the impact of a larger amount of LWIR on a small sample of water that is free to evaporatively cool?

    How exactly would you do that?

  68. manacker | August 28, 2011 at 6:10 pm | Reply
    JimD

    I would suggest that maybe you do not really know with 100% certainty that:

    The rate of removal doesn’t depend on the full concentration (390 ppm), it depends on the difference from the atmosphere-ocean equilibrium ratio. Once that is reached, the removal rate stays proportional to the atmospheric increase rate to maintain that ratio.

    to start off with, as there may be other mechanisms which could be removing CO2 from the climate system besides the ocean.

    Then I would suggest that the “atmosphere-ocean equilibrium ratio” itself is certainly related to the atmospheric concentration or partial pressure (as well as the temperature of the ocean, of course).

    Suffice it to say that it is logical to assume that the “rate of removal” from the climate system is correlated with the atmospheric concentration, so that if 2 ppmv/year are being “removed” today at 390 ppmv concentration, we can assume that a lesser amount would be removed at 300 ppmv, for example (if not, there goes the “fat tail” argument), and conversely a greater amount would be removed annually if concentration rose to 500 ppmv.

    The exchange with the ocean surface is a reversible process so if the atmospheric [CO2] were increased by 4ppmv then the rate of absorption by the ocean would increase by a factor of 394/390 (assuming Henry’s Law applies for simplicity). The rate of desorption would gradually increase as the [CO2]ocean increases, based on the data the rates will match at about [CO2]atmos ~392ppmv. If the ocean temperature changes the equilibrium value will change by about 10ppmv/ºC.
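    The relative-rate argument can be made concrete (a sketch assuming Henry’s Law, as above; the ~392 ppmv balance point is taken from the comment, not derived here):

```python
# Absorption scales with atmospheric pCO2 (the Henry's Law assumption).
atm_before = 390.0                    # ppmv
atm_after = atm_before + 4.0          # ppmv after the added pulse

absorption_factor = atm_after / atm_before
print(round(absorption_factor, 4))    # -> 1.0103, i.e. the 394/390 factor

# Rates re-match at ~392 ppmv, so roughly half the pulse ends up dissolved:
equilibrium = 392.0
print(atm_after - equilibrium)        # -> 2.0 ppmv absorbed by the ocean
```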

    • Phil

      You are discussing “the rate of removal of CO2 from the atmosphere by the ocean” (viz. Segalstad with a residence time of 5-10 years).

      This will be largely driven by atmospheric CO2 concentration, to a lesser extent by CO2 concentration in the ocean (due to the larger reservoir, the chemical buffering processes and biological processes at work there), and to an even lesser extent by a slow increase in ocean temperature, if and when this occurs. All of this is OK, as you write.

      But I was discussing the rate of removal of CO2 from the climate system by all processes (suggested residence time 5 to 200 years, per IPCC).

      I would suggest that these are two different things.

      Max

      • While this is far from a subject I think important, Dr. Pratt used an interesting analogy of a porous rock, if you pour a little on for a short period, it should take a short time to evaporate out. Pour a little on for a long time, it will take a long time to evaporate out. So about as long as it took CO2 to build up will be roughly as long as it will take to get out. Pick your initial level and the time from then, you have a fair estimate of how long it would take to reduce to that level. Not perfect by any means, but close enough for government work. That’s atmospheric life time though, residence time is more subjective, so the general 5 to 20 years can all be valid, the higher end 15 to 20 years would be the best to use due to uncertainty.

        There are some things you don’t need an exact answer for; just a reasonable estimate will do.

      • Dallas

        Your analogy (from Dr. Pratt) makes sense to me.

        As we know, this is all very interesting, but a purely hypothetical discussion as human CO2 emissions are not going to stop.

        It could well be, however, that the annual emission rate is reduced and that a higher atmospheric concentration is reached at which point net in = net out.

        IF a) we accept the ZH “half-life” estimate of 120 years, b) assume that human emissions are reduced by 25% below today’s level as fossil fuels become more expensive and economically viable alternatives are developed, and c) assume (as IPCC does) that the only net addition to the climate system is from human emissions, we would need to reach an atmospheric level of around 560 ppmv to level off (net in = net out).

        Others may disagree, but I have seen no better estimates.

        Max

      • That is perfectly reasonable. The exact value may vary, but you have a reasonable range for both depletion of fossil fuels and replacement of fossil fuel with alternatives that can increase the rate of emission reduction. I think the “residence” time of 15 to 20 is more important for decision making. The longer term prediction more a gauge of success/failure of policy.

      • Dallas

        Just to carry the ridiculous calculation to its extreme, assume the 25% reduction in human CO2 emissions occurs at a rate of 0.5% per year starting in 2030 until it reaches 25% less than today’s value (25.5 GtCO2/year around 2088), “equilibrium” (net in = net out) would be reached around year 2580 at 560 ppmv concentration, at which level it would remain.
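        The “net in = net out” level can be reproduced with a one-line balance (a sketch, not the commenter’s exact arithmetic; the GtCO2-per-ppmv conversion factor is my assumption):

```python
import math

half_life = 120.0                  # years (the ZH estimate used above)
k = math.log(2) / half_life        # fractional removal per year (~0.58%)

emissions_gt = 25.5                # GtCO2/yr, i.e. 25% below today's level
gt_per_ppmv = 7.8                  # ~GtCO2 per ppmv of atmospheric CO2 (assumed)
emissions_ppmv = emissions_gt / gt_per_ppmv

# Concentration at which removal (k * C) balances the emissions:
equilibrium = emissions_ppmv / k
print(round(equilibrium))          # lands close to the ~560 ppmv quoted above
```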

        Yawn!

        Max

      • Yeah, the only way to make a major change would be to do the impossible, but a rapid reduction, if possible, should show the change from that point decreasing at roughly the rate of the previous increase. The “residence” time just lets us know we are doing something that probably will not be noticeable unless we do a lot fast. All this though is based on conditions that resulted from a temperature rise caused naturally and by us (don’t discount land use). How much of each? If we do get a downward temperature trend of any significance, the numbers will change.

      • An even better analogy, but one that is more high tech than rocks, is the diffusion of dopants into a semiconductor wafer. If you lay a thin sheet of dopants on the surface and watch the concentration over time (while it is annealed at a high temperature of course) you will see that surface layer concentration change over time in a very characteristic fashion.

        I did research as a postdoc on some of the strangest diffusion configurations and when it comes down to it, the behaviors are very predictable. They had to be, because eventually the process people had to characterize the profiles accurately to get the chip yield up.

        The simplest solution is the zero-drift variation of the Fokker-Planck equation.
        1/sqrt(2*pi*t)*exp(-0.5*x^2/t)

        Notice that if you fix the x term at a point near the surface x~0, then the exponential quickly saturates to 1 with time, and you are left with the 1/sqrt(t) as a tail. This is pretty standard stuff in the microelectronics world but I never really understood the significance of this WRT to CO2 until I started listening to you guys on this thread. The natural sequestering of CO2 to deeper sites is really the same thing, and all we are watching is the waning profile of the excess airborne and excess surface carbon as they diffusionally mix in a quasi-steady-state.
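        That saturation is easy to see numerically (a sketch of the zero-drift kernel quoted above, in arbitrary normalized units):

```python
import math

def surface_concentration(x, t):
    """Zero-drift kernel 1/sqrt(2*pi*t) * exp(-x^2 / (2*t))."""
    return math.exp(-0.5 * x * x / t) / math.sqrt(2 * math.pi * t)

x = 0.01                                  # a point near the surface, x ~ 0
for t in (1.0, 100.0, 10_000.0):
    c = surface_concentration(x, t)
    # Dividing out the 1/sqrt(2*pi*t) prefactor leaves ~1: a pure 1/sqrt(t) tail.
    print(t, c, round(c * math.sqrt(2 * math.pi * t), 4))
```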

        Think about it. All that carbon due to fossil fuels that we extract had to essentially do a random walk to get below the surface via a diffusional process. That gives you an idea of the long time-scales involved. For some reason, I was thinking that the long time scales occur as a result of atmospheric diffusion, but those are actually fast and the turnover rate of CO2 is a testament to needing to consider the below-surface explanation.

        Now I think I understand the carbon-cycle from a statistical physics perspective. YMMV.

      • Webby,

        “All that carbon due to fossil fuels that we extract had to essentially do a random walk to get below the surface via a diffusional process. ”

        There are two processes that people believe created hydrocarbon fuels, biotic and abiotic. Biotic is based on the idea that large amounts of biological mass gets buried due to tectonic activity and chemically changed under heat and pressure and time. Abiotic claims the deeper generation by chemistry without biologic sources. Neither postulate a diffusion process for gas to penetrate the earth and be changed, although this would seem to be possible in small amounts.

      • Neither postulate a diffusion process for gas to penetrate the earth and be changed, although this would seem to be possible in small amounts.

        Diffusion is the model of random walk. Tectonic activity can move the organic and inorganic carbon in any direction, therefore it classifies as a random walk and the process can be described as diffusion. How else to explain why organic carbon is found well underground and also near the surface?

        When carbon gets transformed into oil, the oil will try to migrate upward due to the tremendous pressure at depth. It gets blocked by salt domes and other structures where it stays put over the millennia. Some deeper discussion is found in the link on my comment handle.

        Abiotic claims the deeper generation by chemistry without biologic sources.

        That is in regards to hydrocarbons. I don’t think it is worth bringing up abiotic oil, as it is just a red herring more suitable for the Art Bell crowd. It might work to explain simple hydrocarbons such as methane on Mars but not complex hydrocarbons.

      • I see, you aren’t talking about just gas diffusion then. You are also talking about geological processes. Might reasonably fall under random walk, but, I don’t find too much literature that calls tectonics diffusion.

      • Yeah, I was misdirected by thinking about gas diffusion for the longest time. But then Manacker and Bart started laying out the deep processes involved and everything started clicking in place for me.

      • I am certainly no expert on much of anything so, what do you think of the claims by these guys:

        http://www.gasresources.net/
        (try the Scientific Publications link on the left)

        They seem to be pretty sure that thermodynamics prevents oil from being produced at such shallow levels unless you know of a catalytic or other process that would make you very rich??

      • You can also look up the work of Dr. Thomas Gold. He was a big proponent of abiotic oil before he died. He also predicted that the moon was covered by a thick layer of soft dust and the lunar lander would sink without a trace.

        This link is a discussion I had on my blog several years ago concerning abiotic oil. http://mobjectivist.blogspot.com/2008/02/wing-nut-oil.html
        I did provoke it with my confrontational post, but the commentary gets a bit absurd.

      • Tommy Gold has been found to be correct in a number of areas, such as neutron stars and the deep hot biosphere; his heuristic arguments went against the party line, so to speak, e.g.:

        Another area where it is particularly bad is in the planetary sciences where NASA made great mistakes in the way in which they set up the situation. NASA made the grave mistake not only of working with a peer review system, but one where some of the peers (in fact very influential ones) were the in-house people doing the same line of work. This established a community of planetary scientists now which was completely selected by the leading members of the herd, which was very firmly controlled, and after quite a short time, the slightest departure from the herd was absolutely cut down. Money was not there for anybody who had a slightly diverging viewpoint. The conferences ignored him, and so on. It became completely impossible to do any independent work. For all the money that has been spent, the planetary program will one day be seen to have been extraordinarily poor. The pictures are fine and some of the facts that have been obtained from the planetary exploration with spacecraft – those will stand but not much else

        .

      • Money was not there for anybody who had a slightly diverging viewpoint.

        Anybody that whines about their own state of being ignored scores the most points on the Crackpot Index. Gold goes on and on about this.

      • Webby,

        on the site I linked asking you the question, you will find a statement claiming Gold stole much of his information and ideas about abiotic oil from the Russian literature. Try dealing with the science rather than personalities.

      • Whether you like it or not there is an unwritten rule in scientific research circles that relates to baseball. You get at most 3 strikes before you are out and people stop listening to you.

      • Gold was no crackpot. He was correct (although he did not have priority) on neutron stars, i.e. that the physics of the standard model were nonsense, e.g. Eddington’s Diophantine approximations. That they were complete nonsense is well documented in the literature, e.g. Landau 1932, Peierls 1936, Chandrasekhar 1953.

        The deep hot biosphere is also well documented; there is a good review by Paul Davies. The surprise is the extent of the biomass, around 10-20% of the existing estimates of total biomass. That these organisms are chemolithotrophic allows legitimate questions about the estimates of carbon cycling in paleoclimatic studies, such as the sulphur problem, e.g. the PETM.

        Abiogenic hydrocarbons seem to have moved into the mainstream literature.

        http://www.nature.com/ngeo/journal/v2/n8/abs/ngeo591.html

      • With the right stoichiometry, adding energy via pressure to an assembly of molecules under laboratory conditions can potentially create byproducts of that same stoichiometry. It’s really a matter of whether it actually does under non-laboratory conditions. We know that methane can be abiotic, simply because it exists on Mars and there is no life on Mars.

        Practicality and rationality plays a role in all this. Consider that the abiotic advocates use it as an argument to advance the concept of “limitless” oil.

        Remember that no petroleum is found below a certain depth, as the pressures will turn the remains to graphite and diamond. The crustal volume of the earth has been pretty much explored. Oil can only collect in certain structural volumes that provide capture basins for liquid moving over millennia. That means if there is a source of abiotic oil, it won’t matter, because the process of new abiotic oil creation is so slow as to be practically insignificant. And we watch these reservoirs like a hawk because they are all we got.

        Somebody suspected that abiotic oil was appearing in Eugene Island in the Gulf, because of a resurgence in production after a lull. They looked and then found nothing related to abiotic. It was just replenishment from a deeper reservoir.

        Why chase ghosts?

      • Webby,

        You talk a great game but do not even pretend that you looked at the Russian literature on abiotic oil linked at the site I posted. When you decide to actually deal with their issues involving the destruction of oil at the heat and pressure where it can be brought up to the energy levels required to form it in the first place, then we can move on. Or do you have that mythical catalyst and are hiding it till you get offered more money? Until then you have whiffed another one, and biotic oil is still the myth that it has always been.

        Lotsa people like to slam big oil for their buying science. Why do you think they may not have bought science to prove oil is biotic and therefore LIMITED, to keep the price up??

        You talk a great game but do not even pretend that you looked at the Russian literature on abiotic oil linked at the site I posted. When you decide to actually deal with their issues involving the destruction of oil at the heat and pressure where it can be brought up to the energy levels required to form it in the first place, then we can move on. Or do you have that mythical catalyst and are hiding it till you get offered more money? Until then you have whiffed another one, and biotic oil is still the myth that it has always been.

        I find enough new science statistics that I don’t need to chase stuff down a rabbit hole.

        Lotsa people like to slam big oil for their buying science. Why do you think they may not have bought science to prove oil is biotic and therefore LIMITED, to keep the price up??

        I have my own set of mathematical models that has usage way beyond proving that what some Russians are saying is right or wrong. Big whoop, plenty of people believe weird things. I don’t have the desire to chase all these people down.

      • Mr. Webby,

        how can you even pretend to have knowledge on a particular subject if you haven’t read a reasonable amount of the literature? Are you really suggesting that the US mainstream literature on the subject is the only literature worth considering?

        HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

        how can you even pretend to have knowledge on a particular subject if you haven’t read a reasonable amount of the literature? Are you really suggesting that the US mainstream literature on the subject is the only literature worth considering?

        HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

        No skin off my nose, go ahead and laugh.
        You would be surprised how often the neophyte grad students pick up on some novel approach right away. They aren’t necessarily up on the literature but they somehow come up with a fresh idea that no one has applied before.

  69. could you get me out of the spam filter?

  70. Chief Hydrologist,
    Your comment at http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-106593
    Please continue.

    • Yes it is pretty neat how we have been able to continue a discussion on CO2 residence times and really make some progress in our common understanding.

      Now when I look at a figure like the one below, I note that both sides are close to being correct. They just have different definitions on what they are correct about.
      http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7895f54970b-pi

      Interpretation of meaning should be one of the last things we have to think and worry about, but unfortunately that is the way it is.

    • Chief Hydrologist,

      When you add more CO2, say, by burning fossil fuel, the CO2 leaves the chimney or the stack at a temperature of about 110C, or, from an open fire, leaves the fireplace at the combustion temperature; in both situations the CO2 rises and heats up along its path up the atmosphere until equilibrium at a certain height. AGWers’ concern is reaching the steady state, i.e. all CO2, O2, N2, etc. locally and globally thoroughly mixed at the same temperature, with added CO2 at the same temperature.

      So start from here, when you said: “But it seems to me to be the case that the atmosphere in general will be more energetic if there are more molecules that interact with photons in the IR frequencies.” You may like to elaborate on that. It seems to me that the atmosphere will not be more energetic when more CO2 is added at the same atmospheric temperature. Only if the atmosphere gains energy will it be more energetic; same temperature, no gain. Correct me if I am wrong.

      • Hi Sam,

        Anthropogenic CO2 is hotter than natural sources – but it is a small part of the total flux. We are also looking at water vapour which rises by convection and cools as it does. It may be easier to look at the radiant balance at the top of atmosphere.

        Ultimately the radiant balance at the top of atmosphere is a simple function that describes warming and cooling:

        Ein/s – Eout/s = d(GES)/dt

        Ein/s and Eout/s are the average radiant flux (unit energy) in and out respectively in a period. d(GES)/dt is the instantaneous rate of change of global energy storage (GES) – mostly as heat in atmosphere and oceans. A positive rate of change and the planet warms and vice versa – energy is conserved.
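        The bookkeeping in this balance can be sketched with toy numbers (the 0.5 W/m2 imbalance is purely illustrative, not a measured value):

```python
# Ein/s - Eout/s = d(GES)/dt, integrated over one year.
e_in = 340.0                      # average radiant flux in, W/m^2
e_out = 339.5                     # average radiant flux out, W/m^2
imbalance = e_in - e_out          # positive -> the planet warms

seconds_per_year = 3.156e7
earth_area = 5.1e14               # m^2
joules_per_year = imbalance * earth_area * seconds_per_year
print(f"{joules_per_year:.1e}")   # -> 8.0e+21 J/yr added to global energy storage
```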

        Ein changes a little bit in the 11 year solar cycle and perhaps a little more in the longer term. Eout changes a lot with cloud in the short term, to a lesser degree with aerosols and greenhouse gases – and with ice in the longer term. There is a change in reflected SW of 85 W/m2 between snowball earth and the blue green planet.

        If greenhouse gases are added to the atmosphere – the atmosphere and oceans warm because of the IR photons interacting with more molecules -but the atmosphere continues to emit energy at the same rate for the very same reason. If greenhouse gas concentrations in the atmosphere didn’t change – and all other things remained the same – Ein and Eout would quickly come into equilibrium and the planet would not warm or cool.

        There seems little doubt that increased greenhouse gas molecules in the atmosphere result in higher temperatures on the planet. This can’t be seen in the radiative flux record – which shows warming in the SW and cooling in the LW.

        The ‘residence time’ is of course a different issue. That can’t be decided without knowing the dynamics of silicate weathering on a global scale, of terrestrial photosynthesis and respiration and of ocean phytoplankton biology. The latter is influenced by the dynamics of deep ocean upwelling. Chemistry is secondary to biological processes in the carbon cycle.

      • I am not sure if this WordPress blog has a Latex plugin so the equations may not turn out right, but here goes.

        The ‘residence time’ is of course a different issue. That can’t be decided without knowing the dynamics of silicate weathering on a global scale, of terrestrial photosynthesis and respiration and of ocean phytoplankton biology. The latter is influenced by the dynamics of deep ocean upwelling. Chemistry is secondary to biological processes in the carbon cycle.

        Methane emissions have also risen in the recent past and according to measurements of atmospheric concentration of CH4, it has risen quite dramatically:
        http://www.lenntech.com/images/methem.jpg

        Even though methane residence time is short, it decomposes to CO2 in the end
        CH_4 + 2 O_2 = 2 H_2O + CO_2

        So what is the meaning of residence time? CO2 has a residence time of anywhere from 2-10 years, but if you look at the numbers, so does CH4:
        http://www.gly.uga.edu/railsback/Fundamentals/AtmosphereCompV.jpg

        The big distinction is that CO2 takes a long time to semi-permanently sequester. That is what the carbon-cycle is all about. So to understand methane and CO2 long term we have to understand the forcing function of CO2.

        The following graphic illustrates a slab calculation of CO2 dynamics I did the other day, which describes the long term excess CO2 levels modeled as an impulse response.
        http://img534.imageshack.us/img534/9016/co250stages.gif

        The overall envelope of the CO2 that does not diffuse to the deeper layers into more permanent storage looks very similar to the solution of the Fokker-Planck master equation with no drift. The solution with a normalized diffusion coefficient looks like:
        \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{x^2}{2t}\right)

        The key term is the lead inverse power law with time. That is a very fat tail, which explains the apparent long residence time of CO2. Indeed CO2 will recycle completely in the atmosphere within 10 years, but the sequestering of the excess takes much longer. The layer of carbon at the surface is where the mixing takes place, and until that gets semi-permanently buried, the excess will keep coming back. (And I say semi-permanently because if it was permanent, we wouldn’t have the carbon cycle over a geologic time span).
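        The contrast between that fat tail and a simple exponential lifetime can be shown in a few lines (a sketch; the 10-year lifetime and the time units are arbitrary):

```python
import math

def fat_tail(t):
    """Diffusive sequestering: the excess envelope falls off as 1/sqrt(t)."""
    return 1.0 / math.sqrt(t)

def single_lifetime(t, tau=10.0):
    """Simple exponential decay with a ~10 year residence time, for contrast."""
    return math.exp(-t / tau)

for t in (1, 10, 100, 1000):
    print(t, fat_tail(t), single_lifetime(t))
# By t = 100 the exponential excess is essentially gone (e^-10), while the
# fat tail has only dropped by a factor of 10 -- the apparent long residence.
```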

        So the methane residence time quickly turns into a CO2 residence time after the methane decomposes. And that has implications as a long-term forcing function.

        Doing this exercise was a very edifying experience, as I never thought of it from this perspective. Even though I did all sorts of semiconductor diffusion experiments and calculations working as a postdoc at a research lab, I never made the obvious connection to the solid-state situation. It isn’t the residence time in the atmosphere that is important but the fluctuating excess between the atmosphere and surface as the CO2 diffuses underground or to the deep ocean.

      • There is a fundamental disparity of approach. You use simple math for a complex system with little-understood fundamentals. You need first of all to understand the system without thinking in terms of analogies. The response function is not applicable; the fluxes change for reasons other than anthropogenic emissions.

        ‘Feldman remembers watching the first dramatic example of SeaWiFS ability to capture this unfold. The satellite reached orbit and starting collecting data during the middle of the 1997-98 El Niño. An El Niño typically suppresses nutrients in the surface waters, critical for phytoplankton growth and keeps the ocean surface in the equatorial Pacific relatively barren.

        Then in the spring of 1998, as the El Niño began to fade and trade winds picked up, the equatorial Pacific Ocean bloomed with life, changing “from a desert to a rain forest,” in Feldman’s words, in a matter of weeks. “Thanks to SeaWiFS, we got to watch it happen,” he said. “It was absolutely amazing — a plankton bloom that literally spanned half the globe.” http://www.sciencedaily.com/releases/2011/04/110404131127.htm

        CO2 increases with increasing temperature – and is sequestered with La Niña. Or perhaps it is the other way round.

        If the system of carbon cycling is not understood in any detail – it cannot be modelled. There are dozens if not hundreds of trophic pathways in the carbon cycle – carbon is of course the fundamental building block of life.

        Carbon doesn’t ‘diffuse’ to the deep ocean – it sinks in both organic form and as calcium carbonate, adding to storage in the deep ocean and in sediments. Some 70 million GtC, for instance, sits in permanent storage in sediment.

        I don’t even know what you mean by diffusion into soil – perhaps you could explain?

      • Diffusion is diffusion and it is observed on just about any level. The effect is most observable where there is not a strong field gradient to move it along by convection. From the motion of gas molecules, to the movement of micro-organisms, to the staggering of a drunk in a stupor, diffusion is really a model of random walk that has no preference to move in any one direction. Carbon can get buried in soil, oil can migrate underground, CO2 can find its way to the bottom of an ocean, all through the movement caused by a random walk. The fact that carbon can migrate back up through eruptions and seeps is part of the cycle.

        Random walk is naturally balanced so it will tend to equalize probability density in space in accordance with maximizing entropy.

        Mathematically, because the effect is linear, it doesn’t matter if we don’t know all of the mechanisms, as the effect will average out and give a mean diffusion coefficient.

        BTW, geologists, civils, and petros call the Fokker-Planck equation Darcy’s law; see the link in my handle for more info. The gravity head often gets in the way of seeing the effect if it is a liquid, but carbon will largely be buoyant.
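The random-walk picture described above can be sketched numerically. This is a minimal illustration of the general idea (my own toy, not anyone’s published model; walker count, step count, and seed are arbitrary): unbiased walkers spread with an RMS displacement that grows as the square root of time, the defining signature of diffusion.

```python
import random
import math

def random_walk_spread(n_walkers=2000, n_steps=400, seed=1):
    """Simulate unbiased 1D random walks and record the RMS
    displacement at two checkpoint times."""
    rng = random.Random(seed)
    positions = [0] * n_walkers
    checkpoints = {}
    for t in range(1, n_steps + 1):
        for i in range(n_walkers):
            positions[i] += rng.choice((-1, 1))  # no preferred direction
        if t in (100, 400):
            rms = math.sqrt(sum(x * x for x in positions) / n_walkers)
            checkpoints[t] = rms
    return checkpoints

spread = random_walk_spread()
# RMS displacement grows like sqrt(t), so quadrupling the elapsed
# time should roughly double the spread.
ratio = spread[400] / spread[100]
```

Nothing here depends on the microscopic mechanism, which is the point made above: any unbiased hopping process averages out to the same diffusive behavior.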

      • The system is non-linear.

      • How non-linear is it? I assume linearity insofar as one can superimpose multiple solutions at different diffusion coefficients and the fat-tail profile will come out essentially the same. I have the Fokker-Planck master equation worked out for this case; check the link on my comment handle (see page 505 of the TOC).

        If the diffusion is non-linear, then we have to describe how random walk is non-linear. There are other types of random walk, such as the Levy Flight. Yet Levy flights are even more disordered and will have more of a fat-tail, as they also have a declining power-law property.

        Random Walk
        http://www.grunch.net/synergetics/images/random3.jpg

        Levy Flight
        http://sethgodin.typepad.com/.a/6a00d83451b31569e2012877573fb6970c-800wi
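The superposition claim a few comments up can be checked directly. As a hedged sketch (my own illustration, with an arbitrary mean rate constant of 0.02 per unit time), averaging simple exponential decays over an exponential spread of rate constants yields a hyperbolic 1/(1 + λ̄t) profile instead of an exponential one – a fat tail from nothing more than dispersed rates.

```python
import math

def averaged_decay(t, mean_rate=0.02, n=20000):
    """Numerically average exp(-k*t) over an exponential distribution
    of rate constants with mean `mean_rate` (midpoint rule, with the
    integral truncated at ten times the mean rate)."""
    total = 0.0
    dk = mean_rate * 10.0 / n
    for i in range(n):
        k = (i + 0.5) * dk
        weight = math.exp(-k / mean_rate) / mean_rate  # exponential pdf
        total += weight * math.exp(-k * t) * dk
    return total

# The same integral done analytically is 1 / (1 + mean_rate * t),
# a hyperbolic fat tail rather than an exponential decay.
numeric = averaged_decay(100.0)
analytic = 1.0 / (1.0 + 0.02 * 100.0)
```

Because the averaging is linear, the fat-tail shape survives even if the individual rate constants (i.e. the detailed mechanisms) are unknown, which is the argument being made above.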

      • Hi Robert,

        “Ein/s – Eout/s = d(GES)/dt”
        I have no problem with this equation. But your explanation left out storage of energy in plants on land and in the oceans, as well as in animals, which eat plants and also store energy. The death of plants and animals releases most of their energy gradually. Earthquakes may trigger storage of energy by burying these plants and animals, which become fossil fuels if buried long enough. Hurricanes, typhoons, volcanic eruptions and earthquakes are typical releases of stored energy under conditions favorable to their occurring.

        “Ein changes a little bit in the 11 year solar cycle and perhaps a little more in the longer term.” This is your expertise area, I could learn a lot from you.

        “Eout changes a lot with cloud in the short term and to a lesser degree with aerosols and greenhouse gases.- and ice in the longer term. There is a change in reflected SW of 85W/m2 between snowball earth and the blue green planet.” I know clouds have a lot of effects on the Earth’s release and absorption of energy. I am not sure how they are formed. CERN recently said it was cosmic rays that nucleated clouds under suitable conditions. Sounds reasonable to me; you may have other theories on the formation of clouds to enlighten us. I agree with you that aerosols and GHGs do not affect Ein and Eout much, and ice does reflect out a lot of the Sun’s energy that is supposed to heat up the Earth during the winter months.

        “If greenhouse gases are added to the atmosphere – the atmosphere and oceans warm because of the IR photons interacting with more molecules -but the atmosphere continues to emit energy at the same rate for the very same reason.” I may be wrong, but you seemed to jump to the conclusion too soon, without explaining why and how IR photons interacting with more molecules warm the ocean and the atmosphere. You have to explain to us where the extra IR photon energy comes from, since Ein from the Sun is relatively constant. GHG molecules cannot create energy in the form of IR photons without absorbing energy from other sources.

        “There seems little doubt that increased greenhouse gas molecules in the atmosphere result in higher temperatures on the planet.” I don’t see the correlation; perhaps you would enlighten us about the energies involved in GHG molecules that ‘result in higher temperatures on the planet’.

        “The ‘residence time’ is of course a different issue. That can’t be decided without knowing the dynamics of silicate weathering on a global scale, of terrestrial photosynthesis and respiration and of ocean phytoplankton biology. The latter is influenced by the dynamics of deep ocean upwelling. Chemistry is secondary to biological processes in the carbon cycle.” Thanks for mentioning these and when at an appropriate time, I can learn much from you and others about these sciences.

  71. Dallas, it appears impossible to find a reference for the transport of energy due to overshooting, even the most recent ones have comments such as

    “Although the relative contributions of direct fast uplift of cold and heavy air at high altitude and local drain areas required for compensating the resulting energy sink are not fully understood yet, the fast velocity of these events and their average zonal signature strongly suggest a significant role of deep convective overshooting on troposphere-to-stratosphere transport at the global scale.”

    Geophysical Research Abstracts
    Vol. 12, EGU2010-12619, 2010
    EGU General Assembly 2010
    © Author(s) 2010
    Importance of convective overshooting troposphere to stratosphere
    transport in the tropics at the global scale
    Jean-Pierre Pommereau

    It would suggest the convective overshooting is important and needs to be quantified especially in the context of being much more common than previously thought.

    The water content of the stratosphere could be an important aspect of warming and cooling trends. The Solomon et al. 2010 paper indicates that 25% of the expected warming was offset by stratospheric water vapor, which would amount to 0.05C per decade. If you had a couple of decades of decreasing water vapor followed by a couple of decades of increasing water vapor, this could explain 0.2C of the measured warming – a fairly significant portion of the total.

    • Steven,
      Like you said, there aren’t that many papers to be found on convective overshoot and the forcing changes associated with strat WV change. Looking at the small amount of strat WV and the small change in terms of PPMV, not percentage change, I find it difficult to believe that 25% of warming was offset by strat WV. The peak of the strat WV appears to be associated with the 1998 El Niño, with the down step following. That to me seems to indicate that strat WV is more of an indicator of change than a driver. The data quality of the strat WV did not impress me, with the early measurements limited to a small area and weather balloons, followed by two other methods with healthy error bars. So a lot of wiggle words would be required when making any conclusions.

      Because of the spectrum of water vapor, the strat WV change does pose questions about the tropopause’s impact on radiant heat transfer. If that is simple radiative physics I would love to see it.

      • There isn’t any information to speak of on energy transport either which I think we can both agree is likely to be important.

      • I have a couple of issues with energy transport. First is the convective overshoot and slight near-tropopause penetrations like the Rossby waves. The radiative window is much different and horizontal transfer much greater – too great for an up-down model, in my opinion. The second is associated with internal oscillations, where the destination provides for hugely varying transfer rates to space. They may be unforced variations, but their impact is important.

  72. Chief Hydrologist | August 31, 2011 at 4:38 am |
    I think that CO2 added to the atmosphere by people is as hot as it is ever going to get – it is after all the product of that famous oxidising and exothermic reaction. We set fire to it.

    So it can’t be the case that adding CO2 to the atmosphere causes the planet to look colder from space.

    Nonsense, you are revealing a complete lack of knowledge about the physics of gases, perhaps you should stick to Hydrology, because you’re out of your depth here!

    I have just understood what you were saying. N2 and O2 impart energy to CO2 and H2O in collisions – which then occasionally emit a photon to space, cooling the world.

    Can happen but only if the translational energy of the diatomic is specifically used to excite a vibrational energy state.

    Well – yes – that seems a reasonable part of the puzzle. Although N2 and O2 don’t actually need CO2 to cool down. Anything above absolute zero will emit radiant energy – so a warmer atmosphere will emit more energy to space and by quite a lot more than proportionately by the Stefan-Boltzmann equation.

    More complete ignorance on the physics of gases revealed here, you’d fail freshman Phys Chem with this nonsense! I suggest you read up on Physics of Gases and Spectroscopy any decent undergraduate text on Physical Chemistry should do the trick.

    But it seems to me to be the case that the atmosphere in general will be more energetic if there are more molecules that interact with photons in the IR frequencies. Then the planet will be warmer and emit more energy to space with an exponentially increasing radiative flux with temperature. Cooling off again – because it can’t emit more energy than comes in from the sun.

    There is a puzzle here that I need to think on. If anyone wants to continue this – I suggest that we take it to the bottom of the thread.

    The emission by a GHG of IR occurs in the same wavelength band as it absorbs. In a strong absorption band the emission from the earth will be strongly absorbed, heating up the non-radiatively active gases like N2, O2 and Ar which constitute most of the atmosphere; the GHG is only able to emit to space when it reaches the more rarefied regions of the atmosphere. However, the emission is limited by the gas’s temperature via the S-B equation, so the emission to space in that band is less than the emission from the surface – hence the planet looks cooler from space.

    View from 70km, showing the effect of progressively removing each GHG:
    http://s302.photobucket.com/albums/nn107/Sprintstar400/?action=view&current=Atmos.gif
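The “looks cooler from space” point reduces to Stefan-Boltzmann arithmetic. As a hedged back-of-envelope (the 288 K and 220 K figures are illustrative round numbers of my own choosing, not values from this thread): a band that effectively radiates from a cold upper layer emits far less than the warm surface would in the same band.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_flux(temp_k):
    """Total blackbody emission per the Stefan-Boltzmann law."""
    return SIGMA * temp_k ** 4

surface = sb_flux(288.0)    # roughly the global-mean surface temperature
high_cold = sb_flux(220.0)  # a colder upper-troposphere emission level

# The colder layer emits only about a third as much, so a band that
# radiates to space from high altitude looks dimmer than the surface.
```

The strong T⁴ dependence is also why, as noted elsewhere in the thread, a warmer atmosphere emits disproportionately more energy to space.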

    • Being rude and noxious should disqualify you from any response.

      But there is a simple idea – all other things being equal, the planet warms with additional CO2 until the energy equilibrium at TOA is restored. Energy in must equal energy out over the longer term.

      • Oh really? Well, pontificating on a subject as if you know something about it, when you don’t even have a college freshman’s understanding of the subject, gets the treatment it deserves. There was no rudeness; what you posted was nonsense and displayed ignorance of the subject. Face facts rather than getting all huffy about it. Given your rudeness in this thread, if everyone followed your advice you’d get no responses!

      • I have turned over a new leaf – but there are incorrigibles who persist in trolling. It is pointless responding in kind, or indeed at all.

      • Oh but you do believe in the 1st law of thermodynamics I take it?

      • This has exactly what to do with your lack of understanding of the physics of gases?

      • Energy is conserved of course – and the formula given above is a complete description of global energy dynamics. With increased anthropogenic greenhouse gas emissions, the planet warms until the energy equilibrium at TOA is restored. All other things are not equal. As the gases are warmer to start with – the product of combustion – the planet initially cools. And there are the other important changes in the SW that I discussed.

        Spectral absorption can’t be related directly to TOA energy dynamics – the need is to consider temperature, as in the Stefan-Boltzmann equation, and energy equilibrium at TOA.

      • Of course it can, and must be; the best advice I can give you is to stop digging the hole. You were wrong, face the facts and learn from the experience. The heating up of CO2 by the process of combustion is a minuscule effect and unrelated to the process under discussion.

      • You were rude – continue to be condescending – and wrongly insist that planetary emissions are not temperature dependent?

        Bye

      • …all other things being equal…

  73. Tomas Milanovic

    I have just understood what you were saying. N2 and O2 impart energy to CO2 and H2O in collisions – which then occasionally emit a photon to space cooling the world.

    Can happen but only if the translational energy of the diatomic is specifically used to excite a vibrational energy state.

    Chief understood the issue well. Not only can it happen, it must happen. For an atmospheric layer considered isothermal in Local Thermodynamic Equilibrium, the distribution of vibrational energy among the different energy levels is given by the MB distribution and depends only on temperature. From that it follows that any transfer of energy from CO2 towards a diatomic by collision must be compensated by an energy transfer from the diatomic towards CO2 by collision too. If this were not the case, then the kinetic energy distribution of the diatomics would drift towards ever higher temperatures while the CO2 distribution would not move. The CO2 would then no longer be in equilibrium with the diatomics, which would contradict the reality of an isothermal layer in LTE.

    The emission by a GHG of IR occurs in the same wavelength band as it absorbs. in a strong absorption band the emission from the earth will be strongly absorbed and heats up the non-radiatively active gases like N2, O2 and Ar which constitute most of the atmosphere

    This contradicts the first statement. If CO2 really “heated” the diatomics within some given volume V (imagine e.g. a sphere of 10 m diameter), then there would appear N DIFFERENT kinetic energy distribution curves: 1 for CO2 and N for the diatomics. As this distribution defines temperature, we’d have N different “temperatures” within this volume and THE temperature wouldn’t be defined.
    This can be easily seen in a CO2 laser which works according to the same process but in an opposite time direction.
    One starts with a mixture of hot N2 and cold CO2. The mixture is not in equilibrium and has no defined temperature.
    Through collisions N2 transfers energy to vibrational states of CO2 and CO2 transfers vibrational energy towards N2 (and also radiates of course) . The former process has a higher rate so that N2 is cooled and CO2 is heated.
    Same happens with translational energy.
    This process stops when there is as much collisional transfer from CO2 vibrational states as there is towards CO2 vibrational states and the mixture achieves local equilibrium with a well defined temperature.
    What the statement quoted above says is that the process does NOT stop but continues with CO2 “heating” N2, so that the mixture again departs from equilibrium and the temperatures of CO2 and N2 begin to spontaneously diverge.
    This is of course absurd.
    Perhaps this is not what you wanted to say but it is what you actually said.
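The LTE argument above can be made quantitative with a one-line Boltzmann estimate. This is my own hedged sketch (250 K is an illustrative mid-troposphere temperature; degeneracy and the full partition function are ignored): the thermally excited fraction of a vibrational mode follows from temperature alone, exactly as the MB distribution requires.

```python
import math

HC_OVER_K = 1.4388  # second radiation constant hc/k, in cm*K

def boltzmann_fraction(wavenumber_cm, temp_k):
    """Relative population exp(-E/kT) of a vibrational level lying
    h*c*wavenumber above the ground state (degeneracy ignored)."""
    return math.exp(-HC_OVER_K * wavenumber_cm / temp_k)

# CO2 bending mode near 667 cm^-1 at an illustrative 250 K:
frac = boltzmann_fraction(667.0, 250.0)
# roughly a couple of percent of the molecules are excited
# purely by collisions, with no radiation field at all
```

Because this fraction is fixed by temperature alone, collisional transfers CO2→diatomic and diatomic→CO2 must balance in an isothermal LTE layer, which is the point being made above.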

    • I think that CO2 added to the atmosphere by people is as hot as it is ever going to get – it is after all the product of that famous oxidising and exothermic reaction. We set fire to it.

      Why do you stick up for him? When it is added to the atmosphere, CO2 by definition is less than 0.04 mole percent of air’s composition. That is the upper bound because it is the steady-state composition.

      Work this out logically. Instead of trying to talk it out, be a real physicist and give an order-of-magnitude estimate of how much of an effect CO2 heated by a smokestack can have on the surrounding air.

      That happens exactly once when the CO2 leaves the smokestack or exhaust pipe. So at best the energy that is transferred by the most crude of mass-action law approximations is some stupendously small fraction.

      The real difference and the significance of this whole discussion is that once the CO2 enters the atmosphere, with an “excess residence time” of hundreds of years, the CO2 has a chance to absorb upwelling infrared heat over and over again. Not just once when it leaves the smokestack but many times over.

      That is why I worked out the CO2 impulse response model and showed how the IPCC researchers could arrive at that conclusion of a fat-tail impulse response curve, just by considering some very elemental statistical physics considerations, and working out the math.

      • The concept was introduced as an initial condition – the gas is as hot as a result of combustion as it is ever going to get. There is no point where the atmosphere is warming as a result of interactions with IR photons – there is no point therefore at which the planet appears to be colder from space. It is – as I say – an initial condition.

        Do try to follow.

      • English is a wonderfully ambiguous language. Tomas tried to do the disambiguation, but in the end he said “Perhaps this is not what you wanted to say but it is what you actually said.”

        Obviously, I neither understand what you said nor what you wanted to say. I will keep trying though.

      • Tomas was quoting Phil. Do try to get it right.

      • Weird, what does the temperature of CO2 emissions from a smoke stack or tailpipe have to do with anything?
        That was the premise of your comment, that it had some deep significance.

      • You are always on the edge but are descending into trollery. I am sure you understand initial conditions. But you have little feel for physical science.

        It is explained again 2 comments up – and I don’t feel like being nice to pissant trolls like you.

      • According to your argument, the initial conditions are that the fossil-fuel-emitted CO2 is at some elevated temperature, while the naturally occurring emitted CO2 is at the environmental temperature.
        So your premise is that the temperature of the FF-emitted CO2 would have some significant impact?
        Believe it or not, that is what you wrote.

      • Specifically, that the molecules are already in an excited state when emitted to the atmosphere. The question arose in the context that the planet appears cooler from space as a result of extra CO2 – and I was wondering if there was any point in time where the atmosphere was cooler and warming as a result of photon absorption.

        Why do you continue to put words in my mouth?

      • Take your own advice, those are Tomas’s words, you need to read more carefully.

    • “I have just understood what you were saying. N2 and O2 impart energy to CO2 and H2O in collisions – which then occasionally emit a photon to space cooling the world.”

      “Can happen but only if the translational energy of the diatomic is specifically used to excite a vibrational energy state.”

      Read this more carefully Tomas.

      Chief understood the issue well. Not only can it happen, it must happen.

      No he didn’t.

      For an atmospheric layer considered isothermal in Local Thermodynamic Equilibrium, the distribution of vibrational energy among the different energy levels is given by the MB distribution and depends only on temperature.

      That holds if you’re considering a purely collisional system only; however, you have neglected the incoming IR flux, which changes the vibrational energy level distribution from that given by the Boltzmann distribution, i.e. the vibrational temperature is not equal to the translational temperature.

      From that it follows that any transfer of energy from CO2 towards a diatomic by collision must be compensated by an energy transfer from the diatomic towards CO2 by collision too. If this were not the case, then the kinetic energy distribution of the diatomics would drift towards ever higher temperatures while the CO2 distribution would not move. The CO2 would then no longer be in equilibrium with the diatomics, which would contradict the reality of an isothermal layer in LTE.

      Again you have forgotten the IR flux.

      “The emission by a GHG of IR occurs in the same wavelength band as it absorbs. in a strong absorption band the emission from the earth will be strongly absorbed and heats up the non-radiatively active gases like N2, O2 and Ar which constitute most of the atmosphere”

      This contradicts the first statement. If CO2 really “heated” the diatomics within some given volume V (imagine e.g. a sphere of 10 m diameter), then there would appear N DIFFERENT kinetic energy distribution curves: 1 for CO2 and N for the diatomics. As this distribution defines temperature, we’d have N different “temperatures” within this volume and THE temperature wouldn’t be defined.

      Again the same error: the temperature of the CO2 is determined by collisional input from the atmosphere plus radiative input from the earth’s IR emissions, balanced by collisional transfer to the atmosphere (CO2 in the lower atmosphere predominantly loses its excess vibrational energy by collisions; radiative transfer only becomes important in the rarefied upper atmosphere).

      This can be easily seen in a CO2 laser which works according to the same process but in an opposite time direction.

      Afraid not, you’re comparing apples and oranges: the atmosphere involves translational-vibrational exchange whereas the CO2 laser relies on a specific near-resonant vibration-vibration transition not applicable to our atmosphere.

      One starts with a mixture of hot N2 and cold CO2. The mixture is not in equilibrium and has no defined temperature.
      Through collisions N2 transfers energy to vibrational states of CO2 and CO2 transfers vibrational energy towards N2 (and also radiates of course) . The former process has a higher rate so that N2 is cooled and CO2 is heated.

      What actually happens in the laser is an electron discharge where the electrons collide with the N2 molecules and excite them to the v=1 state (2331cm^-1 iirc); this state is metastable since, as N2 is a homonuclear diatomic, it is unable to lose that energy radiatively. However, due to the near resonance with the v=3 state (001, 2349cm^-1) of CO2, vibrational energy is directly transferred very efficiently by vib-vib transfer (in layman’s terms the ‘spring constant’ of the N2 almost perfectly matches that of the CO2). The CO2 state is quite long lived, ~0.4 msec, but can radiatively lose energy to lower energy levels (100, 1388 cm^-1 and 020, 1334 cm^-1), giving laser beams of either 10.6 or 9.4 μm.
      By the way, the 15 μm band of importance in the atmosphere is centered around 010, 667 cm^-1 – far too distant from resonance for exchange with the nitrogen v=1 in our atmosphere, except perhaps in a lightning bolt!
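The level energies quoted above can be sanity-checked by converting wavenumber differences to wavelengths with λ(µm) = 10⁴/ν̃(cm⁻¹). This is my own hedged arithmetic check using the band-center values as given; the small offsets from the quoted 10.6 and 9.4 µm lasing lines reflect band structure that these center values don’t capture.

```python
def wavelength_um(upper_cm, lower_cm):
    """Photon wavelength in microns for a transition between two
    vibrational levels specified as wavenumbers (cm^-1)."""
    return 1.0e4 / (upper_cm - lower_cm)

# Level energies quoted above: 001 at 2349 cm^-1,
# 100 at 1388 cm^-1, 020 at 1334 cm^-1.
line_a = wavelength_um(2349.0, 1388.0)  # ~10.4 um (quoted line: 10.6 um)
line_b = wavelength_um(2349.0, 1334.0)  # ~9.9 um (quoted line: 9.4 um)
```

The same 10⁴/ν̃ conversion puts the 667 cm⁻¹ bending band near 15 µm, matching the atmospheric band discussed throughout this thread.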

      Same happens with translational energy.

      Nope

      This process stops when there is as much collisional transfer from CO2 vibrational states as there is towards CO2 vibrational states and the mixture achieves local equilibrium with a well defined temperature.

      You have forgotten about the ~100Watts of laser emission to the lower state and collisional deactivation of the remaining excited CO2 by the He in the laser tube!

      What the statement quoted above says is that the process does NOT stop but continues with CO2 “heating” N2 so that the mixture departs again the equilibrium and the temperatures of CO2 and N2 begin to spontaneously diverge.
      This is of course absurd.

      Yes it is, due to the mistake you made in ignoring radiative transfer, pointed out above.

      Perhaps this is not what you wanted to say but it is what you actually said.

      As pointed out it was neither.

  74. Phil,
    http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-106888

    “The emission by a GHG of IR occurs in the same wavelength band as it absorbs. in a strong absorption band the emission from the earth will be strongly absorbed and heats up the non-radiatively active gases like N2, O2 and Ar which constitute most of the atmosphere, it is only able to emit to space when it reaches the more rarified regions of the atmosphere. However, the emission is limited by the gas’s temperature via the S-B equation so the emission to space in that band is less that the emission from the surface hence the planet looks cooler from space.” Apparently, you are half-correct when you said that strong IR radiation heats up the Earth, say, by setting fire with CO2 being higher temperature, heats up the Earth. That satisfys S-B radiation law. But, at atmospheric temperature, in general has a lower temperature that the Earth does not absorb weaker IR radiation. That violates the S-B law.

    • Apparently, you are half-correct when you said that strong IR radiation heats up the Earth, say, by setting fire with CO2 being higher temperature, heats up the Earth. That satisfys S-B radiation law. But, at atmospheric temperature, in general has a lower temperature that the Earth does not absorb weaker IR radiation. That violates the S-B law.

      What kind of language are you using? What exactly is meant by “setting fire with CO2 being higher temperature”? When something is on fire, it is being oxidized. CO2 is already completely oxidized, as Eli Rabett said up-thread.

      Photons are photons and if and when a molecule absorbs a photon, it will gain heat by a discrete amount of energy proportional to its frequency or inversely proportional to its wavelength. I just stated an axiom of physics that was first postulated about a hundred years ago.

      Again the significance of this whole discussion is that once the CO2 enters the atmosphere, with an “excess residence time” of hundreds of years, the CO2 has a chance to absorb upwelling infrared heat over and over again, before the excess eventually gets sequestered.

      • Sam NC,
        I can’t believe this is still going on. If I said that on a humid day the temperature won’t get as hot because the moisture diffuses the direct solar energy by absorbing a good deal and gradually releasing it, causing the evening temperature to be warmer than it otherwise would be, you would have no problem with that. The atmosphere is not warming the surface, the sun is; the atmosphere is just balancing the rate of warming over a longer time period.

        That is all that is happening. CO2 just happens to contribute to the balancing of the rate of heating and cooling. The source of the energy is still the sun. When describing the GHE, those photon shooting CO2 molecules get their ammunition from the sun. If we turned off the sun, the earth would turn into a frozen rock, it just takes a little longer with the atmosphere.

      • If we turned off the sun, the earth would turn into a frozen rock, it just takes a little longer with the atmosphere.

        And if we kept the sun but got rid of the Earth’s atmosphere, the temperature would be closer to what the temperature is on the moon, which has no atmosphere. And of course it would fluctuate more wildly day-to-day.

        That is why it so important to model the excess CO2 residence time, as CO2 along with the other GHG’s have a significant impact on the moderation of the earth’s climate.

      • Dallas,

        Have you sorted out fossil fuels and biofuels, carbon-negative and carbon-neutral, yet? Which is which? Apparently you have not paid much attention to this, which has led to your wrong conceptions about carbon-neutral, carbon-positive and carbon-negative.

        ” If I said that on a humid day the temperature won’t get as hot because the moisture diffuses the direct solar energy by absorbing a good deal and gradually releasing it, causing the evening temperature to be warmer than it would be, you would have no problem with that.” Here, let me clarify it for you:
        1. Moisture absorbs direct sunlight, causing it to be in a superheated state; it rises up and releases energy. It does not just store energy during direct sunlight absorption, it also releases energy simultaneously. The only difference is that it absorbs slightly more energy than it releases. During the evening, the surface temperature is still higher than the atmospheric temperature and the moisture temperature. With the slightly higher moisture temperature, the Earth reduces its IR radiation (i.e. reduced cooling, not warming). You have a very wrong concept of warming here; it violates the S-B radiation law.

        ” The atmosphere is not warming the surface, the sun is,” Glad you understand that. Its common sense, a person without going to school, knows it.

        ” the atmosphere is just balancing the rate of warming over a longer time period.” Wrong again. It should be “the atmosphere is just balancing the rate of COOLING over a longer time period.” How can you not be capable of distinguishing ‘warming’ and ‘COOLING’? You need to think deeply about these two words here.

        “That is all that is happening. CO2 just happens to contribute to the balancing of the rate of heating and cooling. The source of the energy is still the sun. When describing the GHE, those photon shooting CO2 molecules get their ammunition from the sun. If we turned off the sun, the earth would turn into a frozen rock, it just takes a little longer with the atmosphere.”

      • Dallas,

        “That is all that is happening. CO2 just happens to contribute to the balancing of the rate of heating and cooling. The source of the energy is still the sun. When describing the GHE, those photon shooting CO2 molecules get their ammunition from the sun. If we turned off the sun, the earth would turn into a frozen rock, it just takes a little longer with the atmosphere.” This time you have got it right.

      • Sam NC,

        Been right pretty much all of the time. Been accused by both sides of the argument of being wrong most of the time. :)

      • Sam NC,

        You are mixing things up a bit. Determining whether a fuel is carbon positive, neutral or negative has nothing to do with its excitation state – only whether the use of that fuel will increase the concentration of CO2 in the atmosphere, leave it unchanged, or reduce it.

        On the radiative physics front, the statistics is the thing. Generalizations of overall effects are fine if you stick with the intent of the generalization. This seems to be a common problem in the majority of discussions. Trenberth’s budget drawing is a generalization of the radiative balance, with no details worth discussing other than his generalization. Some think it shows that downwelling radiation physically warms the surface by shooting back only the outgoing radiation captured. Some think that, since at the top of the atmosphere CO2 is more likely to radiate to space than toward the surface, it shows that CO2 is cooling the Earth – something Trenberth missed.

        If you divide the atmosphere into a bunch of layers, you will find that different things happen in each layer and you can determine a rough net effect by combining the effect of all the layers. You can’t just pick one layer, one process, one effect or one molecule to explain the whole system. Since things happen more than just up and down, you need to then divide the layers into cells to describe the things going on side to side which change three dimensionally. Then since things change with time you need to have the model change with time.

        I have a pretty fair handle on the situation, so it is frustrating to attempt to explain one small part of the puzzle only to have that confused with something totally different. But I do love a good puzzle.

      • This is a very good interpretation. That is the same reason to use a compartment or box model to demonstrate the CO2 adjustment time (not the residence time). The compartment model that I made has 50 layers and models the role of diffusion between the layers as a random walk. The sequestered carbon can either go deeper or shallower at each layer with more or less equal probability until it reaches an equipartition state of maximum entropy. This then gives a fat-tail adjustment time which isn’t necessarily apparent from simple intuition.

        Dallas is describing essentially the same thing but now the layers are layers in the atmosphere and the detailed balance is between radiated photons going down and up, mixed with whatever convection is going on.

        I would like to try a simple model of this to sate my own curiosity but have not attempted it. Combining the multi-physics aspects of both radiation and gas convection is the sticky point for me right now. The plan would be to do a steady state profile of CO2 concentration with altitude and then do the radiative layer slab computations on that. On the planetary balance thread, I thought that the maximum entropy barometric pressure profile modeled by Verkley had some potential to get the steady state right.
        http://judithcurry.com/2011/08/19/planetary-energy-balance/#comment-102690
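The 50-layer random-walk idea can be sketched in a few lines. A minimal diffusion version (the layer count is from the comment; everything else is illustrative and uncalibrated) shows the fat-tailed, far-slower-than-exponential decay of the top-layer fraction:

```python
# Minimal 50-layer diffusion sketch: all sequestered carbon starts in the
# top layer and random-walks up/down with equal probability (reflecting
# boundaries), as in the compartment model described above.
NLAYERS = 50
STEPS = 2000

conc = [0.0] * NLAYERS
conc[0] = 1.0                      # everything starts just below the surface

top_history = []
for _ in range(STEPS):
    new = [0.0] * NLAYERS
    for i, c in enumerate(conc):
        up = i - 1 if i > 0 else 0                        # reflect at the surface
        down = i + 1 if i < NLAYERS - 1 else NLAYERS - 1  # reflect at depth
        new[up] += 0.5 * c
        new[down] += 0.5 * c
    conc = new
    top_history.append(conc[0])

# fast early decay, then a fat tail far slower than any single exponential
print(top_history[10], top_history[100], top_history[1000])
```

With a single exponential at the early decay rate, the top-layer fraction would be effectively zero long before step 1000; the random walk instead relaxes toward the equipartition value 1/NLAYERS with a power-law tail.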

      • Dallas,

        You are avoiding direct answers to my questions (asked you twice already) and divert to something (CO2 excitation state) irrelevant to my questions. Why run? Run out of reasons and then divert attention! Apparently, you cannot shoulder the jobs your boss delegated to you.

        “On the radiative physics front, the statistics is the thing.” You are throwing out a vague idea here.

        ” Generalization of overall effects are fine if you stick with the intent of the generalization. This seems to be a common problem in the majority of discussions.” Where do you find generalization? It’s your problem, throwing something irrelevant out.

        “Trenberth’s budget drawing is a generalization of the radiative balance with no details worth discussing other than his generalization.” A lot of misconceptions come from Trenberth’s budget drawing. AMS should revoke the 1997 publication, which only shows AMS is an ignorant organization.

        “Some think it shows that down welling radiation physically warms the surface by shooting back only outgoing radiation captured.” Does not make sense.

        “Some think that since at the top of the atmosphere co2 is more likely to radiate to space than to the surface shows that CO2 is cooling the Earth, something Trenberth missed. ” 1st part makes sense; 2nd part, ‘something Trenberth missed’, really?

        “If you divide the atmosphere into a bunch of layers, you will find that different things happen in each layer and you can determine a rough net effect by combining the effect of all the layers. You can’t just pick one layer, one process, one effect or one molecule to explain the whole system. Since things happen more than just up and down, you need to then divide the layers into cells to describe the things going on side to side which change three dimensionally.” Well, tell us in detail what happens in each layer. I am sure there will be a lot of discussion about each layer. Since you brought up different layers, I am looking forward to your detailed layer mechanisms. Please don’t run away like you did with fossil fuels’ and biofuels’ neutral, negative and positive carbon contents.

        ” Then since things change with time you need to have the model change with time.” Explain?

      • What did I avoid? The thread is about CO2 residence time, which has nothing to do with excitation states or the energy of a C atom. You said, “…wrong conceptions about carbon neutral, carbon +ve and carbon -ve.” I assume that +ve is gaining an excited valence electron and -ve losing an excited electron, which has nothing to do with increasing the concentration of carbon dioxide in the atmosphere, reducing it, or doing nothing.

      • Dallas,

        Your response is worse than I thought; you did not understand the questions. Fossil fuels took CO2 out of the atmosphere millions (if not billions) of years ago and buried it underground. Biofuels take CO2 out of the atmosphere temporarily and almost immediately re-emit it. Now you should know fossil fuels are carbon -ve. Plants on land and in the oceans are unable to identify which CO2 to absorb.

      • I can understand what Dallas is talking about very readily, and can gain something from his insight.
        In contrast, I haven’t the slightest clue as to what makes Sammy run.

      • Sam NC,

        Fossil fuels did come out of the atmosphere millions of years ago, gradually, over millions of years, and it took millions of years for them to be buried underground. They are still being taken out of the atmosphere, and millions of years from now there will be more. Burning those fossil fuels that took millions and millions of years to form increases the amount of carbon in the form of CO2 in the air. So now, at this point in history, they add carbon to the air in the form of CO2. So they are carbon positive, adding carbon to the air, because they are going into the air in less time than it took to make them.

        Relative to millions and millions of years to be formed and a short human lifetime to burn, everything else we burn that is not a fossil fuel would add less CO2 to the air. That is the concept of CO2 residence time. If we know the residence time, we can select fuels that don’t, on average, add to the amount of CO2 in the air. IF that residence time is 15 years and the fuel used took 15 years or less to grow, it would be considered carbon neutral. Period, end of conversation!!

        IF a fast growing biofuel like algae, which takes only days to months to grow from nothing to a harvest, is used for fuel AND it replaces the use of a fossil fuel or any fuel that takes longer to mature than the residence time of CO2 in the atmosphere, it can be considered carbon negative, as in: its continued use and the lack of use of the fossil fuel will result in less carbon in the air. I have to stop now, I broke my crayon.
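The accounting argument above can be put in a toy one-box sketch (nothing here is calibrated; the 15-year residence time and the instant-regrowth assumption are the comment’s illustrative figures, not measured values):

```python
# Toy one-box sketch of the accounting argument above: a fuel is "carbon
# neutral" if regrowth of its feedstock pulls CO2 back out about as fast as
# burning it puts CO2 in. The 15-year residence time is the comment's
# illustrative figure; nothing here is calibrated.

def simulate(years, emit_per_year, regrows, tau=15.0):
    """Atmospheric excess from burning `emit_per_year` units of fuel per year.
    If `regrows`, replanted feedstock re-absorbs each year's emission (the
    fast biofuel case); otherwise the carbon came from long-buried fossil
    stock. Natural sinks remove a fraction 1/tau of the excess each year."""
    excess = 0.0
    for _ in range(years):
        excess += emit_per_year
        if regrows:
            excess -= emit_per_year    # fast regrowth takes it back out
        excess -= excess / tau         # first-order natural removal
    return excess

fossil = simulate(100, 1.0, regrows=False)   # accumulates toward ~tau units
biofuel = simulate(100, 1.0, regrows=True)   # stays at zero: carbon neutral
print(fossil, biofuel)
```

The fossil case settles near tau times the annual emission; the biofuel case stays at zero because each year’s emission is taken back out within the harvest cycle.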

      • Hi Sam,

        Burning fossil fuels puts carbon into the atmosphere that was taken out millions of years ago – so adding carbon to the current cycle.

        Biofuels recycle carbon continuously and so don’t add to the volume of carbon in the atmosphere.

        I don’t like these people very much – they don’t seem to have any sense of fair play. They feel free to insult and then complain when I insult them. I am much better at insults when I put my mind to it.

        One thing you might need to be careful of is confusing the Stefan-Boltzmann equation with the 2nd law of thermodynamics. One shows emissions increasing with the 4th power of temperature. The other states that on average energy moves from a warmer to a cooler body. So net energy moves outward from the surface – you can think of it in layers, but of course it is a continuum. Close to the surface the atmosphere is denser, collisions are more frequent, and the mean free photon path is less than in the upper atmosphere.

        Tomas and Pekka know more about this stuff than anyone here – so feel free to argue with anyone else.

        One thing you might like to think about is quantum mechanics. A photon is literally a ‘packet of energy’. The quantum of energy is related to the frequency of the radiation and is equal to:

        E = hv – where h is the Planck constant and v is the frequency.

        Molecules always gain and lose energy as ‘packets’ in accordance with the frequency – i.e. in the infrared band.

        Cheers
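For the 15 μm CO2 band that runs through this thread, E = hν works out to roughly 8 kJ per mole of photons. A quick check (CODATA constants; the 15 μm wavelength is the band discussed in the surrounding comments):

```python
# E = h*nu for the 15 um CO2 band: how big is one of these 'packets'?
# Constants are CODATA values; 15 um is the band discussed in this thread.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

lam = 15e-6             # wavelength, m
nu = c / lam            # frequency, ~2e13 Hz (infrared)
E = h * nu              # energy of one photon, J

print(E)                # ~1.3e-20 J per photon
print(E * N_A / 1000)   # ~8 kJ per mole of photons
```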

      • Dallas,

        “Fossil fuels did come out of the atmosphere millions of years ago, gradually, over millions of years, and it took millions of years for them to be buried underground.” Glad that you realized CO2 from these fossil fuels came from the atmosphere and lay dormant underground for millions of years, until humans discovered them and released the CO2 back to the atmosphere. If you consider algae breeding as carbon neutral running a short carbon cycle, then you must realize fossil fuels are carbon neutral run on a long one.

        ” They are still being taken out of the atmosphere, and millions of years from now there will be more. Burning those fossil fuels that took millions and millions of years to form increases the amount of carbon in the form of CO2 in the air.” Now your logic undergoes dysfunction. Burning fossil fuels returns the dormant CO2 to the atmosphere and does not add to it.

        ” So now, at this point in history, they add carbon to the air in the form of CO2. So they are carbon positive, adding carbon to the air, because they are going into the air in less time than it took to make them.” Again, logical dysfunction.

        “Relative to millions and millions of years to be formed and a short human lifetime to burn, everything else we burn that is not a fossil fuel would add less CO2 to the air. That is the concept of CO2 residence time.” No, you mangled CO2 residence time.

        ” If we know the residence time, we can select fuels that don’t on average add to the amount of CO2 in the air. IF that residence time is 15 years and the fuel used took 15 years or less to grow, it would be considered carbon neutral.” You have a very bad concept about CO2 residence time.

        “Period end of conversation!!” LOL.

        “IF a fast growing biofuel like algae, which takes only days to months to grow from nothing to a harvest, is used for fuel AND it replaces the use of a fossil fuel or any fuel that takes longer to mature than the residence time of CO2 in the atmosphere, it can be considered carbon negative,” No, at best algae is carbon neutral. Fossil fuels are carbon -ve, since they extracted CO2 from the atmosphere over a considerably long time.

        “I have to stop now, I broke my crayon.” Yes, go ahead, buy more crayons to play with. Carbon neutral and carbon -ve are really too difficult for you.

      • My apologies if you are offended by those harsh words which I realized did not help in any discussions.

      • My apologies if you are offended by those harsh words which I realized did not help in any discussions.

        I don’t think the other words helped either.

      • Webby,

        “What kind of language are you using?” It’s odd that you don’t know English.

        “What exactly is meant by “setting fire with CO2 being higher temperature”?” CO2 is released during combustion at combustion temperature.

        “When something is on fire, it is being oxidized. CO2 is already completely oxidized as Eli Rabbett said up-thread.” Every high school student knows that; no need to waste internet bandwidth.

        “Photons are photons and if and when a molecule absorbs a photon, it will gain heat by a discrete amount of energy proportional to its frequency or inversely proportional to its wavelength. I just stated an axiom of physics that was first postulated about a hundred years ago.” Every undergrad in college or university who studies physics knows that; no need to waste internet bandwidth. But not everyone knows it requires energy to release photons; once a molecule releases a photon, it has lost that energy and requires energy to re-establish the same energy state before it can release another photon of the same energy. Without replenishing energy from outside sources, it releases weaker and weaker photons.

        “Again the significance of this whole discussion is that once the CO2 enters the atmosphere, with an “excess residence time” of hundreds of years” I have doubts about that hundreds of years, though in general I agree with your statement here.

        ” the CO2 has a chance to absorb upwelling infrared heat over and over again, before the excess eventually gets sequestered” Yes, but the CO2 also releases photons over and over again, cooling the atmosphere by molecular collision and the Earth by absorbing IR radiation from the Earth and then releasing photons to the surroundings and space. CO2 cools the atmosphere and the Earth. Under steady state, i.e. when CO2 is at the same temperature as its surrounding N2 and O2, warming the Earth and the atmosphere would violate the S-B radiation law. CO2 can heat up the Earth and the atmosphere during combustion or eruption of volcanoes, due to higher CO2 gas temperatures than the Earth and the atmosphere.

        Yes, but the CO2 also releases photons over and over again, cooling the atmosphere by molecular collision and the Earth by absorbing IR radiation from the Earth and then releasing photons to the surroundings and space.

        That is a meaningless statement because by that reasoning, not having any CO2 in the path will allow it to cool (or at least reach a steady-state) even faster. So CO2 is in fact helping to insulate the earth. Which as you say:

        Every undergrad in college or university who study Physics know, no need to waste internet bandwidth.

      • Webby,

        “That is a meaningless statement because by that reasoning, not having any CO2 in the path will allow it to cool (or at least reach a steady-state) even faster. So CO2 is in fact helping to insulate the earth.”
        Very unconvincing reply! Only N2 and O2 insulate the Earth, not CO2, as they do not radiate as effectively as CO2, which helps to cool the Earth and the atmosphere. You are very wrong and have very bad energy and radiation concepts.

    • Apparently, you are half-correct when you said that strong IR radiation heats up the Earth; say, a fire, with the CO2 being at a higher temperature, heats up the Earth. That satisfies the S-B radiation law. But the atmosphere in general has a lower temperature, so the Earth does not absorb the weaker IR radiation. That violates the S-B law.

      I said no such thing, you appear to have misunderstood some of the Hydrologist’s erroneous statements regarding CO2, and also do not know what the S-B law is.

      To put matters straight, any CO2 in the Earth’s atmosphere, at whatever temperature, or from whatever source, is capable of absorbing IR in the 15μm band. When it does so it is raised to an excited rotational-vibrational state which is out of equilibrium with its surroundings. That molecule can lose that excess energy in two ways, either it can emit a photon of appropriate energy or it can lose that energy via collisions with neighboring molecules (predominantly N2 or O2). In the lower troposphere collisional deactivation is favored because the mean time between collisions (less than a nanosec, or the time light takes to travel a foot!) is orders of magnitude shorter than the radiative lifetime of the excited state. So there the energy transferred to the CO2 from the surface is distributed to the rest of the atmosphere thereby heating it up. Higher up where the atmosphere is thinner and the collisional frequency is much lower, the excited molecules have more chance to emit a photon so that when viewed from space it appears that the CO2 band is being emitted at a temperature of ~220K (which is where the real S-B law comes in).
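The competition Phil describes, collisional deactivation versus spontaneous emission, can be sketched as two first-order rates racing each other (the ~1 ns and ~1 ms timescales are the order-of-magnitude figures quoted in this thread, not precise values):

```python
# Two first-order channels compete for an excited CO2 molecule: emit a
# photon (rate 1/tau_rad) or lose the energy in a collision (rate 1/tau_coll).
# Order-of-magnitude timescales only, as quoted in the comment above.
tau_coll = 1e-9    # ~1 ns between collisions in the lower troposphere
tau_rad = 1e-3     # ~1 ms radiative lifetime of the 15 um excited state

k_coll = 1.0 / tau_coll
k_rad = 1.0 / tau_rad
p_radiate = k_rad / (k_rad + k_coll)   # chance the photon wins the race

print(p_radiate)   # ~1e-6: almost every excited molecule thermalizes
```

Higher up, where tau_coll grows by orders of magnitude, the same ratio shifts toward emission, which is Phil’s point about the view from space.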

      • Phil,

        One of the requirements for Stefan-Boltzmann to be valid is a geometry that does not allow the body to absorb its own black-body radiation. This disqualifies the atmosphere as a whole.

        Are you stating that it only applies to individual gas molecules? For an individual gas particle, or most other particles, it does not apply as there are not enough various energy states to provide a black body spectrum.

        Please explain to me how SB is correctly being applied to individual particles or a gas. I am rather ignorant and misunderstand some of what I read.

      • Asking a few questions here.

        SB is a quantum mechanical effect based on the statistical distribution of indistinguishable particles (photons). If the particles weren’t indistinguishable then a different distribution would be used (Maxwell-Boltzmann instead of Planck, say), but the T^4 dependence would most likely not arise. Within a black box, if it absorbed its own energy then it would still be SB but reduced in outgoing intensity, as it is in a maximum entropy state.

        Quantum chemists are pretty good at understanding the various energy states after applying energy. After all, that is what spectroscopy is about. They apply photons, electrons, etc of various frequencies and see what pops out. I did my doctorate on one class of structural spectroscopy so have at least a feel for what the general idea is.

        The bridge between the two views (macro statistical and atomic) is in the book-keeping. Somebody did the original book-keeping on the disordered distribution of photons (Stefan and Boltzmann) and now they add in the specific book-keeping for those materials that work within the narrow bands of the dispersed black-body distribution. CO2 happens to fit into a band of the distribution.

        I am just riffing on this one because I am trying to figure out if there is a deeper issue to ferret out.

      • Stefan-Boltzmann law applies only to black bodies and gray bodies with emissivity independent of wavelength over the whole range of significance. It does not apply to a gas, and most certainly not to the Earth’s atmosphere. It applies as a good approximation to the Earth’s surface over an area of uniform temperature. A layer of gas of the same temperature does not change the intensity of radiation, leaving the SB law valid for the combined radiation from the surface and the gas, but this is not true when the temperatures differ, as in the atmosphere seen from high altitudes or from space.

        Planck’s law is valid more generally, but applying it requires the knowledge of the emissivity of the body (or the volume of gas) as function of wavelength as well as the temperature of all emitting material.

        Stefan-Boltzmann law can be derived from Planck’s law, when it’s valid.

        Stefan-Boltzmann law can be derived from Planck’s law, when it’s valid.

        I stated that the Stefan-Boltzmann law can only be derived from Planck’s law, as that is a stronger criterion. Like I said above, if one assumed Maxwell-Boltzmann statistics I really doubt you would get the T^4 dependence except through coincidence. Planck’s law is statistical mechanics applied to the quantum effects of indistinguishable particles.
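The relation both comments assert, that integrating Planck’s law over all wavelengths reproduces sigma*T^4, is easy to check numerically (a sketch; constants are CODATA values and 288 K is chosen as an illustrative surface temperature):

```python
import math

# Numerical check that integrating Planck's law over wavelength reproduces
# the Stefan-Boltzmann sigma*T^4 (constants CODATA; 288 K chosen as an
# illustrative surface temperature).
h = 6.62607015e-34      # J s
c = 2.99792458e8        # m/s
k = 1.380649e-23        # J/K
SIGMA = 5.670374419e-8  # W m^-2 K^-4

def planck_exitance(lam, T):
    """Blackbody spectral exitance, W per m^2 per m of wavelength."""
    return (2 * math.pi * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * k * T))

def total_exitance(T, lam_lo=2e-6, lam_hi=1e-3, n=100_000):
    # trapezoidal rule; the clipped tails contribute negligibly near 288 K
    dlam = (lam_hi - lam_lo) / n
    s = 0.5 * (planck_exitance(lam_lo, T) + planck_exitance(lam_hi, T))
    for i in range(1, n):
        s += planck_exitance(lam_lo + i * dlam, T)
    return s * dlam

T = 288.0
numerical = total_exitance(T)
analytic = SIGMA * T ** 4
print(numerical, analytic)   # both ~390 W/m^2
```

The ~390 W/m^2 that falls out is the surface-emission figure questioned further down the thread.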

      • Please explain how one can measure radiation from any body in the universe excluding the effect of its surrounding gaseous “atmosphere”.

        Postma thinks it is valid to consider the “blackbody” to include the gaseous atmosphere.

        I understood a greybody to not have a uniform radiative spectrum over wavelength.

        It seems a bit dumb for people to consider “equilibrium” of energy absorption including part of atmosphere but only consider energy emission from surface excluding atmosphere.

      • I might as well put everything I know in this one thread.

        This is the way I think about it. I start with the Planck Distribution law instead of S-B. This is more fundamental and gives the statistical mechanics of the imperfect black-body, and one that you can integrate over the radiation wavelengths that you are interested in.

        I use Planck with the Wolfram Alpha tool to improve my own intuition on the response of temperature to absorbing certain frequencies.

        The perfect black-body case is integrating all wavelengths over the standard steady-state temperature of 279K assuming no greenhouse gases (time averaged solar insolation balancing).
        http://www.wolframalpha.com/input/?i=integral+1%2F%28exp%2814400%2F312%2Fx%29-1%29%2Fx%2Fx%2Fx%2Fx%2Fx*dx+from+x%3D0+to+x%3Dinfinity

        Then if we add 33K to this baseline temperature, corresponding to currently accepted average global temperatures, then to first order the effective GHG gray-body absorption criterion is to exclude all wavelengths above 16 microns.
        http://www.wolframalpha.com/input/?i=integral+1%2F%28exp%2814400%2F312%2Fx%29-1%29%2Fx%2Fx%2Fx%2Fx%2Fx*dx+from+x%3D0+to+x%3D16

        Try this out yourself and you get a general idea of what is happening. The devil is then in the details of the exact spectroscopic absorption profile for CO2 and H2O and a few other GHGs. But you definitely get the general gist of GHGs being strong absorbers of IR radiation around about 10 microns in wavelength and greater, and that the emission distribution rescales from the more perfect black-body profile to accommodate the blocking of that part of the radiation spectrum.
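The same truncated-Planck exercise as the Wolfram Alpha links, done in plain Python: what fraction of a 288 K blackbody’s emission lies beyond the illustrative 16 μm gray-body cutoff used above (the cutoff is the comment’s simplification, not real spectroscopy):

```python
import math

# The same truncated Planck integral as the Wolfram Alpha links, in Python:
# fraction of a 288 K blackbody's emission at wavelengths beyond 16 um, the
# illustrative gray-body GHG cutoff used in the comment (not real spectroscopy).
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    return (2 * math.pi * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * k * T))

def band_exitance(T, lam_lo, lam_hi, n=100_000):
    dlam = (lam_hi - lam_lo) / n
    s = 0.5 * (planck(lam_lo, T) + planck(lam_hi, T))
    for i in range(1, n):
        s += planck(lam_lo + i * dlam, T)
    return s * dlam

T = 288.0
total = band_exitance(T, 2e-6, 1e-3)       # essentially the whole spectrum
beyond = band_exitance(T, 16e-6, 1e-3)     # the long-wavelength side only
frac = beyond / total
print(frac)   # ~0.43 of the emission lies beyond 16 um
```

The real spectroscopic picture, as the comment says, depends on the detailed CO2 and H2O absorption profiles rather than a hard cutoff.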

      • Webby,

        not your fault, but, I didn’t get an explanation.

        My issue is how so many people compute the emission of the surface of the earth (390 W/m2) using Stefan-Boltzmann when it is not correct to use the equation in this situation. As is stated, it apparently can be done by using more basic physics, but I do not ever see this done.

        So, what is the real OLR and DLR? The pyrgeometers and IR thermometers, to the best of my cursory research, have SB built into them. This is wrong.

        Of course, there is also the possibility that the variance from the ideal conditions is small enough that it is not worth doing the laborious work to compute by other means? Who has proven this for our environment?

      • Phil,

        “To put matters straight, any CO2 in the Earth’s atmosphere, at whatever temperature, or from whatever source, is capable of absorbing IR in the 15μm band.” This is where you got it wrong. CO2 at a higher temperature cannot absorb lower temperature IR in the 15um band, or it would violate the S-B law.

        “When it does so it is raised to an excited rotational-vibrational state which is out of equilibrium with its surroundings.” It can absorb and be raised to an excited rotational state by higher energy state molecules. With a lower state energy level, it reduces its state by losing energy to the lower state molecules, so the lower state molecules get a raised state.

        ” That molecule can lose that excess energy in two ways, either it can emit a photon of appropriate energy or it can lose that energy via collisions with neighboring molecules (predominantly N2 or O2).” This is correct.

        ” In the lower troposphere collisional deactivation is favored because the mean time between collisions (less than a nanosec, or the time light takes to travel a foot!) is orders of magnitude shorter than the radiative lifetime of the excited state.” You need to enlighten me and others about deactivation, collision mean time and the radiative lifetime of the excited state. The radiative lifetime of the excited state depends on neighboring molecules. The mean time between collisions depends on the density of the gas and temperature. Can you elaborate to support your statement?

        ” So there the energy transferred to the CO2 from the surface is distributed to the rest of the atmosphere thereby heating it up.” I think you got it wrong here. How does CO2 distribute energy to the atmosphere and heat it (the atmosphere) up? CO2 can only heat the atmosphere up when its temperature is higher, and only by collision, as N2 and O2 cannot absorb IR radiation from the CO2 per AGW spectroscopy theory. N2 and O2 are heated up by direct Earth surface convection, not by CO2 IR radiation.

      • “To put matters straight, any CO2 in the Earth’s atmosphere, at whatever temperature, or from whatever source, is capable of absorbing IR in the 15μm band.”

        This is where you got it wrong. CO2 at a higher temperature cannot absorb lower temperature IR in the 15um band, or it would violate the S-B law.

        No, this is a fundamental error on your part: em radiation doesn’t have a temperature, and CO2 most certainly does absorb 15μm radiation. In any case you are still mistaking the S-B law for the 2nd Law, as pointed out before.

        “When it does so it is raised to an excited rotational-vibrational state which is out of equilibrium with its surroundings.”
        It can absorb and be raised to an excited rotational state by higher energy state molecules. With a lower state energy level, it reduces its state by losing energy to the lower state molecules, so the lower state molecules get a raised state.

        I can’t follow what you mean here.

        ” That molecule can lose that excess energy in two ways, either it can emit a photon of appropriate energy or it can lose that energy via collisions with neighboring molecules (predominantly N2 or O2).”

        This is correct.

        I’m glad you recognize that!

        ” In the lower troposphere collisional deactivation is favored because the mean time between collisions (less than a nanosec, or the time light takes to travel a foot!) is orders of magnitude shorter than the radiative lifetime of the excited state.”
        You need to enlighten me and others about deactivation, collision mean time and the radiative lifetime of the excited state. The radiative lifetime of the excited state depends on neighboring molecules. The mean time between collisions depends on the density of the gas and temperature. Can you elaborate to support your statement?

        OK, here goes:
        mean time between collisions is just what it says and can be readily calculated from the kinetic theory of gases, there are on-line calculators that will do it for you,
        e.g. http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/frecol.html#c1

        For the typical lower atmospheric conditions a molecule will experience a collision with another molecule every 0.2 nanosec.

        deactivation: the loss by an excited molecule of its excess energy.

        radiative lifetime of the excited state: the average time for an excited state to emit a photon; depends on the QM of the excited state etc. For CO2 in the 15μm band it is typically in the μsec to msec range.

        ” So there the energy transferred to the CO2 from the surface is distributed to the rest of the atmosphere thereby heating it up.”
        I think you got it wrong here. How does CO2 distribute energy to the atmosphere and heat it (the atmosphere) up? CO2 can only heat the atmosphere up when its temperature is higher, and only by collision, as N2 and O2 cannot absorb IR radiation from the CO2 per AGW spectroscopy theory.

        When a CO2 molecule which is in equilibrium with its surroundings absorbs a 15μm photon, its energy is increased by ~8 kJ/mole. It is that excess energy that it loses to the surrounding molecules; its ‘vibrational temperature’ is higher than its neighbors’.
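The 0.2 ns figure quoted above can be reproduced from the kinetic theory of gases (the same calculation the linked hyperphysics calculator performs; the molecular diameter and mass below are approximate N2 values):

```python
import math

# Mean time between molecular collisions from kinetic theory, for N2-like
# air near the surface (the calculation behind the hyperphysics link above;
# diameter and mass are approximate N2 values).
k = 1.380649e-23     # Boltzmann constant, J/K
T = 288.0            # K, illustrative surface temperature
P = 101325.0         # Pa
d = 3.7e-10          # m, effective N2 molecular diameter (approximate)
m = 4.65e-26         # kg, mass of one N2 molecule

n = P / (k * T)                                  # number density, ~2.5e25 m^-3
v_mean = math.sqrt(8 * k * T / (math.pi * m))    # mean speed, ~470 m/s
coll_freq = math.sqrt(2) * math.pi * d ** 2 * n * v_mean
tau = 1.0 / coll_freq

print(tau)   # ~1.4e-10 s, consistent with "a collision every 0.2 nanosec"
```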

        When a CO2 molecule which is in equilibrium with its surroundings absorbs a 15μm photon, its energy is increased by ~8 kJ/mole. It is that excess energy that it loses to the surrounding molecules; its ‘vibrational temperature’ is higher than its neighbors’.

        The effect of IR radiation at the absorption peak of 15 um is very similar to convection. CO2 in warmer air emits a little more than CO2 in colder air. Thus there is a weak net energy transfer from the warmer region to the colder one.

        The energy levels corresponding to 15 um radiation have an occupancy of some 3% of the ground state. This means that a large number of CO2 molecules is always in the excited state. When one molecule absorbs a 15 um photon that number grows by one, i.e. on the average just a little bit over the local thermal equilibrium (LTE) value. The LTE level of occupancy is reached again in nanoseconds, when the number of de-excitations exceeds that of excitations by one (this applies to the average, real values vary randomly around the average).
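Pekka’s “occupancy of some 3%” is just the Boltzmann factor for the 15 μm excitation energy at a surface-like temperature; a quick check (288 K assumed for illustration):

```python
import math

# Boltzmann factor for the 15 um CO2 bending-mode excitation at a
# surface-like 288 K: the "occupancy of some 3%" quoted above.
h = 6.62607015e-34   # J s
c = 2.99792458e8     # m/s
k = 1.380649e-23     # J/K

E = h * c / 15e-6                 # excitation energy, ~1.3e-20 J
T = 288.0
occupancy = math.exp(-E / (k * T))
print(occupancy)                  # ~0.036: a few percent of the ground state
```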

      • “The effect of IR radiation at the absorption peak of 15 um is very similar to convection.”
        Not necessarily a weak transfer. But on scales large relative to the photon path length, it behaves like conduction. It’s called Rosseland heat transfer (related to Rosseland opacity).

      • RE: Nick Stokes – that is a good point. While Pekka is right about the convection, the convection is conduction/radiation induced. Kinda hard to separate which is more in control near the surface.

      • That was a typo in my message. I was thinking about conduction, not convection. (I have written on the same point before using the right word.)

        The radiation from the surface has a bit different role. It forms the largest component of energy transfer between the surface and the atmosphere. At wavelengths with a short mean free path it keeps the temperatures of the surface and the lowest atmosphere close to each other; at other wavelengths it provides a weaker energy transfer link with the higher and colder atmosphere and with the open space, leading to significant net cooling of the surface.

        Convection is very important within the troposphere, dominating over both conduction and Rosseland heat transfer and compensating automatically for the variations in these weaker mechanisms of intertropospheric energy transfer.

      • Phil,

        “No this is a fundamental error on your part, em radiation doesn’t have a temperature and CO2 most certainly does absorb 15μm radiation. In any case you are still mistaking the S-B law with the 2nd Law as pointed out before.”
        I was not using the strict word ‘temperature’, which should be ‘energy’, in my statements above. When you consider 15um, it’s wave theory. Wave theory must obey wave properties. Weaker energy state waves will be deflected or absorbed by e-m waves of a stronger energy state.

        “When a CO2 molecule which is in equilibrium with its surroundings absorbs a 15μm its energy is increased by ~8kJ/mole.” So the surroundings cool off by ~8kJ/mole!

        “It is that excess energy that it loses to the surrounding molecules, its ‘vibrational temperature’ is higher than its neighbors’.” Is it not circular? The ‘vibrational temperature’ is higher after losing excess energy? You have confused yourself very much.

      • Lower energy photons are not deflected or absorbed by higher energy photons. Electromagnetic radiation from one source interacts so weakly with electromagnetic radiation from other sources that the interaction can be safely disregarded. Interaction of that type is significant for lasers but not for other forms of IR radiation.

        Absorption and emission of radiation is going on all the time. That is a part of the local thermal equilibrium. No single event can be described in terms of change in temperature, as temperature is defined only for the local thermal equilibrium of a system with of many particles (or for the theoretical concept of ensemble, which is a collection of many similarly prepared systems).

      • Pekka,

        “Lower energy photons are not deflected or absorbed by higher energy photons. Electromagnetic radiation from one source interacts so weakly with electromagnetic radiation from other sources that the interaction can be safely disregarded. Interaction of that type is significant for lasers but not for other forms of IR radiation.”

        You have not substantiated your statements.

      • The strength of those effects is calculated by the theory of quantum electrodynamics, i.e. by the theory developed by Feynman and others and verified with better accuracy than any other physical theory.

        I’m not going deeper to that, but that’s a direction, where you can search further confirmation.

      • So CO2 does not have to obey wave properties when viewed as e-m waves, and CO2 does not have to follow particle properties when viewed as a photon. We have an odd gas that complies with neither thermodynamics nor the radiation laws. It also generates perpetual photons without replenishing energy from the surroundings and from the Earth’s surface. It’s a magical gas.

      • Your comment doesn’t make any sense at all. Thus it’s not possible to comment on that any further.

        It’s not possible to learn physics through random guesses. There is an infinity of possible errors, but only one true physics.

      • So CO2 does not have to obey wave properties when viewed as e-m waves and CO2 does not have to follow particle properties when viewed as a photon.

        Do you know any physics?
        CO2 has a large mass so the particle-wave duality is unimportant. CO2 is a distinguishable particle and not a boson, so can’t be confused with a photon.

        We have an odd gas that complies neither with thermodynamics nor with radiation laws. It also generates perpetual photons without replenishing energy from the surroundings and from the Earth’s surface. It’s a magical gas.

        Gas thermodynamics follows from statistical mechanics. Radiation laws come about from a mix of quantum mechanics and statistical mechanics. Energy arrives externally from the sun. The free energy is dissipated as photons and disorder (entropy) to the surroundings.

        This is obviously a game to you, in that you directly contradict everything known as factual.
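The claim above that wave–particle duality is unimportant for a molecule as heavy as CO2 can be checked with a rough thermal de Broglie estimate. A minimal sketch (the constants are standard; the number density is a round sea-level figure):

```python
import math

# Rough check: the thermal de Broglie wavelength of a CO2 molecule at
# surface temperature is far smaller than the mean intermolecular
# spacing in air, so quantum wave effects are negligible.
h = 6.626e-34         # Planck constant, J s
k = 1.381e-23         # Boltzmann constant, J/K
m = 44.0 * 1.661e-27  # mass of CO2 (44 u), kg
T = 288.0             # typical surface temperature, K

# Thermal de Broglie wavelength: lambda = h / sqrt(2 pi m k T)
lam = h / math.sqrt(2 * math.pi * m * k * T)
print(f"de Broglie wavelength: {lam:.2e} m")

# Mean intermolecular spacing at sea level (~2.5e25 molecules per m^3)
spacing = (1.0 / 2.5e25) ** (1.0 / 3.0)
print(f"intermolecular spacing: {spacing:.2e} m")
```

The wavelength comes out around 10^-11 m, two orders of magnitude below the molecular spacing, which is the quantitative content of the "large mass" remark.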

      • This is obviously a game to you, in that you directly contradict everything known as factual.

        Sometimes the errors are indeed so extreme that I wonder whether they are due to ignorance or made on purpose by someone who knows very well how totally they contradict well-known physics.

      • The problem is that the skeptics can’t or won’t police their own minions. They actually don’t care if the discussion goes completely haywire, because they don’t want to see any intellectual progress being made.

        Pekka, I am really behind you on your ability to reason deeply about these topics, but you can’t give these guys an inch or they will take a yard. They are deliberately obfuscatory and will twist any true meaning you try to give them. From a game theory perspective, it is utterly predictable to watch it unfold but that is only because the strategy is textbook. Challenge, misdirect, repeat.

      • WHT,

        It’s irrelevant what those few think, who never admit anything. There are, however, two reasons to react to their messages at all:

        – Sometimes there is a possibility that others would take them seriously. I doubt this happens often on this site. They are already known to practically everybody.

        – Sometimes the errors provide an opportunity to discuss a point that has some interest in its own right. This is the better reason for reacting.

      • Webby,

        “Do you know any physics?
        CO2 has a large mass so the particle-wave duality is unimportant. CO2 is a distinguishable particle and not a boson, so can’t be confused with a photon.”

        Unfortunately, you are the one who contradicts both particle theory and wave theory, as well as thermodynamics and radiation. You should relearn your high-school physics.

      • Pekka,

        Your magical CO2 gas will get you nowhere except in that consensus community, like James Hansen, K. Trenberth. LOL.

      • You should relearn your high-school physics.

        I guess I wasn’t gifted, I only took quantum much later.
        (I have a feeling the sarcasm will be lost)

      • Webby,

        “The problem is that the skeptics can’t or won’t police their own minions.”

        The problem for YOU is that you have a Consensus within which people promote many unphysical ideas that people like the Slayers, G&T, and others can refute. You cannot even all agree on what your Consensus IS, yet promote it as having a meaning.

        Sceptics and Deniers do not have a Consensus so have no need to police every whackjob like me that shows up.

        Give up your Consensus and try the Scientific Method.

        The first thing the Consensus should do is to actually come to an agreement on the SCIENCE you are trying to promote, what it means as to our climate and weather and make it clear to the Politicians, Media, and EACH OTHER!!!! You should NOT confuse the issue by getting into what is to be done about it. That is a separate discussion and a separate Consensus to be built!! Depending on fanboys to carry diluted and corrupted messages of the consensus has been much of the rest of the problem.

      • We don’t belong to a club. A bunch of us are making individual progress outside of any “teams”.

      • Almost got it right at last?

        I assume that 220K is a nominal brightness temperature? Not a relevant consideration.

        The Stefan–Boltzmann equation applies in any volume of the atmosphere in local thermodynamic equilibrium – as well as at the surface. All it means is that energy will be emitted from any volume of atmosphere – really in all directions but net up. So a warmer volume will emit proportionately more IR by the T^4 factor – but IR will also be absorbed more with more molecules of greenhouse gases.

        If we assume that a decent slug of CO2 is pushed into the atmosphere from burning fossil fuels – the atmosphere will not warm but immediately begin to cool as the gases are hot. This is a relatively minor cooling but sets the scene. All other things being equal – it will cool to the temperature consistent with radiative equilibrium at TOA but be warmer than otherwise without the gas.

        This is such a simple physical concept. It is not inconsistent with Tyndall effects but simply considers initial conditions.
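The T^4 scaling invoked in the comment above is easy to put to numbers. A minimal sketch using the standard Stefan–Boltzmann constant (the two temperatures are the usual round figures for the effective emission level and the surface, not measured values from this discussion):

```python
# Illustrative Stefan-Boltzmann scaling: black-body flux at the
# effective emission temperature (~255 K) versus the surface (~288 K).
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_flux(T):
    """Black-body emissive power sigma * T^4, in W/m^2."""
    return sigma * T ** 4

print(bb_flux(255.0))  # roughly the outgoing flux seen at TOA
print(bb_flux(288.0))  # roughly the surface emission
```

A 33 K difference in temperature translates into roughly 150 W/m^2 more emission, which is why "a warmer volume will emit proportionately more IR by the T^4 factor" matters quantitatively.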

      • Your dishonesty is ridiculous and obvious.

        “If we assume that a decent slug of CO2 is pushed into the atmosphere from burning fossil fuels – the atmosphere will not warm but immediately begin to cool as the gases are hot.”

        The exhaust gases will be cooled by the atmosphere, but that is not relevant.

        The atmosphere will be warmed by the exhaust gases, by a trivial amount, but still, the effect on the atmospheric temperature is warming, and you are lying again.

      • You have just misunderstood and accuse me of being a liar – it is a little tedious and silly.

        Do you expect a discourse – or do you simply want to swap insults? You’re a worm who doesn’t deserve an explanation.

      • You lied. I have understood exactly what you said.

        If we assume that a decent slug of CO2 is pushed into the atmosphere from burning fossil fuels – the atmosphere will not warm but immediately begin to cool as the gases are hot.

        You lied, or you’re such an idiot that you believe the atmosphere is immediately cooled by exhaust gases when the opposite is obviously true.

        If you prefer to be called “idiot” than “liar” I can oblige, but you still haven’t ruled out the very strong probability that you’re both.

      • I think he may be trying to distract or misdirect the discussion. What he often states are tautologies; consider his statement that “the atmosphere will not warm but immediately begin to cool as the gases are hot.” Semantically, the gas immediately becomes part of the atmosphere when it enters it. It then cools as the heat energy disperses out of the system. This is stuff for masterdebaters and language lawyers and really has no role in advancing an argument. It is just stated to stifle discussion and create impediments that distract from the truly relevant statistical physics we need to address.

      • This is such an idiotic conversation – I can’t believe it is still going on.

        OK – take a step back. Emit the gases and the heat content of the atmosphere increases because the gases are hot. The atmosphere then starts to cool.

        I had indicated that the gases were already in the atmosphere – i.e. that it was an initial condition. It arose in the context of the planet looking cooler from space. I wondered if at any time the atmosphere was cooler and warming because of photon absorption by extra CO2 molecules.

        A warmer atmosphere can’t look cooler from space.

        But you guys definitely look dumber.

      • A warmer atmosphere can’t look cooler from space.

        Another tautology. I don’t understand why these appear out of nowhere.

      • Webby,

        You seem an expert in tautology – so I’ll take your word for it. But really, saying the same thing over and over again – and occasionally descending to snark – is just trolling and deserves to be ignored.

        I must admit – for quite a while I have been wondering how long this obsession with my every word will persist. Obviously it seems much longer than my interest in continuing this nonsense.

        Cheers

      • I don’t know how actually creating models and then running some simulations of CO2 residence time on a thread dedicated to CO2 residence time constitutes trolling.

        It is you that hijacks and then derails the discussion by starting to talk about temperature effects. Get on the topic or jump to one dedicated to where your interests lie. It evidently isn’t on the carbon cycle.

      • CH, you keep talking about the hot emitted CO2 warming the air. You say it’s minor, but have you calculated how minor?
        Total solar received by Earth is 174 petawatts (PW). The extra forcing from manmade GHG is about 1 PW. Total human power consumption is about 15 TW. That’s a fair estimate of direct heating of the air.

      • The heat capacity of the atmosphere is about 5 × 10^21 Joules per Kelvin.

        The heat output of 15 TW over a year is about 5 × 10^20 Joules.

        0.1 K, assuming all the heat stays in the atmosphere???

      • OK so the heat produced by FF combustion is in addition to the greenhouse gas effect. No problem if you consider the two as additive. The former is relatively small but if that’s the way it is, fine.

      • Well, 1 PW from GHG forcing is about 6.7 K per year, assuming all the heat stays in the atmosphere.
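The back-of-envelope numbers in the last few comments can be checked in a couple of lines. A sketch, taking the atmosphere's heat capacity as the 5 × 10^21 J/K figure used upthread (all values rough):

```python
# Quick check of the warming rates quoted above, under the (unrealistic)
# assumption that all the heat stays in the atmosphere.
SECONDS_PER_YEAR = 3.156e7
C_ATM = 5.0e21  # heat capacity of the atmosphere, J/K (figure used upthread)

def warming_per_year(power_watts):
    """Temperature rise per year if the given power heated only the atmosphere."""
    return power_watts * SECONDS_PER_YEAR / C_ATM

print(warming_per_year(15e12))  # direct human power use (~15 TW): ~0.1 K/yr
print(warming_per_year(1e15))   # ~1 PW of GHG forcing: ~6 K/yr
```

This reproduces the ~0.1 K figure for direct waste heat and something close to the ~6.7 K figure for 1 PW (the exact number depends on the year length and heat capacity assumed), confirming that direct heating is small compared with the forcing.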

      • Almost got it right at last?

        I’m afraid not.

        I assume that 220K is a nominal brightness temperature? Not a relevant consideration.

        No, it’s the temperature of the CO2 that can be seen emitting in the 15 μm band when viewed from space.

  75. Chief Hydrologist | September 1, 2011 at 6:28 pm |
    You were rude – continue to be condescending – and wrongly insist that planetary emissions are not temperature dependent?

    Bye

    You get worse, you don’t understand the basics of the subject and continue to pout when someone who does puts you straight. Despite the source of the temperature dependence being explained to you, you still fail to understand! To put it plainly the emission in the CO2 band has absolutely nothing to do with whether the CO2 molecule was created by combustion or by say photosynthesis. A CO2 molecule has the potential to emit in the IR when it has been vibrationally excited (usually by absorption of IR), having been deactivated either by collision or by emission it can be re-excited again and again during its lifetime. The idea that you put forward, that the CO2 molecules’ ability to heat the atmosphere depended on the temperature of their creation is bizarre. You think it’s condescension for someone to point out your fundamental errors and misunderstandings to you, well that’s just tough.
    By the way on that note, suppose that someone came on here and posted something in your area of expertise (presumably Hydrology) that was fundamentally wrong and didn’t even meet college freshman level of understanding, how would you respond?

    • By the way on that note, suppose that someone came on here and posted something in your area of expertise (presumably Hydrology) that was fundamentally wrong and didn’t even meet college freshman level of understanding, how would you respond?

      I did just that early on in this thread. I challenged him on his understanding of what a breakthrough curve is, which is meat&potatoes stuff to a hydrologist. He responded:

      ‘Breakthrough Curve: A plot of column effluent concentration over time. In the field, monitoring a well produces a breakthrough curve for a column from a source to the well screen.’ It generally has a Gaussian normal distribution. They can be used for instance in tracing pollution sources in groundwater. Groundwater movement is commonly modelled using partial differential equations that conserve mass across finite grids.

      This has no application to the problem at all – there is not even a fat tail in any of that.

      As you can see, he does know what a breakthrough curve is and said that it is modeled using PDEs and a mesh. And he said there was not a fat tail in the solution. Yet there is a fat tail with respect to time, and I showed it by applying a compartment model to solve the Fokker-Planck equation, which is just what he describes with his PDE and mesh approach!

      CH may now accuse me of being condescending, but I can accept that someone like him hasn’t run into this explanation and it is not necessarily his fault. My point is that one can run across problems that span scientific domains; as an undergrad I spent time as a resident science and engineering tutor for all the dorms on campus. When someone came in with a question on math, physics, chemistry, or any engineering discipline at the school, I had to become a quick learner and apply whatever analogies I could. I really don’t see why fundamental science problem solving can’t help with what we are talking about here.
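The fat tail claimed in the comment above can be illustrated with a toy compartment chain. This is a sketch of the general idea only (my own toy model, not the commenter's actual code): an impulse of tracer into the first of a chain of exchanging boxes, which is a crude discretization of a diffusion / Fokker-Planck problem, decays far more slowly at long times than a single exponential with the same initial decay rate.

```python
import math

# Toy compartment chain: box 0 plays the "atmosphere", the rest a
# layered reservoir. Nearest-neighbour exchange discretizes diffusion.
N = 20       # number of boxes
k = 0.5      # exchange rate between neighbouring boxes, 1/yr
dt = 0.01    # explicit Euler time step, yr

boxes = [0.0] * N
boxes[0] = 1.0  # unit impulse into box 0

def step(b):
    """One explicit Euler step of nearest-neighbour exchange (mass conserving)."""
    flux = [k * (b[i] - b[i + 1]) for i in range(len(b) - 1)]
    nb = b[:]
    for i, f in enumerate(flux):
        nb[i] -= f * dt
        nb[i + 1] += f * dt
    return nb

t_end = 50.0
for _ in range(int(t_end / dt)):
    boxes = step(boxes)

single_exp = math.exp(-k * t_end)  # pure exponential with the initial decay rate
print(boxes[0], single_exp)        # the diffusive tail dwarfs the exponential
```

At t = 50 the diffusive model still holds roughly a tenth of the impulse in box 0, while the single exponential has decayed to essentially nothing; that slow power-law-like decay is the "fat tail" under discussion.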

      • Webby,

        I would never accuse you of being condescending – obsessive and eccentric perhaps.

        You insist on simplifying a non-linear problem – because the solution to the real and non-linear problem is not simple and tractable math. You solve the problem you can solve instead of the real problem.

        You reject chaos – because you can’t solve it. But reality is reality regardless.

      • I am glad to hear I am not being condescending. I want to be able to simplify this part of the problem as much as I can. If that is what it takes to make it tractable, that’s fine by me. Engineers deal with abstractions all the time when they design things.

        Upthread Gavin Cawley reported that he had a paper conditionally accepted modeling CO2 “adjustment time” (which is the correct term for the general impulse response, not residence time). I would like to see how he approached the problem, beyond his saying that it is a one-box model.

        Interesting also that M stated that IPCC acknowledged that the residence time is around 5 years, but others have distorted this to imply that they don’t know what they were doing.

        This is really not a linear vs non-linear argument, but it is about using statistical physics arguments to home in on what is actually happening. Once we start to get closer, then we can knock ourselves out with whatever third-order effects we want to invoke.

        I am solving a multi-box model, one that might be in the same spirit as Cawley’s one-box model but I am not sure how he gets diffusion to pop out of a one-box model. Perhaps he adds the second-order partial derivative to model divergence on the single box. If that is the case then he is likely solving Fokker-Planck in a single box, which is kind of neat!

        That gives me an idea that I can work on, thanks to all for needling me.
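The residence-time versus adjustment-time distinction raised in the comments above can be sketched with round illustrative numbers (the reservoir sizes and rates below are illustrative assumptions for this sketch, not anyone's published values):

```python
# "Residence time" = average time one molecule stays in the atmosphere,
# set by the large gross exchange fluxes. "Adjustment time" = decay time
# of a concentration *anomaly*, set by the much smaller net uptake.
C = 750.0           # atmospheric carbon, GtC (round figure)
gross_flux = 150.0  # gross exchange with ocean + biosphere, GtC/yr (round figure)
net_uptake_rate = 1.0 / 100.0  # anomaly removal rate, 1/yr (illustrative ~100 yr)

residence_time = C / gross_flux
print(residence_time)  # 5.0 yr: individual molecules swap reservoirs quickly

anomaly = 100.0  # GtC perturbation above equilibrium
for _ in range(10):  # ten 1-yr steps of first-order net uptake
    anomaly -= net_uptake_rate * anomaly
print(anomaly)  # ~90 GtC still airborne after 10 yr, despite a 5-yr residence time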

      • Knock yourself out.

    • My handle derives from Cecil (he spent 4 years in clown school – I’ll thank you not to refer to Princeton like that) Terwilliger – Springfield’s Chief Hydrological and Hydraulical Engineer. Although I am trained in hydrology and environmental science.

      Nothing I have said is not standard theory – yet you insist on taking a minor point and assuming that I am saying something other than what I clearly said. I don’t want to go back over simple atmospheric physics.

      What I did say is that the gases created in fossil fuel combustion are as hot as they are ever going to be. That should be fairly obvious even to you. They cool off by collision with other molecules and by emission of photons. The gases cool to the same temperature as the surrounding air – and of course continue to be opaque to photons in the IR frequency. There is no point in time when the planet looks cooler from space. The atmosphere doesn’t become warmer by adding these greenhouse gases – it is already warmer, and of course remains warmer than it would otherwise be through absorbing and colliding with IR photons.

      You are fundamentally wrong in considering only spectral absorption and not energy dynamics.

    • A CO2 molecule has the potential to emit in the IR when it has been vibrationally excited (usually by absorption of IR)

      In the lower atmosphere some 99.99% of IR emission is from CO2 molecules that have been excited by collisions, not by absorption of IR.

      Otherwise I think I agree with your message, although I’m not certain that I have understood correctly what the disagreement is about. Thus I may agree more with Rob (CH) than I think, based on my present understanding of his somewhat obscure comments.

      • A lot of this seems to be related to being separated by a common language. :)

      • Dallas,

        You meant lack of consensus! HaHaHa.

      • Hi Sam,

        There is no real disagreement on the fundamentals of greenhouse gas radiative physics – only of initial conditions. The tone and language from Phil is unfortunate – and doesn’t foster a considered discourse. Similar, I felt, to some of the responses to you.

        The problems lie elsewhere. Primarily in the non-linearity of climate. See here for instance – http://judithcurry.com/2011/02/03/nonlinearities-feedbacks-and-critical-thresholds/

        Climate is a system involving competing negative and positive feedbacks and abrupt shifts between states forced by little known threshold changes. To understand this needs some background in chaos theory.

        There is a related issue in what the nature is of these feedbacks. One of these is cloud feedback to changes in sea surface temperature (SST) in the Pacific. http://judithcurry.com/2011/02/09/decadal-variability-of-clouds/

        There is a question of whether the feedback is physical – a response to SST – or chemical – a response to dimethyl sulphide emissions from phytoplankton.

        Cheers

      • Hi Robert,

        Thanks for the links – especially the 2nd one widened my knowledge about the ocean and clouds. Climate is indeed very complicated, or as you call it, nonlinear or even chaotic.

        Each factor involved in climate change may be interpreted differently by different individuals, and hence lead to different conclusions. The GHE is one of them, natural variability another, CO2 residence time yet another, and so on. All the misinterpretations and misinformation arising from individual understanding will be cleared up, exposed and uncovered by the physics we have had for hundreds of years – real, proven science with respect to thermodynamics, radiation laws, chemical properties, and the properties of materials and gases. Any statement must comply with fundamental physics.

        “Climate is a system involving competing negative and positive feedbacks and abrupt shifts between states forced by little known threshold changes. To understand this needs some background in chaos theory.” All feedbacks must be supported with correct energy flow.

        “There is a question of whether the feedback is physical – a response to SST – or chemical – a response to dimethyl sulphide emissions from phytoplankton.” Yes, it is very true that the feedbacks identified so far are physical. My intuition is that ocean flow patterns and SST have a lot of influence on local weather. Clouds also play a major role in weather as well as climate. You may like to add a lot more so that I can learn from you, especially the hydrological aspects that affect climate. A response to dimethyl sulfide emissions from phytoplankton sounds very interesting – please enlighten us.

      • A response to dimethyl sulfide emissions from phytoplankton, sounds very interesting and please enlighten us.

        Dimethyl sulfide (DMS) is a pretty volatile molecule which has a residence time of less than one day. How far is that going to get in the atmosphere?

        What is strange about the arguments is that the dedicated climate scientists offer up one general theory that looks fairly straightforward, that of CO2 emissions. The skeptics jump on it and say that it is too complicated and that non-linear effects are not taken into account. This is not enough to derail the process, so the skeptics obviously need to come up with their own counter-theory. After thinking a while, the skeptics will then offer up even more complicated scenarios to counter the mainstream thought. One of these is the DMS scenario. There goes the “too complicated” argument, as they just shot themselves in the foot.

      • Oh please – sulphide in the atmosphere has a residence time of hours to days and yet is a significant factor in global climate. It is not volatile as such but rains out as hydrogen sulphide. It persists for long enough to form cloud nucleation sites over the oceans – the CLAW hypothesis that has been discussed for decades.

        Climate is not just complex but dynamically complex – chaotic spatially as well as temporally with all that implies for sensitive dependence and abrupt and non-linear change.

        ‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.

        Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.

        The new paradigm of an abruptly changing climatic system has been well established by research over the last decade, but this new thinking is little known and scarcely appreciated in the wider community of natural and social scientists and policy-makers.’ National Academy of Sciences – Committee on Abrupt Climate Change – ‘Abrupt Climate Change: Inevitable Surprises

        I quote this quite often – because most people still fail to understand the implications. You have a background in math but very little knowledge of the detail of physical climate or environmental systems.

      • The new paradigm of an abruptly changing climatic system has been well established by research over the last decade, but this new thinking is little known and scarcely appreciated in the wider community of natural and social scientists and policy-makers.’ National Academy of Sciences – Committee on Abrupt Climate Change – ‘Abrupt Climate Change: Inevitable Surprises

        I quote this quite often – because most people still fail to understand the implications. You have a background in math but very little knowledge of the detail of physical climate or environmental systems.

        So as I understand it, your basic premise is that any climate change can be on us at any moment and it will be unpredictable, so that there is no practical benefit of modeling anything. Further, since it is all non-linear dynamics, whereby the coefficients of the equations can change at any time, there is little value to solve these equations, both because we can’t and if we could they would change anyways, so what’s the point?

        So I looked this document up and you left out the next sentence:

        At present, there is no plan for improving our understanding of the issue, no research priorities have been identified, and no policy-making body is addressing the many concerns raised by the potential for abrupt climate change.

        Is applying chaos theory your basic plan to understanding the issue?

      • ‘So as I understand it, your basic premise is that any climate change can be on us at any moment and it will be unpredictable, so that there is no practical benefit of modeling anything. Further, since it is all non-linear dynamics, whereby the coefficients of the equations can change at any time, there is little value to solve these equations, both because we can’t and if we could they would change anyways, so what’s the point?’

        Pretty much. Would you prefer to pretend we have a different problem and solve that one?

        There are limited mathematical approaches – an increase in autocorrelation leading to noisy bifurcation. The Navier-Stokes equations used in atmospheric and ocean simulations are of course non-linear – but it is not the same non-linear system as climate.

        Here’s another approach – oddly enough using Fokker-Planck and probability – but expressing the result as a probability density function.

        ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space. Prognostic equations for ρ, the Liouville and Fokker-Planck equation are described by Ehrendorfer (this volume). In practice these equations are solved by ensemble techniques, as described in Buizza (this volume).’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)

        ‘On the other hand, our examples lead to an inevitable conclusion: since the climate system is complex, occasionally chaotic, dominated by abrupt changes and driven by competing feedbacks with largely unknown thresholds, climate prediction is difficult, if not impracticable.’
        http://www.biology.duke.edu/upe302/pdf%20files/jfr_nonlinear.pdf

        You can find some practical scientific and policy objectives in the last link. You can find some pragmatic policies here – http://thebreakthrough.org/blog/2011/07/climate_pragmatism_innovation.shtml

        Do try to keep up.

      • Here’s another approach – oddly enough using Fokker-Planck and probability – but expressing the result as a probability density function.

        The solution to the Fokker-Planck is always a probability density function. It has to be a probability density function because diffusion is a random walk. It is easy to be confused by this because the physical manifestation of a PDF is a concentration profile. The concentration is a probability because it gives the probability of having that number of molecules normalized against the total.

        The PDF equivalency is why the Fokker-Planck equation is so universal and why I can apply it in many different applications without having to learn each domain. If you want to understand this in more detail for many practical applications, including many in your field of hydrology, you can read my book (located in the link on my comment handle).

        As to the rest of the comment on questioning the use of PDF’s, the missing piece is that the forecasters are starting to use this approach. Weather is not climate but weather is more volatile at shorter time scales. The AMS has been suggesting that weather forecasting use more probability-based forecasting as part of their toolkit. From last year:
        http://journals.ametsoc.org/doi/pdf/10.1175/2011BAMS3212.1

        NATIONAL UNIFIED OPERATIONAL PREDICTION CAPABILITY INITIATIVE

        Critical Research Gaps
        The list of research and development needs is long. … That said, there are clear areas from the operational perspective that require additional (or new) investment and are considered critical to meeting mission needs. It will be important to train forecasters in probability-based forecasting, and to train decision makers such as emergency managers in using these ensemble probability-based forecasts in their decision-making process.

        You seem to be antagonistic against certain analytical mindsets, kind of like a concern troll. The attitude is like “Better make sure no progress is made because I think the math is too hard”. There is only one place that I will not go and that is game theory when applied to economic decision making. IMO, the rest of the physical world is completely open to analysis because it does follow physical laws and the ensemble effects will follow the laws of entropy and free energy above all else. I think one can reason about this stuff contrary to what you deem is acceptable.
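The statement above that the solution of the Fokker-Planck equation is always a probability density function can be checked numerically for the pure-diffusion case, whose Green's function is a spreading Gaussian:

```python
import math

# Numeric check: the pure-diffusion Fokker-Planck solution,
# p(x,t) = exp(-x^2/(4*D*t)) / sqrt(4*pi*D*t), integrates to 1.
D, t = 0.5, 2.0  # diffusion coefficient and time (arbitrary values)

def p(x):
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

# Riemann sum over a wide interval approximates the integral of p
dx = 0.01
total = sum(p(-20.0 + i * dx) for i in range(int(40.0 / dx))) * dx
print(total)  # ~1.0
```

Normalization is preserved at every time t, which is the mathematical sense in which a spreading concentration profile remains a probability density.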

      • Webby,

        Go read the book instead of making it up.

        Cheers

      • Go read the book instead of making it up.

        Regarding the equivalence of the Fokker-Planck to a probability density formulation?

        The thing that you think I am making up is actually a premise I am working from. My premise is that if you assume that disorder plays a role in some physical behavior, there are specific analysis approaches that you can apply to make progress in understanding a problem.

        I agree with you 100% that chaos, bifurcation, critical phenomena, complex systems, etc. are difficult problems. That the math is difficult (which I agree with) is a canard, because the math of any combinatorial problem will grow uncontrollably in scale. My point is that complexity can be reduced if you can apply an appropriate analysis. It doesn’t take a lot of reading to figure out that this approach has potential. I have used probability arguments for years, but there are motivational essays that you can read to get yourself excited about the possibilities of information theory: Murray Gell-Mann’s “The Quark and the Jaguar” is a good one to read. Gell-Mann co-founded the Santa Fe Institute, which is THE place for studies on complexity. I apply ideas from Santa Fe routinely, see a blog post I made last year:
        http://mobjectivist.blogspot.com/2010/10/bird-surveys.html

      • There is no real disagreement on the fundamentals of greenhouse gas radiative physics – only of initial conditions. The tone and language from Phil is unfortunate – and doesn’t foster a considered discourse. Similar, I felt, to some of the responses to you.

        Actually there is a real disagreement: on important fundamentals of atmospheric physics you have it dead wrong, a fact you are unwilling to face!

      • I can’t help you get over your error Phil about brightness temperature meaning that the planet ‘looks cooler from space’.

        A warmer planet can’t possibly look cooler from space. You start with an insulting rant and finish with a meaningless comment. There is no point in further discussion with you – why don’t we let people decide for themselves.

      • Rob,
        After reaching a new stationary state the Earth as a whole will look from space to have the same temperature as before, if the solar irradiance and the albedo are the same, but during a warming period the Earth emits less energy than it receives from the sun and looks cooler from space.

        All these comments refer to the equivalent black body temperature or, equivalently, to the total IR emitted by Earth to space, not to any more detailed properties of the radiative energy spectrum.

      • Hi Pekka,

        You are assuming that CO2 enters the atmosphere and is then warmed by photon absorption. Gases from fossil fuel burning enter the atmosphere in an excited state. It seems to me that the atmosphere cools to the stationary state rather than warms. The atmosphere as a whole will remain warmer as a result of more greenhouse molecules.

        The spectrally resolved brightness temperature – which is obtained with the inverse of the Planck function (see I have learned something) – shows IR absorption in specific bands associated with specific molecules. It is not relevant to radiative equilibrium at TOA – which is mediated by, inter alia, atmospheric composition and temperature.

        I don’t know why they are so offended by this? It is a simple physical idea based on initial conditions.

        Cheers

      • Webby,

        “…the missing piece is that the forecasters are starting to use this approach.”

        Would this be part of the reason the forecasts appear to be getting WORSE?

      • Hi Pekka,

        It is the insistence that the planet appears to be cooler from space with increased greenhouse gases – which apparently demonstrates my utter ignorance. It assumes that the gases warm up as a result of interactions with IR – thus reducing IR emissions to space while the gases warm.

        But the initial state of gases from fossil fuel combustion is already highly energetic and the gases cool off in the atmosphere. Thus there is no point in time when the gases are cooler and warming. It is a simple physical concept emerging from consideration of initial conditions.

        It was part of a discussion with Sam NC – in an appropriate language for that discourse and Phil seems to have got the wrong end of the stick.

      • Gases of the atmosphere are normally very close to local thermal equilibrium. That applies to the velocity distributions, which are essentially Maxwell-Boltzmann distributions, but that applies also to rotational and vibrational modes in a way that’s deeply quantum mechanical, i.e. only certain discrete energy levels are possible and their occupancies are proportional to exp(-E/kT) where E is the energy of the state, k Boltzmann’s constant and T temperature. The local thermal equilibrium is maintained by very frequent collisional excitation and de-excitation of the discrete excited energy levels.

        The dependence on the energy of the state is the same exp(-E/kT) also for the kinetic energies of all types of molecules as the Maxwell-Boltzmann distribution leads to precisely that result.

        The IR radiation at strongly absorbed (and emitted) wavelengths is also in some sense in local thermal equilibrium with the gas as long as the mean free path is so short that the temperature is essentially the same over the whole path. This means that the rate of absorption and the rate of emission are very close to equal for the photons of those wavelengths.

        When hot flue gas enters the atmosphere, it’s soon mixed with a much larger mass of air. This whole mass is heated a little, but only very little. The CO2 molecules are continuously in almost perfect local thermal equilibrium with the other gases, and the vibrational states reach the occupancy of the local thermal equilibrium within milliseconds, i.e. they are all the time in local thermal equilibrium with all neighboring gases. The CO2 that just came from the stack is, from the point of view of IR emission and absorption, just like all other CO2. The higher temperature has its influence as long as the flue gases remain hotter, i.e. for a very short time only. The temperature dependence of IR emission from CO2 is given by Planck’s law, not the Stefan-Boltzmann law. This means that the radiative power grows more slowly than in proportion to T^4.
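        The exp(-E/kT) occupancies described above can be illustrated with a rough sketch. Treating the ~667 cm^-1 CO2 bending mode as a simple two-level system is my simplification for illustration (real CO2 has degenerate and higher vibrational levels):

```python
import math

# Boltzmann occupancy of the CO2 bending-mode vibrational level
# (~667 cm^-1), treated here as a simple two-level system.
H = 6.626e-34   # Planck constant, J s
C = 2.998e10    # speed of light in cm/s, to pair with cm^-1 wavenumbers
K = 1.381e-23   # Boltzmann constant, J/K

def excited_fraction(temp_k, wavenumber=667.0):
    """Fraction of molecules in the excited state at temperature temp_k."""
    energy = H * C * wavenumber            # level energy in joules
    boltzmann = math.exp(-energy / (K * temp_k))
    return boltzmann / (1.0 + boltzmann)   # two-level normalization

# Near the surface (~288 K) a few percent of the molecules are
# vibrationally excited at any instant, an occupancy maintained by
# the very frequent collisions described above.
print(excited_fraction(288.0))
```

        The point of the sketch is only that the occupancy depends on local temperature, not on where the molecule came from.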

      • The temperature dependence of IR emission from CO2 is given by Planck’s law, not Stefan-Boltzmann law. This means that the radiative power grows more slowly than in proportion to T^4.

        Yes, the T^4 dependence comes about strictly from integrating across a Planck distribution. Take a look at the comment I made above earlier today:
        http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-108303
        This contains a Wolfram Alpha integration that I can play with and get a feel for the temperature dependence.

        The smokestack and tail-pipe discussion is a huge red-herring.
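        The integration exercise mentioned above can also be reproduced numerically; this sketch (grid size and frequency cutoff are my own choices, not from the linked comment) integrates Planck’s law over frequency and recovers the Stefan-Boltzmann sigma*T^4:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_integral(temp_k, n=200_000, nu_max=1e15):
    """Right-endpoint Riemann sum of Planck's law over frequency."""
    dnu = nu_max / n
    total = 0.0
    for i in range(1, n + 1):
        nu = i * dnu
        radiance = 2 * math.pi * H * nu**3 / C**2 / math.expm1(H * nu / (K * temp_k))
        total += radiance * dnu
    return total  # W m^-2, hemispherically integrated

# The numerical integral tracks sigma*T^4: doubling T multiplies the
# emitted power by ~16, even though no single wavelength follows T^4.
print(planck_integral(288.0), SIGMA * 288.0**4)
```

        This is just the standard result that the T^4 law emerges only after integrating the Planck distribution across all wavelengths.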

      • WHT,

        We have agreed on that point all the time. You should see that when you read more carefully the message above, to which you gave the link.

        The human energy production is indeed very small (insignificant) on the global scale, but significant at the local level, where the local energy production warms both directly and through locally elevated GHG concentrations. That’s a part of the urban heat island effect.

      • Hi Pekka,

        I hope you are well. I usually make it a point to consider what you say carefully – and not to think I know better. Nonetheless – I do disagree.

        The Stefan-Boltzmann equation is often used at TOA to calculate an ‘effective temperature’ of about 254K. e.g. – http://www.atmos.ucla.edu/~liougst/Lecture/Lecture_3.pdf – as opposed to a temperature of some 34 degrees hotter at the surface because of the greenhouse effect. This assumes a radiative equilibrium at TOA.

        The CO2 is exactly the same as any other CO2 molecule. If one were to add cold CO2 – the molecules would interact with IR photons and warm to the local thermal equilibrium – plus a little bit more because of the extra molecules. Radiative flux up at TOA would fall until the planet warmed sufficiently to increase the radiative flux out to equilibrium with the incoming flux. The planet would have a lower ‘effective temperature’ while the warming took place.

        If hot gases are added – the molecules cool to the local thermal equilibrium – plus a little bit because of the additional molecules. The ‘effective temperature’ is a little higher for a while until the planet cools a little and energy equilibrium at TOA is restored.

        I think the 2nd case is self evident – and that is indeed implied in what you said. There is no challenge to radiative physics in my comment – simply an acknowledgement of the initial state of excitation of the molecules – as a result of which at no point in time is the ‘effective temperature’ of the Earth lower as a result of the addition of gases from fossil fuel combustion to the atmosphere.

        Cheers
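        As an aside, the ~254 K figure from the linked lecture notes is easy to reproduce; the solar constant and albedo below are standard round values I have assumed for illustration:

```python
# Effective blackbody temperature of Earth from the Stefan-Boltzmann
# relation run in reverse: Te = [S0 * (1 - albedo) / (4 * sigma)]^(1/4).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2 (assumed round value)
ALBEDO = 0.30      # planetary albedo (assumed round value)

effective_temp = (S0 * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25
print(effective_temp)  # ~254-255 K, roughly 34 K below the ~288 K surface
```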

      • Oh – and the most important point is that Phil and Webby are wrong.

      • Rob,

        As you seem to remember, I spent most of last week off from the net. (I was in Northern Finland (Lapland) hiking in the low mountains and nearly tundra like wilderness of the region.) A nice break from home.

        The Stefan-Boltzmann equation is often used in reverse to tell the effective blackbody temperature of the Earth, but that doesn’t imply anything on the validity of the equation for the Earth system, as the effective blackbody temperature is nothing more than the result of applying S-B equation in reverse. Defined in this way the equation cannot fail, whatever the physical situation.

        Adding CO2 to the atmosphere by burning fossil fuels will never cool the Earth even temporarily, but it might lower temporarily the artificial parameter called the effective blackbody temperature of the Earth.

      • Hi Pekka,

        Sounds wonderful.

        I gathered that about the SB equation. On the other hand the ‘effective temperature’ is just a shorthand for radiative flux.

        Cheers

      • Hi Rob,

        I gathered that about the SB equation. On the other hand the ‘effective temperature’ is just a shorthand for radiative flux.

        Right. It’s a measure of the radiative flux using the temperature scale to provide the unit for the value (with a conversion that involves the surface area, a universal constant and the fourth power or root).

        Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.

        I approach the modeling of the energy distribution of the gray body from a different perspective. My background is heavy on semiconductor physics, where we reason about behaviors based on manipulating the F-D distribution. The CO2 performs almost like an energy bandgap for certain wavelengths, and since transmission at these wavelengths is reduced, the quiescent gray-body distribution has to rearrange to balance the energy fluxes. How the wavelength distribution is balanced doesn’t matter; in semiconductor physics, we can say it’s caused by scattering by phonons and defects, and the lattice itself modulates the shape – all we know is that it does happen and the ergodic distribution will push toward maximum entropy.
        Reasoning about statistical mechanics is the same across disciplines. This discussion of initial thermal states is a distraction in the bigger picture.

        The bigger picture to me is getting at the significant effects, for example, just using the fundamental ideas to explain the temperature of Venus. To me that seems easy to convey. The significant molar fraction of CO2 on Venus essentially blocks out or modulates a huge chunk of the infrared emission spectrum. Yet the energy fluxes have to balance out, and thus the gray-body spectrum adjusts to allow the other wavelengths to emit. The Planck distribution says that the temperature of the gray-body will have to increase, so that is what it does, and on Venus this is enough to push it up hundreds of degrees from the steady-state temperature corresponding to a no-atmosphere situation. The initial state of some CO2 molecule has no effect as that happened eons ago and the steady-state effects are what matter.

        This gross kind of effect is undeniable, yet we see a lot of dancing around the topic and arguing over subtle effects. Certainly, these will have an effect if we are talking about tenth-of-a-degree warming differences, but I am personally interested in finding out different ways of reasoning about the system and thus inferring the outcomes from a different perspective.

        To me the initial state is a distraction, but to you it is some sort of conquest to demonstrate superior rhetorical skills.

      • The IR radiation at strongly absorbed (and emitted) wavelengths is also in some sense in local thermal equilibrium with the gas as long as the mean free path is so short that the temperature is essentially the same over the whole path. This means that the rate of absorption and the rate of emission are very close to equal for the photons of those wavelengths.

        Sorry Pekka, this is where you have it wrong for our atmosphere. Near the Earth’s surface CO2 is a strong absorber but it is not a strong emitter, because its radiative lifetime is so long (consider it as a Poisson process) compared with the very rapid collisional deactivation (we both agree that that is on the order of nanoseconds).
        So if you model using LTE you have to be aware that the rate of absorption is significantly greater than the rate of emission under those conditions.

      • Phil.,

        I’m sorry, but it’s not me who is wrong.

        The rate of emission depends only on the quantum mechanical coupling between the excited state and the ground state through emission and the number of molecules in the excited state. The coupling is the same as that leading to absorption, i.e. large (emissivity is equal to absorptivity). Thus the rate of emission is high and the CO2 is a strong emitter. It’s true that some 99.99% or so of the excited states are de-excited through collisions, but that doesn’t change the situation, because an equal number is brought to the excited state at the same time maintaining the rather high level of occupancy.

        To a high accuracy the number of emissions is equal to the number of absorptions and the number of collisional de-excitations is equal to the number of collisional excitations. The latter numbers are much larger than the former ones, but even the smaller ones are large on the scale of emission and absorption. They are larger than the rate of IR absorption and emission anywhere else in the atmosphere.

  76. maksimovich | September 2, 2011 at 2:52 am |:

    Re: Tommy Gold:
    “Another area where it is particularly bad is in the planetary sciences …”

    I notice that you have posted this quote in several places. I would appreciate knowing what the source of it is.

    Thanks in advance.

      Just google the expression. It’s from the Journal of Scientific Exploration, and it reads like Gold is whining about being a victim of the scientific establishment. This is what Gold had predicted:
      1. Steady State theory of cosmology
      2. “garbage theory” for the origin of life spread from waste dumped on Earth by aliens
      3. Moon was covered with a deep layer of soft dust thus preventing a lunar landing
      In baseball, three strikes and you are out. So when he started theorizing about abiotic oil, people stopped listening, even though he could talk your ears off. Apparently, he used little mathematics but relied on deep intuition, and in the case of abiotic oil was suspected of plagiarizing Russian scientists. It really doesn’t matter anymore, apart from as a curiosity piece and, to use Shermer’s phrase, “why do people believe weird things”.

      • Thanks, Web.

        Interesting read. I knew Gold, particularly through his work in planetary sciences. Although his ideas were not generally accepted, they were certainly not ignored (Tommy saw to that). I came to believe that he just enjoyed rattling the mainstream, and indeed, he was quite good at it.

        But I must admit that this piece surprised me; it does sound like sour grapes, and I would not have expected that from him.

      • Webby,

        #3, even NASA was worried about the depth of the dust on the moon. That PHYSICS thing so many people bow down to says that if the moon is billions of years old with all those thingies flying around and smashing into it and the differential expansion and contraction rates from the heating/cooling cycles without an atmosphere to MODERATE it… there will be large amounts of dust.

        There wasn’t. Velikovsky won another one.

  77. Chief Hydrologist | September 5, 2011 at 4:04 am |
    Oh – and the most important point is that Phil and Webby are wrong.

    Only in your dreams. By the way, you seem to think that flue gases are added to the atmosphere at the combustion temperature; maybe that’s how you do it in the outback, but in the rest of the world when fuel is used to generate electricity it’s mostly via the Rankine cycle, where the flue gas temperature is around 30ºC (preferably lower)!

  78. Pekka Pirilä | September 5, 2011 at 5:56 am |
    Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.

    Not according to our resident cartoon character, the Hydrologist, he believes that is impossible!

    • Your argumentation with him has been void of real content for a long time, if not from the beginning. Neither side is going to win such a game.

      • Oh sure one can win such a game. There is such a thing as objective truth through the application of science and models, and many people want to seek that to their own satisfaction.
        Instead I see a lot of distractions and diversionary tactics.

        Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.

        I believe in this model wholeheartedly, but the English I read on this site is almost purposely obfuscated and ambiguous to completely bury this truth. And then forget the English and just look at the math; I think it is fun to get into something substantial mathematically, but I am beginning to doubt anyone here will engage on that.

        So let us parse those statements:
        1. Improving radiative insulation makes a body radiate less than before.
        2. That leads to warming of the body until the incoming and outgoing radiation are equal again.

        #1 is molecular absorption scaled according to the Planck distribution function
        #2 is the ideal Planck redistributing itself to maximize entropy

        Handling mathematics is too cumbersome with the tools available here. I guess it was you who succeeded in getting one formula through, but failed another time.

      • Yes, that second equation I tried to verify with an online Latex checker here:
        http://www.forkosh.dreamhost.com/source_mathtex.html#webservice
        but the WordPress plugin couldn’t parse it. I have to find an official WordPress checker somewhere…

        I agree it is a pain though.

    • ‘Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.’ Pekka

      My original response to you was:

      ‘But there is a simple idea – all other things being equal the planet warms with additional CO2 until the energy equilibrium at TOA is restored. Energy in must equal energy out over the longer term.’

      The pointless personal attacks continue with deliberate misrepresentation.

      • ‘The emission by a GHG of IR occurs in the same wavelength band as it absorbs. In a strong absorption band the emission from the earth will be strongly absorbed and heats up the non-radiatively active gases like N2, O2 and Ar which constitute most of the atmosphere; it is only able to emit to space when it reaches the more rarified regions of the atmosphere. However, the emission is limited by the gas’s temperature via the S-B equation, so the emission to space in that band is less than the emission from the surface, hence the planet looks cooler from space.’ Phil

        This is the statement you made in your original and very rude post. You then realise your error and ascribe to me the complete reverse of what I actually said.

        ‘Pekka Pirilä | September 5, 2011 at 5:56 am |
        Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.’

        ‘Not according to our resident cartoon character, the Hydrologist, he believes that is impossible!’

        To remind you – what I said was:

        ‘But there is a simple idea – all other things being equal the planet warms with additional CO2 until the energy equilibrium at TOA is restored. Energy in must equal energy out over the longer term.’

        The dishonesty and bad faith is breathtaking. You’re a disgrace to civilised discourse.

      • To remind you, you said: “A warmer planet can’t possibly look cooler from space.” which directly contradicts what Pekka said:
        “Pekka Pirilä | September 5, 2011 at 5:56 am |
        Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.”

        As pointed out by Pekka you have not added anything to the discourse other than to post erroneous material and pout when this is pointed out.
        You have made no attempt to support any of your statements, and continue to think that the only contribution of CO2 to heating of the atmosphere is as a result of its production by combustion and not via absorption of radiation, which is complete nonsense. You have made no attempt to conduct a ‘civilised discourse’.

      • ‘But there is a simple idea – all other things being equal the planet warms with additional CO2 until the energy equilibrium at TOA is restored. Energy in must equal energy out over the longer term.’ me

        ‘Your argumentation with him has been void of real content for long, if not from the beginning. Neither side is going to win such a game.’ Pekka

        You are a liar and a fool.

      • Ahhh Chief,

        this has been a most amazing discussion with those ranked against you inverting their own logic.

        “‘Pekka Pirilä | September 5, 2011 at 5:56 am |
        Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.’ ”

        We are told that CO2 emits against the earth making it warmer than otherwise. If something is warmer it emits MORE, warming the “insulation” more…

        When does the BB ever emit less??

        Why are they making themselves look silly by attacking you??

        My only thought is they are getting too used to considering these instantaneous SLUGS of material that are unphysical and COULD make this change. That is, they are assuming such a large amount of material that there is an actual measurable decrease in the emissions as seen from TOA when this SLUG is introduced!!

        As you appropriately point out, in the real world the material will have temperature and emissions based on how it is introduced to the environment and will not be in such a massive amount at a cooler temp.

      • When does the BB ever emit less??

        Good question, kuhnkat. What’s important here is to distinguish surface temperature from OLR. When CO2 increases it blocks more BB radiation, so that the planet looks cooler from space for the time being. The surface then heats up, pushing more BB radiation through what remains of the atmospheric window until equilibrium is restored, at which point the planet looks warmer from space again.

        What changes in between is the OLR spectrum. If you naively measure the net radiation before and after, it looks the same. But if you take the spectrum into account, less of the OLR is going through CO2 absorption lines and more through what remains of the atmospheric window.

      • Hi KK

        I am wondering what question you were asking Vaughan?

        Extra CO2 is added to the environment making the atmosphere more opaque to IR. This simply means that there are more molecules to absorb and re-emit IR. The gases from fossil fuels are hotter – but the whole atmosphere warms because of the increased absorption/emission in the atmosphere. I am not sure if I am right about this. The flame temperature is about 2000 degrees C for coal. The atmospheric temperature increase for the atmosphere as a whole is a fraction of a degree. Simple mass balance suggests that there is no point when the atmosphere is cooler and then warms to a new equilibrium?

        The spectral absorption is a different thing, however. The set up involves a beam of light, a sample and a detector. It by no means implies that the atmosphere doesn’t emit in those frequencies. Individual molecules continue to absorb and emit in those frequencies – but on average the IR emits from higher in the atmosphere. All other things being equal – which they never are.

        The emissions are at a higher temperature because of the extra molecules. Pekka tells me that the increase in radiation with temperature is in accord with the Planck distribution and not the Stefan-Boltzmann T^4 factor – but still an exponential increase with temperature.

        I don’t know what question you were asking – but you got the wrong answer.

        Cheers

      • The gases from fossil fuels are hotter – but the whole atmosphere warms because of the increased absorption/emission in the atmosphere. I am not sure if I am right about this.

        The first part is irrelevant because the effect is a negligible transient. The second part is simply shortened to: the atmosphere warms because of the greenhouse effect. Look at what happens on the moon versus what happens on Venus.

        I know that this is a pedantic answer but sometimes it works to look at the limits.

      • Webby,

        The first part is what is known as initial conditions.

        The second part explains spectral absorption.

        I know you are not good at science – but try not to repeat your irrelevant and simplistic assertions.

        Cheers

      • Vaughan,

        this was partially tongue-in-cheek because I KNOW Pekka didn’t mean what was written. The BB will continue to emit as much whether there is insulation or not. What changes is the NET flux outward!!

        Under most realistic conditions there will never be a time when the earth looks cooler. The CO2 emitted will either be at ambient temperature or warmer, meaning it will be absorbing/colliding/emitting just as much as the CO2 already there, or more.

        The idea that the earth will look cooler for a short time till relative equilibrium apparently comes from the idea that the “insulation” shrouds the BB. My understanding is that at this point the BB is already very close to completely shrouded so the “insulation” is only shrouding itself. Again, there is no long period when there would be adjustments. The CO2 is “native” to the system from introduction.

        This need for a long equilibration may also be coming from the Bottleneck AGW theory. That is, that the extra CO2 increases the effective emissions height causing it to be cooler. I have written before on this and no one has shown me any reason to change my mind yet. As more CO2 is introduced it will be IMMEDIATELY warm from the radiation and will be convected. The convection process warms the atmosphere so there is never a point where the upper level is blocking anything. As the atmosphere warms it also expands reducing what would otherwise be a shrouding effect.

        I say this with the idea that the changes will be very tiny. The only way there would be a significant shrouding effect is the unphysical introduction of a “slug” of CO2 spread through the atmosphere that has not gone through typical warming and convective processes.

        This brings us to my issue with the whole INSULATOR meme. A blanket, shirt, feathers, fiberglass, etc. will take a significant time to warm to the point where it starts emitting at the BB level it is shrouding. In the case of CO2 and other GHG’s there is no significant time for equilibration. It is emitted, it absorbs radiation, it is equilibrated unless it has to COOL through collisions, which, in the lower atmosphere, will happen rather quickly. The part of the system that takes a significant amount of time to change temp is the non-GHG’s, which cannot change quickly by themselves and need the moderator GHG’s to warm or cool them.

        The problem with the meme is the idea that increased CO2 can make it to the upper atmosphere without warming itself and the rest of the atmosphere on the way. I know of no way currently that we inject CO2 in any appreciable amount into the upper atmosphere without it going through the usual processes.

      • Vaughan,

        I do have questions if you have time to try and educate me.

        You mention that the energy that is absorbed by the GHG’s gets frequency shifted and ends up being emitted through the window, as we see that it does NOT go out its original freq window, and it would cause much more heating if it WERE retained!!

        I have been wondering about the details of that for a while. The spectra I have seen don’t really look like there is much extra radiation in the window compared to BB curve for the surface temp.

        One possibility is that the DLR that is absorbed by the surface is reemitted over the full BB. This would mean only part of it would go back up in the same freq as it went down in and seems somewhat reasonable. Is there no preferential absorption and emission based on the freq?? Is this the only frequency shift mechanism or are there others buried in the complex interactions of the atmosphere?

      • The BB will continue to emit as much whether there is insulation or not. What changes is the NET flux outward!!

        Axiom 1 (if you’ll forgive me for sounding like a logician :) ) is that a planet in thermal equilibrium with its star emits the same total quantity of radiation to space that it receives from the star. The outgoing radiation will be bimodal in that some, ~30% in the case of Earth, is reflected shortwave radiation and the rest is longwave radiation, OLR.

        Axiom 2, with or without thermal equilibrium, is that the surface temperature depends strongly on the insulating qualities of the atmosphere. The more GHGs, the better the insulation and the warmer the surface. However the physics of the insulation is quite different from that of say a blanket, which achieves the same surface-warming effect but in a different way.

        Note that Axiom 2 is independent of Axiom 1: in thermal equilibrium both are true.

        Now for the case of a disturbance in equilibrium.

        Axiom 3: During a transition between different states of thermal equilibrium, Axiom 1 no longer holds, but Axiom 2 will remain true. In the warming case Axiom 1 fails because some of the heat that would have been radiated to space in thermal equilibrium is instead used to heat the planet, not just the surface but also the oceans and land. This is reflected in a temporary decrease in OLR (reflected shortwave will not change much if at all). Land heating is particularly slow due to the good thermal insulation provided by the crust. Ocean heating happens faster thanks to convection. With a significant transition the delays involved in getting this heat into the oceans and ground may mean a long wait for thermal equilibrium to be restored. The cooling case is the reverse of this: OLR increases so that it and reflected energy becomes temporarily greater than incoming energy from the Sun.

        This is a natural stopping point for this account. There are of course lots more details, but this basic part is I feel essential.

        I’m not sure where the above is stated in so many words, or even who would agree or disagree with it. Conceivably it’s wrong, but in that case I’d like to understand why. Meanwhile this is the basic layer on which I would layer more details about climate change.
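        A zero-dimensional energy-balance sketch can make Axiom 3 concrete. Here the added “insulation” is modeled as a small step down in effective emissivity, and a single heat capacity stands in for an ocean mixed layer; both numbers are illustrative assumptions, not a fit to the real Earth:

```python
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S_ABS = 238.0          # absorbed solar flux, W m^-2 (illustrative)
HEAT_CAP = 2.0e8       # mixed-layer heat capacity, J m^-2 K^-1 (illustrative)
DT = 86400.0           # time step: one day, in seconds

eps_old, eps_new = 0.612, 0.605              # small step in effective emissivity
temp = (S_ABS / (eps_old * SIGMA)) ** 0.25   # initial equilibrium, ~288 K

olr_initial = eps_new * SIGMA * temp**4      # OLR just after the step: < S_ABS,
                                             # i.e. the planet "looks cooler"
for _ in range(365 * 50):                    # integrate 50 years forward
    olr = eps_new * SIGMA * temp**4
    temp += (S_ABS - olr) * DT / HEAT_CAP    # imbalance warms the surface

# Axiom 1 is restored: OLR returns to within ~0.01 W m^-2 of absorbed solar,
# and the surface ends up slightly warmer than before the step (Axiom 2).
print(olr_initial, eps_new * SIGMA * temp**4, temp)
```

        The transient dip in OLR followed by recovery at a warmer surface is exactly the sequence Axiom 3 describes.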

      • As more CO2 is introduced it will be IMMEDIATELY warm from the radiation and will be convected.

        Yes, exactly. Because CO2 is only increasing at about 0.01% a week (or 0.0000000165% per second), what happens in any detail to this tiny amount of additional heat resulting from additional CO2 is dwarfed by other considerations such as storms, the jet stream, and so on. So one may as well treat it as having been absorbed by the CO2 instantly, and passed along to the atmosphere just as instantly.

        Basically you have a system at close to thermal equilibrium (from a long-term standpoint) that is warming extremely slowly. So the usual analysis of what happens to a saucepan of cold water placed on a hot stove, or a blowtorch being used to solder plumbing, isn’t really applicable. On a time scale at which global warming can be noticed, the heat captured by CO2 is dispersed instantly to the atmosphere and from there to the oceans and land.
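        The growth rates quoted above check out arithmetically, assuming roughly 2 ppm/yr growth on a ~390 ppm background (2011-era round values, my assumption):

```python
# Back-of-envelope check of the "0.01% a week" figure for CO2 growth.
ANNUAL_GROWTH_PPM = 2.0      # approximate annual CO2 rise, ppm (assumed)
BACKGROUND_PPM = 390.0       # approximate 2011 concentration, ppm (assumed)

annual_pct = ANNUAL_GROWTH_PPM / BACKGROUND_PPM * 100.0   # ~0.5 % per year
weekly_pct = annual_pct / (365.25 / 7.0)                  # ~0.01 % per week
per_second_pct = weekly_pct / (7 * 86400)                 # ~1.6e-8 % per second
print(weekly_pct, per_second_pct)
```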

      • The part of the system that takes a significant amount of time to change temp is the non-GHG’s which cannot change quickly by themselves and need the moderator GHG’s to warm or cool them.

        Global warming happens so slowly that even heating the nonGHG part of the atmosphere is instantaneous by comparison, as is moving that heat around convectively, and getting it into the upper part of the oceans and land. Only deeper down does the time required start to become comparable to the time constants of the global warming process itself.

      • I have been wondering about the details of that for a while. The spectra I have seen don’t really look like there is much extra radiation in the window compared to BB curve for the surface temp.

        Well, the temperature has only increased 2 or 3 tenths of one degree during the period in which we’ve been collecting OLR spectra. The variation by latitude is much greater than this, so you have to compare spectra for the same latitude. The whole envelope of the spectrum will be very slowly pushed up with increasing surface temperature, by that rather hard-to-see amount, which is the effect you should be looking for.

        Note that it will continue to follow Planck’s BB law, so you won’t see any change in shape of the atmospheric window, just an increase in its overall height, by the amount that the surface temperature increases. This is an extremely hard effect to see when the height is only increasing by say 0.2 °C. In particular, don’t expect to see the envelope developing bulging muscles at what remains of the window. Instead the notches will get deeper as more is blocked there, an effect that may be more visible than the rise in the overall BB envelope.
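
        The size of the effect being described can be estimated directly from Planck's law. The sketch below (illustrative values, not from any measured spectrum) computes the fractional rise of window radiance for a 0.2 K surface warming:

```python
import math

def planck(wavelength_m, T):
    """Planck spectral radiance, W·sr^-1·m^-3."""
    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    x = h * c / (wavelength_m * k * T)
    return 2 * h * c**2 / wavelength_m**5 / math.expm1(x)

lam = 10e-6  # 10 µm, inside the atmospheric window
b1 = planck(lam, 288.0)  # assumed surface temperature
b2 = planck(lam, 288.2)  # after a 0.2 K rise
print(f"fractional change for +0.2 K: {(b2 - b1) / b1:.2%}")  # ~0.35%
```

        A change of a fraction of a percent in the envelope height is indeed hard to see against latitude-to-latitude variation.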

      • Vaughan,

        Your axiom 3 is, of course, where we disagree, although as a minor quibble I doubt if the earth is ever in exact equilibrium with the sun. I would agree that it appears to be in a general equilibrium within the apparent physical limits of the system.

        You seem to be missing the point that the BB can only emit what it is receiving. After that, the “insulation” can moderate its loss. In other words, the insulation cannot make the BB cooler unless it blocks the insolation, which would cool the BB, not cause a temporary dimming by absorbing extra energy for a fraction of a second!! The insulation will not become cooler. The insulation in the original issue was supposed to DIM the earth by shrouding the brightness of the BB, by absorbing extra outgoing radiation. My understanding of what the Chief was saying, and what I am saying, is that unless you put a large amount of GHG into the atmosphere at an artificially low temperature, that will not happen at a measurable level, much less a SEEABLE level. GHGs are emitted at hotter-than-atmospheric or ambient temps, so they are already emitting.

        You apparently think the earth and ocean are being heated by the DLR. The DLR slows the radiation emission of the BB, true, but that does not make the BB dimmer. The DLR replaces the reduced emission of the BB with itself, so the BB is never dimmer even if we could actually “see” it from TOA in the GHG absorptive bands. The BB heats from the slowed emission. There is never a point where the outgoing radiation is lower. This is part of the CAGW fallacy.

        “Global warming happens so slowly that even heating the non-GHG part of the atmosphere is instantaneous by comparison,”

        Actually global warming happens every day. I reject the idea that the earth is so knife edge balanced that a tiny amount of something pushes us over some edge when we see huge changes over all time scales with no observable issues. The climate is quite stable. The ice cores are at best misinterpreted and, like so much of CAGW theory, unverifiable.

        Regarding the radiation frequency shift: am I right that you are telling me the frequency change happens only through DLR warming the earth, not through collisional emissions from CO2 to other radiative species?

      • My understanding of what the Chief was saying, and what I am saying, is that unless you put a large amount of GHG into the atmosphere at an artificially low temperature, that will not happen at a measurable level, much less a SEEABLE level. GHGs are emitted at hotter-than-atmospheric or ambient temps, so they are already emitting.

        The temperature at which “you put a large amount of GHG into the atmosphere” is irrelevant. That equalizes with the atmosphere rapidly and, by the law of mass action, is hardly measurable.

        I have been working with the CO2 perturbation data and the results are very interesting, the following graph summarizes the findings:
        http://img402.imageshack.us/img402/7351/co2pdpmodels.gif
        Rapid changes in CO2 levels track with global temperatures, with a variance reduction of 30% if d[CO2] derivatives are included in the model. This increase in temperature is not caused by the temperature of the introduced CO2, but is likely due to a reduced capacity of the biosphere to take up the excess CO2.

        This kind of modeling is very easy if you have any experience with engineering controls development. The model is of the type called Proportional-Derivative, and it essentially models a first-order equation
        \Delta T = k[CO_2] + B \frac{d[CO_2]}{dt}
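
        For readers who want to try this, a minimal sketch of such a proportional-derivative fit. It uses synthetic data with arbitrarily chosen k and B, not the actual CO2 record or the linked graph:

```python
import numpy as np

# Illustrative fit of dT = k*[CO2] + B*d[CO2]/dt by ordinary least squares.
# The series and the "true" k, B below are invented for the demonstration.
rng = np.random.default_rng(0)
t = np.arange(100.0)
co2 = 315 + 0.4 * t + 2 * np.sin(2 * np.pi * t / 11)  # synthetic CO2 series
dco2 = np.gradient(co2, t)                            # derivative term
k_true, B_true = 0.01, 0.5
dT = k_true * co2 + B_true * dco2 + 0.05 * rng.standard_normal(t.size)

A = np.column_stack([co2, dco2])                      # design matrix
(k_fit, B_fit), *_ = np.linalg.lstsq(A, dT, rcond=None)
print(k_fit, B_fit)  # recovers approximately 0.01 and 0.5
```

        With real data the interesting question is how much the derivative column reduces the residual variance, which is the 30% figure quoted above.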

      • ‘Good question, kuhnkat. What’s important here is to distinguish surface temperature from OLR. When CO2 increases it blocks more BB radiation, so that the planet looks cooler from space for the time being. The surface then heats up, pushing more BB radiation through what remains of the atmospheric window until equilibrium is restored, at which point the planet looks warmer from space again.’

        What changes in between is the OLR spectrum. If you naively measure the net radiation before and after, it looks the same. But if you take the spectrum into account, less of the OLR is going through CO2 absorption lines and more through what remains of the atmospheric window.

        I assume BB is black body – but the Earth is of course a grey body.

        The changes in the first part are very rapid and the gases are very hot to start with ~ 2000 degree C flame temperature.

        In the 2nd part – molecules both absorb and emit photons at very nearly the same frequency. A small difference in temperature will shift the Planck distribution a little, in accordance with the Wien displacement law. The concept of spectral absorption is a laboratory experiment – and the concept needs to be reinterpreted for the atmosphere.

      • Thanks Chief.

        Webby,

        I think you may have just argued against CAGW. What I am attempting to claim is that there is no noticeable dimming of the earth from TOA with the introduction of more GHG’s. The IPCC standard line is that there is due to the energy being used to warm the earth from DLR action to the new equilibrium with the additional GHG’s.

        Part of my argument is that radiative effects are so fast that even if they did go negative it would be for such a short time that it is meaningless to us. The other part is that there is no decrease in energy radiated during the warming of the earth. This happens daily and is driven by the insolation, not by an increase in GHGs. Attempting to find the GHG effect within the huge daily insolation is pointless, but that game is being played by the IPCC.

      • The changes in the first part are very rapid and the gases are very hot to start with ~ 2000 degree C flame temperature.

        Yes, I know they are initial conditions and all that, yet I find that irrelevant to the problem at hand.

        Or is this all part of the grand chaos theory that you champion where a butterfly flapping its wings in China can create a hurricane in the Atlantic? Maybe you are implying that the smokestacks are chaotic initiators? I have no other rationale for why this initial temperature argument is being brought up over and over again.

      • Your axiom 3 is, of course, where we disagree,

        Boy, that took me by surprise, I couldn’t imagine anyone familiar with high school physics objecting to it.

        Axiom 3 is an immediate corollary of the law of conservation of energy. Assuming no change in incoming energy, a planet that increases in temperature is increasing the energy in it. If it continues to radiate the same amount of energy while increasing in temperature, that’s a violation of conservation of energy.

        you seem to be missing the point that the BB can only emit what it is receiving.

        I must not have understood you correctly, since I certainly don’t understand that. If I heat up a body and then remove all sources of heat, it is now receiving nothing, yet it is hot and capable of radiating that heat. If you’re saying it can now not radiate anything then we have incompatible ideas of how radiation works.

        You apparently think the earth and ocean is being heated by the DLR.

        What did I say to make you think that? You may have missed my post saying the opposite, where I complain about Professor Alistair Fraser’s claim that the atmosphere heats the surface of the Earth, pointing out that this is like saying that a block of ice warms you when you sit next to it. That’s rubbish, the ice cools you by not supplying enough radiation to balance the radiation you’re losing. Same with the atmosphere on a cloudless day: it cools the surface by not supplying enough DLR to balance the ULR from the surface. (On a cloudy day the two tend to be more or less in balance so neither one warms or cools the other.)

        Actually global warming happens every day.

        We seem to be using different definitions of “global warming.” If I’ve understood you correctly you’re using it to describe the (more or less) equilibrium situation in which the Earth is warmed by the Sun and cooled to an equal degree by radiating the same amount of energy to space. I’m using it to describe the result of increasing GHGs and low-altitude aerosols (high altitude ones cool). GHGs are increasing at a very slow rate, a mere 2 parts per million a year in the case of CO2, which is far slower than the rate at which the atmosphere and surface keep moving back towards thermal equilibrium after each storm or other thermal perturbation.

        I get the feeling you think I’m terribly confused about climate science. If I am then you shouldn’t mind if I stop belaboring you with my confusions, which clearly are not serving any useful purpose for either of us.

      • See Webby, this is why some people probably think you are a leftist. Leftists consistently accuse their opponents of things they themselves are doing!!

        “Or is this all part of the grand chaos theory that you champion where a butterfly flapping its wings in China can create a hurricane in the Atlantic?”

        Now, I have to admit when I first heard the butterfly theory it seemed somewhat reasonable to me, although rather LOW probability. Then I ran across a very well written article on how the coherent energy of the butterfly wings beating simply could not survive for much distance in the atmosphere of our planet. It convinced me that Al “an inconvenient Moron” Gore, who also apparently believes this with his IPCC buddies, was wrong. Oversimplifying the article: the impulse from the butterfly wings would be dispersed and convected away, so that there would be 0 possibility of any connection. Uh, you did catch that this is an idea put forward by the warmistas, didn’t you?

      • Webby,

        However minor and short lived – initial conditions – I am a little pedantic. If you want to stop implying that it is incorrect – by all means do so. Why do you keep bringing it up?

        I tend to ignore your version of the problem at hand – too simple to be a realistic model of the atmosphere.

        This is a more satisfactory model – http://www.agci.org/docs/lean.pdf – although still with many elements of the real world climate system missing. This is a far better model – http://www.nosams.whoi.edu/PDFs/papers/tsonis-grl_newtheoryforclimateshifts.pdf – and leads to startling ideas.

        Cheers

      • The butterfly was a journalistic interpretation of the Lorenz strange attractors – http://en.wikipedia.org/wiki/File:Lorenz_attractor_yb.svg

        It can’t be taken as an actual proposition – but chaos in Earth systems is undeniable.

      • Actually Vaughan, it isn’t whether what I am saying contradicts well known physics or not. It is whether you can demonstrate with observations that what you claim actually happens in the atmosphere. I have no argument with the physics of what you are saying. The real world isn’t so well behaved.

        I couldn’t even consider doing it myself, but you might want to try to compute how much GHG you would have to dump into an area to actually cause a MEASURABLE dimming.

      • The concept of chaos is misused all the time, and Lorenz may be one of those to blame for that.

        The Earth system behaves in many ways chaotically, but it’s certainly not a deterministic chaotic system in the spirit of all the equations that lead to the standard chaos theories. The Earth system is complex to the point that it can be described fully only by looking at all the atoms of the Earth (and not really even by that). In more practical terms it can be looked at as a continuum described by continuous fields. Either way leads to an essentially infinite number of variables that do not allow for writing such equations as could be solved to exhibit deterministic chaos, even if taken as perfectly accurate.

        Perhaps even more profoundly, the Earth is a stochastic system rather than having the nature of deterministic chaos. Some oversimplified models of the Earth system may neglect the stochasticity and produce deterministic chaos, but not the real Earth.

        Stochastic chaoticity is in many ways very different from deterministic chaos. Stochastic effects eliminate very rapidly the influence of the butterfly of Amazonas. The Earth system has, however, something left from the ideas of deterministic chaos. Certain phenomena are indeed highly sensitive to small changes in initial conditions, but it’s far from clear what this means for the predictability of weather on shorter time scales and of climate on longer time scales. The stochasticity often adds to the apparent chaoticity, but it may also reduce it very effectively in some other phenomena. I have seen this discussed in relation to medium-term weather forecasts, less so in relation to the climate, but the point is likely to be as important for both.

        It’s a common practice to run the climate models many times with different initial conditions. This may be extremely misleading, when a more correct approach would be to add stochasticity to the models and run with that. That should make the runs much less sensitive to the initial conditions, and that might change profoundly the dynamics of the models.
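
        The sensitivity to initial conditions being discussed can be illustrated with the classic Lorenz system (a toy model, not a climate model). The crude Euler integration below, with an optional noise hook of the kind Pekka suggests, shows two trajectories initially 1e-9 apart diverging to the scale of the attractor:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                noise=0.0, rng=None):
    """One Euler step of the Lorenz system, with optional additive noise."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    step = deriv * dt
    if noise and rng is not None:
        step += noise * np.sqrt(dt) * rng.standard_normal(3)  # stochastic term
    return state + step

# Two deterministic trajectories differing by 1e-9 in x.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])
for _ in range(3000):   # 30 time units
    a = lorenz_step(a)
    b = lorenz_step(b)
sep = float(np.linalg.norm(a - b))
print(sep)  # separation has grown to the order of the attractor size
```

        Setting `noise` to a nonzero value and passing an `rng` makes each run stochastic, which is the kind of modification Pekka proposes in place of merely varying initial conditions.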

      • Chief,

        “Imagine a rectangular slice of air heated from below and cooled from above by edges kept at constant temperatures. This is our atmosphere in its simplest description. The bottom is heated by the earth and the top is cooled by the void of outer space. Within this slice, warm air rises and cool air sinks. In the model as in the atmosphere, convection cells develop, transferring heat from bottom to top.”

        Obviously overplayed as usual.

        I will wait for a LOT more information on hurricanes/typhoons/tornadoes before guessing whether they could be caused chaotically. They happen too regularly and need such special conditions…

      • I appreciate what Pekka had to say about chaos because it matches my thinking. Stochastics-based approaches have been widely overlooked or ignored in many natural science disciplines. A very good read is by Bricmont:
        “Science of Chaos, or Chaos in Science”
        It is long but he brings up so many interesting ideas that at the end you have a different perspective.

      • Webby,

        I think you may have just argued against CAGW.

        Then you should be able to use it to build up your own argument.
        From my experience, I think the correlation is strong in the data. So we can argue a few things:
        1. Is the correlation definitive for someone else? What are the odds of having so many of the peaks and troughs match up that well?
        2. Is the causality from CO2 rise to Temperature rise?
        3. Is the causality from Temperature rise to CO2 rise?

        The causality is statistically hard to tell because the cross-correlation peak is at essentially zero lag.

        For #3, it could be that the CO2 is following temperature rise as biota becomes more active as it gets hotter, the partial pressure of CO2 increases (as in Arrhenius law), more people use air conditioning, and any number of reasons that may work as a set of hypotheses. Is that level of forcing function enough to incrementally bump up the CO2 concentration when the temperature spikes occur? Who knows?

        All I know is that I haven’t seen this kind of signal processing done on the CO2 data. I have looked but unless I know all the keywords I may have missed some obvious and frequently cited references.
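
        The zero-lag ambiguity can be illustrated with a toy cross-correlation (synthetic series, not the actual CO2 and temperature data):

```python
import numpy as np

# If y is x delayed by `lag` samples plus noise, the cross-correlation
# peak recovers the lag; when the peak sits at zero lag, the direction
# of causality cannot be read off from the correlation alone.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
lag = 0                      # try a nonzero value to see a shifted peak
y = np.roll(x, lag) + 0.3 * rng.standard_normal(500)

xc = np.correlate(x - x.mean(), y - y.mean(), mode="full")
peak = int(xc.argmax()) - (len(x) - 1)
print(peak)  # 0 when lag = 0
```

        A peak at zero lag, as reported above for the CO2 and temperature series, is exactly the case where statistics alone cannot distinguish hypotheses 2 and 3.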

      • Pekka,

        “That should make the runs much less sensitive to the initial conditions, and that might change profoundly the dynamics of the models.”

        Since I am not a big fan of chaos, I see this as a problem. No initial conditions means the system MAY move between the general boundaries in vaguely realistic peaks and valleys, but CANNOT match the real climate. It is a similar issue to volcanoes. Without manually entering the parameters the models cannot match the perturbations.

        For instance, the last 10 years or so of flat temps were a surprise to the modelers. One group did a few runs with their models initialized with realistic conditions and THEN saw the lack of warming. Doesn’t this tell you something? Like either they have more serious problems than thought OR they need the realistic inputs?

        Since we can’t currently quantify the various ways the variable sun perturbs the climate I think the modelers are chasing their tails. Their outputs are vaguely similar to our planet but do not do a reasonable job of showing us where it is going without a better understanding of the inputs and how they drive the system. The idea that such a poorly modelled system can tell us anything about a small potential increase in one component is silly.

        (and of course there are the issues with clouds, aerosols, precipitation…)

        I do a lot of arm waving about climate, but, the modellers are the MASTER Arm Wavers!!!

      • Vaughan,

        “Land heating is particularly slow due to the good thermal insulation provided by the crust. Ocean heating happens faster thanks to convection. ”

        Wrong! You forgot the specific heat capacities of matter. Desert land experiences extremes of daytime and nighttime temperatures, whereas oceans remain at a relatively unchanged temperature compared with deserts.

      • Sam NC,

        You were trying so hard for a while, then you lost it again. Vaughan was talking about how long it takes for the land to warm, not the air above the land. Ever been in a cave, or seen a dog dig a hole to lie in?

      • Dallas,

        You are a careless reader. Read my above message again and you will see I did not mention the atmosphere; I referred to land temperature and ocean temperature, which I addressed to Vaughan. Read it again; your boss will not be happy with your carelessness, and it will hamper your future in biofuels.

  79. Pekka Pirilä | September 5, 2011 at 1:05 pm |
    Phil.,

    I’m sorry, but it’s not me who is wrong.

    The rate of emission depends only on the quantum mechanical coupling between the excited state and the ground state through emission and the number of molecules in the excited state. The coupling is the same as that leading to absorption, i.e. large (emissivity is equal to absorptivity). Thus the rate of emission is high and the CO2 is a strong emitter. It’s true that some 99.99% or so of the excited states are de-excited through collisions, but that doesn’t change the situation, because an equal number is brought to the excited state at the same time maintaining the rather high level of occupancy.

    To a high accuracy the number of emissions is equal to the number of absorptions and the number of collisional de-excitations is equal to the number of collisional excitations. The latter numbers are much larger than the former ones, but even the smaller ones are large on the scale of emission and absorption. They are larger than the rate of IR absorption and emission anywhere else in the atmosphere.

    I wish you were right, Pekka; if so it would have made a big difference to my Laser Induced Fluorescence experiments: no fluorescence quenching! Check out the Stern-Volmer equation. Just because a collision can transfer the same amount of energy to a CO2 molecule as a photon of 15 μm wavelength can, it doesn’t mean that the energy is specifically transferred to the same vibrational state; in fact it’s more likely to be a translational excitation.

    • Phil.

      “Just because a collision can transfer the same amount of energy to a CO2 molecule as a photon of 15μm wavelength can it doesn’t mean that it is specifically transferred to the same vibrational state, in fact it’s more likely to be a translational excitation.”

      Can you provide a probability number? I have been trying to build a better argument for conductive heat transfer at the surface. The Trenberth cartoon has everyone thinking crazy stuff.

      • If we assume that the line width of a particular transition in the 15 μm band is about 0.1 cm^-1, then fewer than 1 in 10000 molecules would have the requisite energy; in order to directly excite the transition, a molecule would also have to collide in a particular orientation, which would be very low probability. Contrast this with the vib-vib exchange which occurs in the CO2 laser, where an excited N2 molecule with an almost exactly matching vibration energy and spring constant transfers that energy to a CO2 vibration.
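
        Phil.’s 1-in-10000 figure can be roughly reproduced from the Maxwell-Boltzmann energy distribution (an order-of-magnitude sketch only; it ignores the orientation requirement and the detailed collision dynamics):

```python
import math

# Fraction of molecules whose kinetic energy lies in a ~0.1 cm^-1 window
# around the 15 µm (~667 cm^-1) transition energy, at ~288 K.
# All three inputs are the assumed round numbers from the comment above.
kT = 200.0   # thermal energy k*T at ~288 K, in cm^-1
E0 = 667.0   # CO2 bending-mode transition energy, cm^-1
dE = 0.1     # assumed line width, cm^-1

# Maxwell-Boltzmann energy density: f(E) = 2*sqrt(E/pi) * kT^(-3/2) * exp(-E/kT)
frac = 2 * math.sqrt(E0 / math.pi) / kT**1.5 * math.exp(-E0 / kT) * dE
print(f"{frac:.1e}")   # ~4e-5, i.e. fewer than 1 in 10000
```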

      • Thank you.

    • For simplicity, consider a layer of our atmosphere where non-radiative inputs and outputs are negligible and CO2 is the only radiatively active moiety. Photon emissions from CO2 must equal photon absorptions at equilibrium. Absorptions exceeding emissions will cause the layer to warm until enough collisional excitations followed by emissions occur (plus the rare photon absorption followed by emission) for emissions to rise to equal absorptions. The number of collisions that fail to do this because they don’t supply excitation energy or because the excitation is followed by collisional de-excitation is irrelevant to this principle. The higher the percentage of failures, the higher the temperature must rise until the successful excitations equal the rate of absorption. The fact that an absorption will be balanced by emission from a different molecule is also irrelevant as long as the rates balance on average, which is the definition of equilibrium.

      • I also don’t believe that the directionality of collisional excitation is important under the conditions of radiative transfer in our atmosphere, but I will defer to experts in laser technology for its role in laser emissions.

      • Fred, it is if you’re looking for a collision to excite a particular vibrational mode at a particular frequency. Bear in mind that the vibration is quantized, only certain discrete frequencies being allowed.

      • If you have evidence that directionality is important for collisional excitation under atmospheric conditions (temperature, pressure, CO2 concentrations), as opposed simply to the kinetic energy involved in the collision, could you cite it and a source? Vibration is quantized of course, but translational kinetic energy involved in collisions is not, and if it happens to be adequate to mediate a vibrational quantum transition, I’m unaware of reasons that isn’t sufficient (but would be interested in data on this point). I realize that lasers operate differently, and at different energy levels and concentrations, but for the atmosphere, it appears to be the translational energy of colliding molecules (O2, N2, etc.) that is paramount. What evidence for a different conclusion bears on this?

      • Exactly, Fred. This is where we were talking past each other on another thread. I was saying that an increase of absorption in the higher atmosphere was balanced by more collisions at the surface, i.e. that conduction was the major player at the surface. That doesn’t change the search for equilibrium, just that CO2 “trapping” is a poor choice of words, and that downwelling at the surface can easily be mistaken not only for the main means of heat transfer but also for the source of the energy causing the majority of transfer. About 70% of the energy in the atmosphere is due to direct solar absorption and latent heat due to direct solar absorption.

        This may not seem like a major point but when you add the change in heat transport in the atmosphere caused by internal oscillations, I get a better feel for the non-linearity of the internal variation’s forcing/feedback potential.

      • Fred,

        Your conclusion is correct, but your argument is not fully so. The requirement of equilibrium could also be reached in a way where absorption and emission rates differ and the difference is balanced by an opposite difference between collisional excitation and de-excitation. The correct argument requires the observation that quantum mechanics tells us that absorptivity and emissivity are equal for each wavelength separately. This is a modern formulation of Kirchhoff’s law.

        This form of Kirchhoff’s law does not apply to all situations, but it’s valid for the IR radiation in the atmosphere, as it’s valid for almost all cases not related to lasers. Lasers are based on those exceptional cases where Kirchhoff’s law is not valid. There are two specific issues of that nature in lasers. One is the importance of stimulated emission. Stimulated emission is extremely weak for normal IR and visible light and can be neglected. Lasers are specifically constructed to create strong coherent radiation, and their operation is based on stimulation of emission by the strong fields of this coherent radiation.

        The other special issue in lasers is the presence of metastable states of exceptionally long lifetimes. In the case of CO2 lasers these metastable states occur in N2, which cannot radiate as it doesn’t have an electric or magnetic dipole moment. These excited states of N2 can transfer their energy to CO2 in a collision, leading to an abnormally high level of excitation for certain specific excited states, which are then de-excited by stimulated emission.

        There are also some other exceptional situations where metastable states (or excited states of exceptionally long lifetimes) have a role. Such issues don’t, however, have much of a role in the atmospheric IR, where the rule that emissivity is equal to absorptivity for each wavelength and each transition separately holds true.

        Phil. was referring in his above comment to laser-induced processes. That leads naturally to more absorption than emission, as the laser beam has a high intensity. The equality of absorptivity and emissivity leads to the near equality of absorption and emission rates only when the intensity of IR radiation corresponds to the local temperature, as it does for the 15 um radiation in the troposphere far enough from the surface (100 m is already safely far enough near the center of the peak).

        Phil. was also referring to vibrational vs. translational excitation indicating that they would compete in a way that suppresses vibrational excitations. That’s a wrong argument. The point is that both are excited at the rates determined by the quantum mechanical transition probabilities. The presence of translational modes is indeed needed to have the collisions that lead to the vibrational modes at the rate corresponding to the temperature and the transition probabilities.

      • Pekka, “Phil. was also referring to vibrational vs. translational excitation indicating that they would compete in a way that suppresses vibrational excitations. That’s a wrong argument. ”

        I got the impression Phil was indicating that additional CO2 in an environment dominated by collisional transfer would not significantly increase vibrational excitation.

      • Dallas

        Phil. is comparing with a CO2 laser, where the excitation of CO2 vibrational modes goes largely through the vib-vib transfer from the metastable N2 vibrational state of 0.27 eV energy. Nothing of that is relevant for the troposphere, because the conditions are so different. The vibrational states of CO2 molecules that are important in the troposphere are at a lower energy level (0.083 eV) and maintained continuously at the occupancy that corresponds to local thermal equilibrium. They are mostly excited directly in collisions, not at all through vib-vib exchange. The thermal-equilibrium occupancies of the vibrational states are about 3% of the corresponding vibrational ground states. Both the vibrational ground states and the excited states may in addition have rotational energy as well as translational energy.

        The wavelength of a CO2 laser is not 15 um but close to 10 um. The lines close to 10 um are very much weaker than the 15 um line and of little significance for the GH effect, in spite of the fact that they are in the middle of the atmospheric IR window. The 10 um transition is not between the ground state and an excited state, but between two excited states. The energy of the upper state is an asymmetric stretching mode at about 0.27 eV, which is high in comparison with the bending mode at 0.083 eV that corresponds to the 15 um line. The energy of 0.27 eV happens to coincide very precisely with the lowest vibrational state of N2, leading to the vib-vib transfer. The occupancy of the 0.27 eV level is in the atmosphere only about 0.002%, i.e. almost insignificant.

        Additional CO2 would increase in proportion also the rates of IR absorption and emission in a volume of air. There is nothing exceptional in that.
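
        Pekka’s occupancy figures follow directly from the Boltzmann factor exp(-E/kT); a quick check (assuming ~288 K and ignoring degeneracy factors, so these are rough ratios):

```python
import math

# Boltzmann-factor check of the ~3% and ~0.002% occupancies quoted above.
kT_eV = 8.617e-5 * 288          # Boltzmann constant (eV/K) times assumed T

bending = math.exp(-0.083 / kT_eV)   # 15 µm bending mode of CO2
stretch = math.exp(-0.27 / kT_eV)    # asymmetric stretch (CO2 laser upper level)
print(f"bending-mode occupancy: {bending:.1%}")   # ~3%
print(f"stretch-mode occupancy: {stretch:.4%}")   # ~0.002%
```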

      • Pekka said “Additional CO2 would increase in proportion also the rates of IR absorption and emission in a volume of air. There is nothing exceptional in that.”

        This may just be a communication issue over a minor point. I would say, “Additional CO2 would increase the rates of IR absorption and emission in a volume of air in proportion to the initial rate of emission and absorption of that volume.” So at the surface, absorption is relatively low due to translational excitation and emission is virtually nil. So the translational excitation doesn’t suppress vibrational excitation any more or less with the addition of CO2. This would be consistent throughout the atmosphere, which is why the 2xCO2 absorption/emission impact is greater in the upper atmosphere.

        This may not be a big deal, but it means that the emission window at the point where the vibrational/translational excitation ratio becomes significant would be the more important window to consider for determining the change in the radiation budget due to 2xCO2. Lower clouds would tend to cool and higher clouds would tend to warm, etc.

      • Dallas,

        My comments were all related to what happens in some chosen volume of air inside the troposphere. That doesn’t necessarily have much influence on temperatures, because the changes in radiative energy transfer inside the troposphere are compensated by changes in convection.

        There are more significant changes in the radiative heat transfer from the surface and to the space, but my most recent comments were not on these changes. I have discussed those changes as well, but not in these most recent comments.

      • Pekka,
        Although the emissivity is equal to the absorptivity, that does not mean that the rate of emission is equal to the rate of absorption; your explicit assumption of this is the difference between our approaches. The rate of emission depends on the emissivity and the population of the excited state; however, the population of the excited state depends very strongly on collisions with neighboring molecules. Your assumption is that the particular vibrational state is equally repopulated by collisions, which is flawed because the collision cross-section for excitation of a particular vibrational state is very small. This process is described by the Stern-Volmer mechanism: if you were to plot CO2 emission vs. diluent gas pressure, I’d expect a fall-off in emission with increased pressure, whereas you’d expect no change. Bear in mind that the radiative lifetimes of the excited states are μsec to msec; however they are excited, they don’t last long enough to significantly emit.

        See here for example: http://www.biophysics.helsinki.fi/lectures/generalfluorescenceB.pdf

      • Phil.,

You are right that the equality of emissivity and absorptivity alone does not guarantee that the rates are equal. That requires another factor, namely that the intensity of radiation is at the level that corresponds to the rate of emission.

The rate of emission is determined by the molecular emissivity, the density of CO2 molecules and the occupation of the excited state. The occupation of the excited state is determined by temperature. Thus all these factors are independent of the intensity of radiation, which has only a very small additional influence on the occupation of the excited state through absorption.

The intensity of radiation inside a region that’s essentially isothermal in a volume large compared to the mean free path of radiation is determined by the rate of emission discussed above and by absorption, as the situation was defined to be such that external radiation cannot penetrate effectively at the wavelengths considered. Here the equality of emissivity and absorptivity comes in and guarantees that the intensity of radiation is such that the rate of absorption is equal to the rate of emission. The equality is broken near the surface, where radiation from the surface influences the outcome, and it’s not true anywhere at wavelengths that are absorbed so weakly that the mean free path is comparable to the height of the troposphere.

You continue to bring in arguments from processes not relevant for the atmosphere, where we are interested in the 15 um line of the first bending vibrational mode. For that mode, which is the only one of interest here, the cross section for collisional excitation and de-excitation is large enough for collisional effects to be 10,000 times stronger than radiative effects. Your latest reference was again about effects that occur somewhere else. They are important under some specific conditions, but not relevant for the CO2 in the atmosphere.

For the first bending mode we have a very simple situation, where an elementary classical analogy explains well why the cross section for collisional excitation is large. The CO2 molecule is a linear three-atomic molecule. Hitting it from the side leads to bending vibration. In quantum mechanics that can occur at some discrete energies only. In this case we are interested in the one vibrational energy level that corresponds to the 15 um radiation. As soon as the two colliding particles have sufficient translational energy in the collision, the transition probability for excitation is large; for de-excitation it’s large at all energies. The requirement for sufficient translational energy is the reason for the factor exp(-E/kT) in the relative probabilities of excitation and de-excitation. This factor is about 0.035 at the temperature of the lower troposphere and a little smaller at the tropopause. (Actually we have two bending modes, both with relative occupancy of 3.5%, or 7% in total.)
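For anyone who wants to check that 0.035 figure, the Boltzmann factor is a one-liner; here is a minimal Python sketch (the 0.083 eV bending-mode energy is the value quoted above, and 288 K and 217 K are assumed as rough lower-troposphere and tropopause temperatures):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(e_ev, t_kelvin):
    """Relative probability of excitation vs. de-excitation, exp(-E/kT)."""
    return math.exp(-e_ev / (K_B_EV * t_kelvin))

E_BEND = 0.083  # eV, first bending mode of CO2 (15 um line)

# Lower troposphere vs. tropopause (roughly 288 K and 217 K assumed)
print(boltzmann_factor(E_BEND, 288.0))  # ~0.035, i.e. ~3.5% per mode
print(boltzmann_factor(E_BEND, 217.0))  # smaller at the colder tropopause
```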

The requirement of an exact match with the energy of the excited state doesn’t reduce the cross section, because the conservation of energy and momentum is automatically taken care of by the velocities of the molecules after the collision. In most cases the rotational state of the molecule also changes in the collision.

There are no other states or transitions available that could lead to effects similar to those that occur in CO2 lasers or are discussed in your latest reference. We have only the one vibrational energy level, rotational modes, and translational energy. All transitions that involve the vibrational state are of the basic type discussed above. Therefore there are no possibilities for the effects that you have brought to the discussion.

      • The intensity of radiation inside a region that’s essentially isothermal in a volume large compared to mean free path of radiation is determined by the rate of emission discussed above and absorption, as the situation was defined to be such that external radiation cannot penetrate effectively at the wavelengths considered.

        Yes. I like to visualize this situation as an ocean of photons with a short mean free path, just as water molecules in the real ocean have a short mean free path. I find this viewpoint particularly helpful in visualizing radiation in Venus’s atmosphere, but the same conceptualization works for photons at strongly absorbing wavelengths in Earth’s atmosphere.

        In both the atmosphere and the ocean, photons/molecules that are sufficiently near the surface stand a fair chance of evaporating. For photons, evaporation means escaping the atmosphere without being captured by a GHG. For water molecules, evaporation means having enough energy within a molecule or two of the surface to tear free of surface tension.

        Most photons at strongly absorbing wavelengths never get near the surface, just like water molecules. But unlike water molecules they don’t collide with each other or even with O2 or N2, only with GHGs. And also unlike water molecules they lose their identity at every such collision.

        Photons at sufficiently weakly absorbing wavelengths see an atmosphere that is optically much thinner and accordingly more transparent. Some have a good probability of passing from the surface all the way through the atmosphere without being captured. These aren’t so naturally thought of as forming a photon ocean, rather they are just ordinary radiation slightly attenuated by the atmosphere, for which the wave view of radiation is more natural than the photon view.

      • Pekka – This has been an interesting discussion, and my understanding of radiative equilibrium is very much in the manner you describe, with Kirchoff’s Law being the relevant principle. On a small point, though, I don’t think I quite follow your reasoning in the following argument:

“The requirement of equilibrium could be reached also in the way that absorption and emission rates differ and the difference is balanced by an opposite difference between collisional excitation and de-excitation.”

Consider a simple N2 plus CO2 atmosphere at equilibrium (and let us ignore the rare cases of photon absorption followed by photon emission without any collisions). I believe collisional excitation rates must necessarily exceed collisional de-excitation rates in order for some of the collisionally excited CO2 molecules to be de-excited by photon emission. However, that would progressively deplete the energy of atmospheric N2 over time unless the difference were compensated by N2 energy gained via collisions with CO2 molecules excited by photon absorption. As a result, emissions and absorptions will remain equal. An energy budget analysis should give the same result. If the only source of energy into the layer is absorption and the only source of energy out is emission, absorption and emission energies must be equal for the layer not to be gaining or losing energy. Whether that means that the numbers of emissions and absorptions are equal is uncertain in my mind, but they would probably be close.

      • In my above comment, my statement that collisional excitation must exceed de-excitation referred to the de-excitation of collisionally excited CO2 molecules. Once we add de-excitation of CO2 molecules excited by photon absorption, excitation and de-excitation should balance.

      • Fred,

        This is as much or more continuation of my earlier comments than a response to you.

In a stationary situation the sum of collisional excitation and absorptive excitation must be equal to the sum of collisional de-excitation and de-excitation by emission. When we are deeply inside a volume large in comparison to the mean free path of radiation, the rate of emission must also be equal to the rate of absorption for the wavelength considered. Thus it’s also guaranteed that the rate of collisional excitation is equal to the rate of collisional de-excitation. All that follows from the stationarity, from the assumption that the mean free path of radiation is small, and from the fact that there are no other mechanisms of excitation or de-excitation of the vibrational bending mode. As stated many times, the rate of collisional processes is much larger than that of radiative processes.

The above applies strictly to an isothermal volume, but there is a temperature gradient in the atmosphere that leads to net energy transfer by radiation. That effect is, however, so weak in comparison that it doesn’t essentially affect the above discussion, unless we wish to study specifically this form of energy transfer. When we wish to study that, then the small deviations from equality become important. With a constant lapse rate more radiation is coming from below than is radiated downwards, and more is radiated up than is coming down. The total is still in almost perfect balance, but the nonlinearity in the temperature dependence of the rate of emission leads to a very small imbalance to be compensated by collisional effects.

  80. Sam NC | September 5, 2011 at 1:17 pm |
    Phil,

    “No this is a fundamental error on your part, em radiation doesn’t have a temperature and CO2 most certainly does absorb 15μm radiation. In any case you are still mistaking the S-B law with the 2nd Law as pointed out before.”
I was not using the strict word ‘temperature’, which should be ‘energy’, in my upper statements. When you consider 15um, it’s a wave theory. Wave theory must obey wave properties. Weaker energy state waves will be deflected or absorbed by e-m waves of stronger energy state.

This is science; the words used have precise meanings. CO2 absorbs strongly in the 15μm band; 15μm photons are all the same regardless of their source, there are no weaker or stronger ones!

    “When a CO2 molecule which is in equilibrium with its surroundings absorbs a 15μm its energy is increased by ~8kJ/mole.”
    So the surroundings cool off by ~8kJ/mole!

No, the surface absorbs light from the sun and radiates it out in the form of IR, which cools the surface and warms the atmosphere via the medium of CO2 in the case of 15μm radiation.

    “It is that excess energy that it loses to the surrounding molecules, its ‘vibrational temperature’ is higher than its neighbors’.”
Is it not circular? ‘Vibrational temperature’ is higher after losing excess energy? You confused yourself very much.

    I’m not the one who’s confused here!

  81. Pekka Pirilä | September 6, 2011 at 1:20 pm |
    Dallas

    Phil. is comparing with a CO2 laser, where the excitation of CO2 vibrational modes goes largely through the vib-vib transfer from metastable N2 vibrational state of 0.27 eV energy. Nothing of that is relevant for the troposphere, because the conditions are so different.

    Yes that’s exactly the point I made when Tomas posted that the CO2 laser was a good analog for the atmosphere up-thread.

    The vibrational states of CO2 molecules that are important in the troposphere are at lower energy level (0.083 eV) and maintained continuously at the occupancy that corresponds to local thermal equilibrium. They are mostly excited directly in collisions, not at all through vib-vib exchange.

    They are mostly excited by IR absorption, collisions would be translation-vib exchange, very low efficiency.

    The thermal equilibrium occupancies of the vibrational states are about 3% of the corresponding vibrational ground states. Both the vibrational ground states and the excited states may in addition have rotational energy as well as translational energy.

    Certainly would.

    The wavelength of a CO2 laser is not 15 um but close to 10 um. The lines close to 10 um are very much weaker than the 15 um line and of little significance for the GH effect in spite of the fact that they are in the middle of the atmospheric IR window. The 10 um transition is not between the ground state and an excited state, but between two excited states.

Exactly the point I made in response to Tomas.

The energy of the upper state is an asymmetric stretching mode at about 0.27 eV, which is high in comparison with the bending mode at 0.083 eV that corresponds to the 15 um line. The energy of 0.27 eV happens to coincide very precisely with the lowest vibrational state of N2, leading to the vib-vib transfer. The occupancy of the 0.27 eV level in the atmosphere is only about 0.002%, i.e. almost insignificant.
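The 0.002% occupancy quoted here follows from the same exp(-E/kT) argument; a quick Python check, assuming a lower-troposphere temperature of about 288 K:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K
T = 288.0          # assumed lower-troposphere temperature, K

bend    = math.exp(-0.083 / (K_B_EV * T))  # bending mode (15 um line)
stretch = math.exp(-0.27  / (K_B_EV * T))  # asymmetric stretching mode

print(f"bending-mode occupancy:    {bend:.1%}")     # ~3.5%
print(f"stretching-mode occupancy: {stretch:.4%}")  # ~0.002%
```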

Additional CO2 would also increase, in proportion, the rates of IR absorption and emission in a volume of air. There is nothing exceptional in that.

    True except that the emission rate would be very low in the lower troposphere.

    • Phil.,

I don’t understand why you claim that the emission rate would be very low in the lower troposphere, when the physics is straightforward and simple and the rate is high, higher than anywhere else, because the temperature is higher and the density is higher. Both contribute directly to the emission rate.

      The phenomena related to lasers are totally irrelevant in the atmosphere, because the types of excitation are different and the radiation is (practically) not at all stimulated.

The lowest vibrational state has a high occupancy, and a state of high occupancy emits with an intensity proportional to the occupancy. The short lifetime of the state leads to line broadening but does not reduce the emission rate, when the occupation level is maintained continuously by new excitations.

      • The phenomena related to lasers are totally irrelevant in the atmosphere, because the types of excitation are different and the radiation is (practically) not at all stimulated.

In fact it would be counterproductive to run a CO2 laser near a CO2 absorbing line since the CO2 would soak up the laser radiation and it wouldn’t work. CO2 lasers emit at 9.4 and 10.6 μm, which is in the atmospheric window, so the beam can travel miles. Otherwise laser rangefinders would have very limited range: the mean free path of a photon at the strongest CO2 absorption line is only a few centimeters.

        The lowest vibrational state has a high occupancy and a state of high occupancy emits with an intensity proportional to the occupancy.

        This raises an issue that we discussed some time ago, Pekka, concerning Kasha’s rules and Stokes shift. You were dubious that these were relevant at atmospheric temperatures, but it seems to me they must be.

Currently about 600 CO2 lines have optical thickness 1 or more for the atmosphere considered vertically. A lot of these will correspond to higher vibrational states, and it seems to me that what must happen is that after capturing a photon at one of the higher levels, the molecule will undergo internal state transitions to the lowest vibrational state before emitting a photon and leaving that state.

This would imply that there are many lines where CO2 absorbs but does not emit. The troposphere will be bathed in energy at the emitting wavelengths, which will “see” the CO2 as highly viscous. However it won’t be fed directly by Earth at those wavelengths because they will be captured a meter or less above the ground. The surface directly warms the atmosphere at all altitudes, but the wavelengths that do so must be non-emitting wavelengths in order to reach that high, because absorption puts the molecule in a low-occupancy state that it immediately leaves (keeping the occupancy low) via internal (non-radiating) transitions.

        Does my reasoning make sense? There may be a clearer way of putting it.

Incidentally, apropos of my “Otherwise laser rangefinders would have very limited range,” see this abstract pointing out the advantages of CO2 laser rangefinders.

      • Yeah! What he said :)

      • Vaughan,
Stokes shift is a phenomenon related to electronic transitions, not molecular vibrations (although that’s probably not totally excluded). The idea is that a higher energy state is excited and the excitation is followed by a transition to another state of somewhat lower energy, when the direct de-excitation of the original state to the ground state is forbidden or at least relatively slow.

        For the vibrational states of CO2 near 15 um the transitions are not forbidden and the simple rule on the occupation levels is valid. This is particularly obvious as a collision can very easily excite a transverse vibration of the three-atomic CO2 molecule. Very often the rotational state is also modified as the collisions are seldom central. The excited molecules participate also in collisions and get de-excited. In a single collision excitation is less likely than de-excitation by the factor exp(-E/kT), where E is the excitation energy. Therefore the occupation level of the excited state is smaller by the same factor. The occupation level does not depend on the collision rate as the relative frequency of excitation to de-excitation remains the same.

Only when the emission and absorption of radiation start to be nearly as common as the collisions do they affect the occupation level, but that happens only in the stratosphere.

The lowest transverse bending mode corresponding to the 15 um peak is by far the most important mode for the atmospheric absorption and emission by CO2. The many lines are related to a combination of this mode with rotational states of both the excited state and the vibrational ground state, not to many vibrational states. The second transverse bending mode and the lowest symmetric longitudinal stretching mode are also in the thermal energy range, but not nearly as important (the symmetric stretching mode doesn’t interact with IR). The asymmetric stretching mode that’s the basis for CO2 lasers has too high an energy to be of major significance (wavelength 4.2 um).

All the combinations of the vibrational mode and rotational modes contribute similarly to emission and absorption. They don’t have the kind of selection rules that would lead to a Stokes shift. Large changes in angular momentum are suppressed, but similarly for both phenomena.

      • Stokes shift is a phenomenon related to electronic transitions, not molecular vibrations (although that’s probably not totally excluded). The idea is that a higher energy state is excited and the excitation is followed by a transition to another state of somewhat lower energy, when the direct de-excitation of the original state to the ground state is forbidden of at least relatively slow.

        (Sorry about the slow reply, Pekka, I lost track of this thread for a couple of days.)

We seem to have somewhat different views of Stokes shift and Kasha’s rules, particularly as regards their relevance to bonds. When Kasha first described the latter in the 1950s he did so exclusively for molecules, namely in his July 3, 1950 paper “Characterization of Electronic Transitions in Complex Molecules”, Discuss. Faraday Soc., 1950, 9, 14-19. His paper focuses on the radiationless transition from the lowest excited singlet level to the lowest triplet level of the molecule. He writes “the internal conversion process takes place at least 10^4 times as fast as the spontaneous S^n –> __ emission.” Emphasis on “at least”: Kasha had no idea just how fast internal transitions were, only that they were much faster than photon emission. The mean time between collisions of air molecules is around 140 picoseconds, but Kasha could not even tell whether the internal transitions were faster or slower than that, only that emissions were several orders of magnitude slower.

Only when the emission and absorption of radiation start to be nearly as common as the collisions do they affect the occupation level, but that happens only in the stratosphere.

        Fine, but what if the internal transitions (no emission or absorption of radiation) are much more common/faster than the collisions? Granted 140 picoseconds at sea level is a short time between collisions, but if the internal transitions are faster still then there’s going to be a lot of absorption at wavelengths for which there is no corresponding emission.

        Is there experimental evidence to the contrary that is specific to atmospheric CO2? Bear in mind that the atmosphere is many kilometers thick.

      • Vaughan,

        Every transition must conserve energy. Thus there are no internal transitions without interaction with some external participant that receives the energy. Furthermore also momentum and angular momentum must be conserved, which leads to exclusion of some transitions that would be possible based on energy alone.

In solids every molecule or atom is always in contact with neighboring molecules or atoms. The transition may involve some phonons that take energy to the lattice vibrations. In the case of conductors the electrons in the conduction band may participate in the process. For a gas the only possibilities are radiation and collisions. Thus every transition involves either emission or absorption of radiation or a collision with another molecule. In this I include in collisions every situation where two molecules are close enough for direct interaction.

        In short: There are no internal transitions in gas molecules as they would violate some conservation laws. This answers also your other question.

      • Pekka, I am going to reply here to your above. “Phil. | September 5, 2011 at 7:10 pm | Reply

        If we assume that the line width of a particular transition in the 15μm band is about 0.1 cm^-1 then fewer than 1 in 10000 molecules would have the requisite energy, in order to directly excite the transition it would have to collide in a particular orientation, which would also be very low probability.”

        Per Phil the probability of excitation of CO2, at the surface, is low in the 15 micron band. The probability of emission in the same band is lower, due to the high number of collisions (Phil can correct me if I misinterpret his meaning). Phil was using the laser to illustrate the differences.

        This was the point I was trying to make in the Vaughan Pratt thread where he suggested that conduction dominates molecular heat transfer at the surface. Conduction induced convection would then provide a larger portion of the heat transfer from the surface into the troposphere. Since CO2 represents an extremely small percentage of the greenhouse gases at the surface and impacts a much smaller portion of the greenhouse gas spectrum at the surface, a doubling of its rather small radiative impact is still a small impact.

        It may be easier to look at the probability of H2O excitation by collision at the surface to see the differences.

      • Dallas,

Collisions are behind 99.99% of both excitations and de-excitations. The states are the same, only the direction of energy transfer changes. Therefore the ratio of excitation to de-excitation probabilities is given by exp(-E/kT), which determines the occupation level of the excited state. The number of emissions is determined by this occupation level and the emission rate of a single molecule in the excited state.

        There are no problems from the width of the lines as the two-body collision can satisfy the requirements of conservation of energy and momentum giving precisely the right amount of energy to the excitation or receiving it in the de-excitation.

I am cool with that. From your comment to Vaughan, “Only when the emission and absorption of radiation start to be nearly as common as the collisions do they affect the occupation level, but that happens only in the stratosphere.” I thought that emission and absorption of radiation were roughly equal with collisions at the tropopause, which was the explanation for the temperature profile.

      • Dallas,

At the tropopause the pressure (and density of air) is about 1/10 of that at the surface. The collision rate is proportional to density. Thus the ratio of the rate of absorption/emission to collisional excitation/de-excitation grows from roughly 1/10000 to roughly 1/1000, not more than that.

The altitude of the tropopause is determined by the condition that radiative heat transfer alone leads to the adiabatic lapse rate. In the troposphere radiation alone would lead to a higher lapse rate; in the stratosphere the lapse rate is smaller or even inverted. When radiative heat transfer alone would lead to a higher lapse rate, convection is initiated and brings the lapse rate to the adiabatic value.

        (As always, short descriptions are simplistic. The real atmosphere is much more complex, but the basic reason for the existence of troposphere, tropopause and stratosphere is given by this description.)

      • Collisions are behind 99.99% of both excitations and de-excitations.

        Is there experimental evidence for this in the case of internal transitions in CO2 molecules? How could one tell if the internal transitions take time on the order of ten picoseconds or less?

      • Pekka, let me frame my question more clearly to reduce the chances of our speaking at cross purposes. It seems to me that the remarkably rapid internal (i.e. non-emitting) transitions of an excited CO2 bond to the lowest excited state could occur either spontaneously or as the result of a collision.

        For the latter, a 500 m/s molecule at STP has a mean free path of 70 nm and hence a mean time between collisions of 70/500 = 0.140 ns or 140 picoseconds. At that rate I could well believe that the majority of, or even all, internal transitions could be explained as being the result of collisions. But I am still curious about the possibility of the opposite: could most internal transitions happen internally and even faster?
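The arithmetic in that paragraph is trivial to check; a sketch using the 70 nm mean free path and 500 m/s speed assumed above:

```python
mean_free_path = 70e-9  # m, assumed mean free path at STP
speed = 500.0           # m/s, assumed typical molecular speed

t_between_collisions = mean_free_path / speed  # seconds
print(f"{t_between_collisions * 1e12:.0f} ps")  # 140 ps
```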

        But now that I think more about it, I think I can answer my own question in the negative: it must be, as you say, almost all induced by collision.

        My reasoning has to do with where the energy would go. This is easily answered for the case of a collision: it is redistributed among the degrees of freedom of the colliding molecules. But these include translational (and hence effectively nonquantized) energy states, whence the bin-packing-type question of matching resonances need not arise. With spontaneous transition the only available degrees of freedom where the energy could go would have to be vibrational (not translational or rotational since momentum must be conserved) and since these are quantized and few in number (since they’re in the same molecule) such a transition will be unlikely and therefore slow if at all.

        But I also think the question is purely academic as far as the atmosphere is concerned, because I don’t see how the answer could make any difference to any observable property of the atmosphere.

        For the atmosphere, the important difference is between internal and emitting transitions. Here my understanding is that you and I part company on the relevance to the atmosphere of Kasha’s rule, that emission is primarily from the lowest excited state and that higher states decay to that state via internal transitions.

        I understand the basis for your position on this is that the phenomenon is only observed at higher temperatures than found in the atmosphere.

I would agree with this if we were talking about a volume of gas typical of a laboratory experiment involving say a cubic meter of gas. However we’re not. A column of atmosphere with cross section 1 square centimeter (the preferred area unit in spectroscopy) weighs 1 kg. The CO2 in it weighs 0.6 g, which is 0.6/44 = 0.014 moles. The number of CO2 molecules in that column is 0.014 * 6.022*10^23 = 8.4*10^21, or about 10^22.
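The column arithmetic can be reproduced directly; a sketch in Python, where the 394 ppmv mixing ratio and a mean molar mass of air of about 29 g/mol are the assumptions:

```python
AVOGADRO = 6.022e23  # molecules per mole
M_CO2 = 44.0         # g/mol
M_AIR = 29.0         # g/mol, mean molar mass of dry air (assumption)
CO2_PPMV = 394e-6    # volume mixing ratio (assumption)

air_column_g = 1000.0  # 1 kg of air above each square centimeter

co2_g = air_column_g * CO2_PPMV * M_CO2 / M_AIR  # convert to mass fraction
co2_molecules = co2_g / M_CO2 * AVOGADRO         # per cm^2 column

print(f"CO2 mass in column: {co2_g:.2f} g")            # ~0.60 g
print(f"CO2 molecules:      {co2_molecules:.2e}/cm^2") # ~8.2e21
```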

        But ¹²C¹⁶O₂ (the main CO2 species) contains 313 lines of strength 10^-22 or more in the spectrum from 126 to 1990 /cm (chosen to cover 98% of radiation at 288 K) and another 330 lines going down to 10^-23. Ignoring the nuances of Lorentz line shape, multiplying line strength by number of molecules in a square cm column gives an estimate of optical thickness of that column. Optical thickness one, what you get with the product of 10^22 and 10^-22, is where blocking sets in.

        This radically changes the probabilities of absorption leading to a high excited state at weakly absorbing lines. The top 200 or so most strongly absorbing lines of the main CO2 species are irrelevant to global warming because they’re well above unit optical thickness and hence essentially completely blocked, as Angstrom would have happily pointed out. It is the next couple of hundred lines that impact global warming, the theory of which crucially depends on their being so weak.

        But the weakly absorbing lines aren’t at the lowest excited state.

        In a lab setting, these particular weakly absorbing lines would not be noticed (the point you’ve been making) because the strong lines would do all the absorbing and you would therefore dismiss the weak ones as inconsequential. In the atmosphere however they represent the difference between global warming being a problem or not. Angstrom’s argument is refuted by the role played by these weaker lines, which become relevant when the CO2 reaches the corresponding level.

        This, I claim, is where Kasha’s rules have to enter.

        The relevance of Kasha’s rules to global warming is that emission and absorption are not at all symmetric over the region of the spectrum sensitive to changes in CO2 level.

        The reason you think Kasha’s rules are irrelevant is (if I understand you) because that’s a weakly absorbing region and therefore not observable in a lab setting. But we’re not in a lab, we’re in the atmosphere, and it is those weakly absorbing lines that are responsible for continued global warming.

      • Vaughan,

You started to give the right answer. In every transition energy must be conserved, and in addition also momentum and angular momentum. That means that every transition must involve either two molecules or one molecule and a photon. In a gas there are no other possibilities, excluding the cases that involve three molecules or two molecules and a photon. Those can be excluded because they are rare in the Earth’s atmosphere, though they are important in Titan’s atmosphere, as an example, due to the “stickiness” of H2 molecules. (The continuum spectrum is due to these effects, but it’s very weak in the Earth’s atmosphere.)

There may be weak collisions (collisions with a rather large minimum distance) which may change the angular momentum but not the vibrational state, but in the atmosphere all important processes are symmetrical in the sense that transitions in both directions proceed at their “natural” relative rates. The alternatives would be processes that lead to a Stokes shift or allow for lasers. Those effects are due to the presence of some strong transitions that fill or empty some excited states at a rate large compared to the normal direct transitions. Such strong transitions may occur in solids with the participation of phonons and other multiatom/multimolecule interactions, or they may be like the vib-vib coupling of N2 and CO2 when that’s augmented further by pumping of the vibrational state of N2.

        Nothing like that is significant in the Earth atmosphere taking into account its density and temperature. The total set of possible transitions is too simple for any such effect to be present.

I really should spend more time proofreading what I wrote. I made not just one but two mistakes in my preceding comment:

        Ignoring the nuances of Lorentz line shape, multiplying line strength by number of molecules in a square cm column gives an estimate of optical thickness of that column.

        I forgot to divide by the width of the line in units of /cm (cm⁻¹). This is typically 0.07 /cm for CO2 lines. So I overestimated the strength needed for optical thickness 1 by a factor of around 14. For CO2 at its present level the strength should be .07/(8.2*10^21) or 8.53 * 10^-24.

        But ¹²C¹⁶O₂ (the main CO2 species) contains 313 lines of strength 10^-22 or more in the spectrum from 126 to 1990 /cm (chosen to cover 98% of black body radiation at 288 K)

        The 10^-22 should have read 10^-21. There are 643 lines of strength 10^-22 or more in that spectrum.

        The two errors compounded: there are 1454 CO2 lines of strength more than 8.53 * 10^-24. Barring any more mistakes, that’s the number of CO2 lines in today’s atmosphere with optical thickness > 1.

        I really should spell things out more in my comments, to make it easier for everyone including me to follow the reasoning and make it easier to catch errors like the above. With that in mind, let me go over my understanding of spectroscopic line strength.

        Line strength (or intensity) is given in units of /cm (bandwidth in cm⁻¹) per (molecules per sq. cm = molecular density), which I understand to mean (correct me if I’m wrong) that if there are D molecules/cm^2 in the column, and the strength is S /cm/(molecule/cm^2), and the linewidth (suitably understood as e.g. pressure-broadened halfwidth) is W, then the optical thickness of the column at that line is S*D/W.

        Optical thickness 1 is the case S*D = W. In that case radiation in the vicinity of the line that is not being blocked by some other line will pass through the column with the exception of a band of said width centered on the line, of which only 1/e gets through. In general the fraction that passes through in that band is exp(-T) where T = S*D/W is the optical thickness.

        Optical thickness 1 is the level of greatest sensitivity to changing density D. Thicknesses outside the interval [1/5, 5] are effectively either completely transparent or completely opaque.

        As I wrote below (which I just realized used the wrong table; I’ll fix that), for CO2 at its present level of 394 ppmv, optical thickness 1 occurs at lines of strength W/D. With W = 0.07 /cm and D = 8.21 * 10^21/cm^2, W/D = 8.526 * 10^-24. Lines in this vicinity are the ones having the greatest impact today on climate change. In the part of the IR spectrum relevant to a planet at a nominal temperature of 288 K (I inadvertently omitted that requirement in my comment below) there are, as I said above, 1454 CO2 lines with optical thickness 1 or more.
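For anyone wanting to check the arithmetic above, the relationship T = S*D/W and the transmitted fraction exp(-T) fit in a few lines of Python. This is a rough sketch using the numbers quoted in the comment (halfwidth 0.07 /cm, CO2 column 8.21*10^21 /cm^2), not a line-by-line radiative transfer calculation:

```python
import math

# Values quoted above: a typical pressure-broadened CO2 line halfwidth
# and the CO2 column density at 394 ppmv.
W = 0.07       # line halfwidth, cm^-1
D = 8.21e21    # CO2 molecules per cm^2 of column

def optical_thickness(S, D=D, W=W):
    """T = S*D/W for a line of strength S (cm^-1 per molecule/cm^2)."""
    return S * D / W

def transmission(S, D=D, W=W):
    """Fraction of radiation near the line center that passes through."""
    return math.exp(-optical_thickness(S, D, W))

# Line strength at which the column has optical thickness exactly 1:
threshold = W / D
print(f"threshold strength: {threshold:.3e}")                        # ~8.53e-24
print(f"transmission at threshold: {transmission(threshold):.3f}")   # 1/e
```

A line ten times stronger than the threshold passes exp(-10), i.e. essentially nothing, which is the sense in which thicknesses outside roughly [1/5, 5] behave as fully opaque or fully transparent.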

  82. Chief Hydrologist | September 5, 2011 at 8:33 pm |
    ‘But there is a simple idea – all other things being equal the planet warms with additional CO2 until the energy equilibrium at TOA is restored. Energy in must equal energy out over the longer term.’ me

    ‘Your argumentation with him has been void of real content for long, if not from the beginning. Neither side is going to win such a game.’ Pekka

    You are a liar and a fool.

    Such a civilised(sic.) discourse!

  83. Pekka Pirilä | September 5, 2011 at 5:56 am |
    Improving radiative insulation makes a body radiate less than before. That leads to warming of the body until the incoming and outgoing radiation are equal again.

    ‘Not according to our resident cartoon character, the Hydrologist, he believes that is impossible!’

    You have insulted, belittled and lied – it is what it is.
    http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-109148

    Civilised is the English spelling – try making a substantive point before you make a bigger fool of yourself.

    • You have insulted, belittled and lied – it is what it is.

      When two parties strongly believe mutually contradictory facts, one natural explanation is that the other party must be lying. I used to think that myself about my opponents, since I couldn’t come up with any other explanation.

      I’ve since been persuaded by those offering the confirmation-bias explanation, namely that people begin with mutually contradictory facts, but instead of working on resolving the inconsistencies to their mutual satisfaction, they accept the judgment of their side’s authorities and dismiss the other side’s authorities as some combination of incompetent nincompoops, con-men, financial criminals, and grant-hungry academics.

      The qualifications of an authority are judged on the basis of whether he or she knows what he or she is talking about. If he’s spouting obvious rubbish then clearly he’s unqualified and/or dishonest.

      But this is circular logic. You start out believing not-P. When asked to prove not-P you cite the authorities who assert not-P while pointing out that those who assert P are clearly incompetent. How do you know they’re incompetent? Well, that’s obvious isn’t it? They believe P which is clearly rubbish. ;)

      In CH’s case, he may be his own authority. That doesn’t seem to have stopped him from believing that his opposition is clearly lying.

    • Bring it on. The result of this mostly edifying discussion thread was the realization that two types of characteristic life-times exist.

      Everyone seems to agree that the “residence time” of atmospheric CO2 is actually relatively short, with times estimated between 2 and 10 years. IPCC agrees with this number as well.

      However, not everyone agrees that an “adjustment time” of CO2 locked into the carbon cycle exists, even though it was first proposed by Rodhe in 1979. This is a fat-tail distribution, solved by applying a diffusional model between atmospheric CO2 and carbon sequestered at deeper levels (this is how I modeled it, at least). A mean adjustment time actually does not exist, and that is why we see estimates from hundreds of years to thousands of years.

      This adjustment time is obviously what is plotted on the chart you link to, and that is obviously comparing apples to oranges, or peanuts to acorns.

      The essential problem is that the person that created the bogus graph hasn’t the slightest idea of the statistical physics involved, or more likely just wants to feed off of a non-controversy.
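The diffusional picture described above can be illustrated with a toy impulse response; the 1/sqrt(t) tail is the generic long-time behavior of diffusion into a semi-infinite reservoir, and the time scale `tau` here is a purely illustrative assumption, not a calibrated carbon-cycle parameter:

```python
import math

def impulse_response(t, tau=10.0):
    """Toy diffusive response: fraction of an initial atmospheric CO2
    pulse remaining after t years. tau is an assumed, illustrative
    diffusion time scale, not a fitted value."""
    return 1.0 / math.sqrt(1.0 + t / tau)

# The fat tail decays like t^(-1/2), so its time integral diverges:
# a mean "adjustment time" simply does not exist for this kernel,
# which is why point estimates range from centuries to millennia.
for t in (10, 100, 1000, 10000):
    print(t, round(impulse_response(t), 3))
```

The short "residence time" of an individual molecule (years) is a different quantity entirely, which is the apples-to-oranges distinction made above.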

  84. They only listed the estimates from papers on a graph; is something mislabeled?

  85. Pekka Pirilä | September 7, 2011 at 3:42 am |
    You continue to bring in arguments from processes not relevant for the atmosphere, where we are interested in the 15 um line of the first bending vibrational mode.

    The processes that I discuss are relevant to the atmosphere and the 15μm band.

    For that mode, which is the only one of interest here, the cross section for collisional excitation and de-excitation is large enough to be 10,000 times stronger than radiative effects. Your latest reference was again about effects that occur somewhere else. They are important under some specific conditions, but not relevant for CO2 in the atmosphere.

    You are wrong about the effectiveness of the collisional activation of this band: the lifetime of the state is very long compared with the interval between collisions, which you persist in ignoring. The Stern-Volmer effect does apply.

    For the first bending mode we have a very simple situation, where an elementary classical analogy shows why the cross section for collisional excitation is large. The CO2 molecule is a linear three-atomic molecule; hitting it from the side leads to bending vibration. In quantum mechanics that can occur only at certain discrete energies. In this case we are interested in the one vibrational energy level that corresponds to 15 um radiation. As soon as the two colliding particles have sufficient translational energy in the collision, the transition probability for excitation is large,

    This is where you are wrong: the transition probability for excitation is not large; only a small fraction of the sufficiently energetic collisions will actually excite the vibrational mode.

    for de-excitation it’s large at all energies. The requirement for sufficient translational energy is the reason for the factor exp(-E/kT) for the relative probabilities of excitation and de-excitation. This factor is about 0.035 at the temperature of the lower troposphere and a little smaller at the tropopause. (Actually we have two bending modes, both with a relative occupancy of 3.5%, or 7% in total.)

    The requirement of an exact match with the energy of the excited state doesn’t reduce the cross section, because conservation of energy and momentum is automatically taken care of by the velocities of the molecules after the collision. In most cases the rotational state of the molecule also changes in the collision.

    Your implicit assumption appears to be that if there’s enough translational energy to populate the vibrational state it will do so preferentially and any left over energy will go into rotational states and you totally ignore translational energy. This is wrong, it is the reverse of the real situation. The molecular dynamics is that very few of the possible collisions which contain sufficient energy will be able to excite the vibrational mode. Think of it like balls on a pool table, the CO2 is three balls joined by two springs, for a colliding ball with just enough energy to excite the vibration there’s only one direction that could work!
    The efficiency of the translational-vibrational exchange is low, not high as you seem to think.

    There are no other states or transitions available that could lead to effects similar to those that occur in CO2 lasers or are discussed in your latest reference. We have only the one vibrational energy level, rotational modes, and translational energy. All transitions that involve the vibrational state are of the basic type discussed above. Therefore there are no possibilities for the effects that you have brought to the discussion.

    I urge you to read up about the Stern-Volmer effect, particularly bear in mind the long radiative lifetime of the vibrational state involved. If you were right about the efficiency of the collisional transfer then it wouldn’t be necessary to use N2 as the gas in a CO2 laser to take advantage of the near resonant vib-vib exchange because any gas would do! Yes I know you don’t like to think about lasers Pekka, but that’s where the physics of these exchanges is used.

  86. Phil.

    You are wrong about the effectiveness of the collisional activation of this band: the lifetime of the state is very long compared with the interval between collisions, which you persist in ignoring.

    I’m certainly not ignoring that. I cannot understand how you don’t see that many of my arguments wouldn’t make any sense if I were ignoring that.

    The Stern-Volmer effect does apply.

    How could it apply? Please explain in some detail, or present some material on the effect that’s even remotely related to the case of the 15 um line of CO2. It’s related to fluorescence, but there’s no fluorescence related to this excitation. What would be the transition that the excited CO2 would undergo according to the Stern-Volmer effect, and what would the other molecule contribute?

    You have not presented any argument that would justify your claim that it’s relevant in this case.

    This is where you are wrong: the transition probability for excitation is not large; only a small fraction of the sufficiently energetic collisions will actually excite the vibrational mode.

    Why? What’s the reason that the excitation would not be strong in the sense that any excitation is, or in the sense that the de-excitation is strong? Or more specifically, why would it not be as strong relative to the de-excitation as the factor exp(-E/kT) tells?

    Your implicit assumption appears to be that if there’s enough translational energy to populate the vibrational state it will do so preferentially and any left over energy will go into rotational states and you totally ignore translational energy.

    Absolutely not. That’s not at all what I wrote. Translational energy takes whatever is left over after vibrational excitation and the change in rotational energy. What’s left over can be large or small and depends totally on the energies of the colliding molecules and other details of the collision, which influence the rotational part.

    This is wrong, it is the reverse of the real situation. The molecular dynamics is that very few of the possible collisions which contain sufficient energy will be able to excite the vibrational mode. Think of it like balls on a pool table, the CO2 is three balls joined by two springs, for a colliding ball with just enough energy to excite the vibration there’s only one direction that could work!
    The efficiency of the translational-vibrational exchange is low, not high as you seem to think.

    The strength of the transition depends on the quantum mechanical coupling between the initial and final states. That same coupling applies to excitation and de-excitation, which are the reverse processes. This property of QM is behind the law that emissivity is equal to absorptivity. That law guarantees that the relative strengths of excitation and de-excitation behave as I have explained.

    I urge you to read up about the Stern-Volmer effect, particularly bear in mind the long radiative lifetime of the vibrational state involved. If you were right about the efficiency of the collisional transfer then it wouldn’t be necessary to use N2 as the gas in a CO2 laser to take advantage of the near resonant vib-vib exchange because any gas would do! Yes I know you don’t like to think about lasers Pekka, but that’s where the physics of these exchanges is used.

    You come again to the totally different situation of the asymmetric stretching mode that’s the basis of the CO2 laser. That’s a totally different situation, and you cannot draw any conclusions concerning the first bending mode from it. Lasers set specific requirements for the excitations to work. There must be an efficient way of pumping an excited state of sufficient lifetime. There must also be a weakly populated lower state, where the laser transition ends, which is in this case the symmetric stretching mode. The vib-vib transition and electronic excitation of the N2 vibrational state allow for the creation of the population inversion that is needed for the laser. All this is important for lasers, but totally irrelevant for the 15 um line and the related bending mode of CO2.

    Do you understand the basic idea of lasers? I would expect that you do, but then you should also understand why it’s not relevant for the atmosphere. Your comments make me doubt whether you really understand even that.

    And one more point: explain how the number of vibrationally excited CO2 molecules can be essentially constant in time without strong collisional excitation, when the collisional de-excitation is fast.

    • “Your implicit assumption appears to be that if there’s enough translational energy to populate the vibrational state it will do so preferentially and any left over energy will go into rotational states and you totally ignore translational energy.”

      Absolutely not. That’s not at all what I wrote. Translational energy takes whatever is left over after vibrational excitation and the change in rotational energy. What’s left over can be large or small and depends totally on the energies of the colliding molecules and other details of the collision, which influence the rotational part.

      On the contrary, that is exactly what you’ve written here! That’s not how collisional excitation works. Until you understand that we’re going nowhere.

      • Phil.

        I’m sorry, but you don’t make sense.

        I have asked you many times to substantiate your claims, but you just repeat that I don’t understand the issues.

        So far you haven’t given any concrete argument to support the view that it’s me who doesn’t understand. My view is exactly the opposite, and I have explained in many details, where your arguments fail to have value.

        If you really believe that you understand more, please provide some evidence for that. Right now I think that you know many words from the physics literature, but not what their physical relevance is.

      • I’m sorry, but you don’t make sense.

        Really? You explicitly said that in the collision the vibration takes the first bite out of the energy and then rotation takes the rest. I told you that wasn’t the way it works, and you said “Absolutely not. That’s not at all what I wrote,” then followed with a statement which says exactly that; make up your mind: “Translational energy takes whatever is left over after vibrational excitation and the change in rotational energy.”

        If you don’t mean that the vibrational level is excited first, restate what you think happens so that it says what you mean, then justify your assertion of efficient trans-vib energy transfer, because you’re not making sense this way.

      • Phil.

        I’ll explain it in more detail. We have three types of energy involved:

        – Vibrational energy, which is quantized and has only one relevant level of excitation, the one corresponding to 15 um IR radiation. There are two independent states at that level, corresponding to the two possible bending directions. Both these excited states have an occupancy of about 3.5% of the corresponding ground state.
        – Rotational energy, which is also quantized, but with a small gap between levels, which allows a large number of states to have a high occupancy. The rotational states can combine with the vibrational ground state as well as with the excited states.
        – Translational energy, which has the continuous Maxwell-Boltzmann distribution.

        When a CO2 molecule in ground state collides with another molecule so that the collisional energy exceeds the energy of the vibrational excitation, there is a relatively high probability that the vibrational state is excited. What the probability is numerically is not relevant for the present argument, but it’s determined by the quantum mechanical coupling between the initial state of CO2 in ground state as well as the initial velocities and rotational states of both molecules. The final state has similar parameters (velocities and rotational states), but in addition one of the two vibrational modes is now excited. Depending on other details of the collision, which must be described in a way applicable to QM but which essentially describe how the molecules hit each other, the rotational states get modified. For each possible outcome concerning the vibrational state and the rotational states, one combination of final velocities of the two molecules conserves energy, momentum and angular momentum. The quantum mechanical coupling strength must be calculated for this combination.

        In quantum mechanics each initial state can lead to several final states from the point of view of actual observations, but the whole thing can also be calculated starting from the initial wave function and ending with the wave function after the collision, excluding further observations. The final consequences for the atmosphere don’t depend on where the calculation is stopped.

        The whole calculation may be reversed, picking the final state as initial state and the initial state as final state. This reverse process describes a collisional de-excitation. The quantum mechanical coupling constant is the same for the original transition and the reverse. This is a fundamental law of QM, and it is behind the law that emissivity is equal to absorptivity when applied to the interaction with IR. The law applies also to collisional processes and maintains the balance between excitations and de-excitations, when the occupation levels are in agreement with the factor exp(-E/kT).
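The ~3.5% occupancy quoted above follows directly from the factor exp(-E/kT); a quick check in Python, using standard physical constants, the 667.66 /cm bending-mode wavenumber, and a lower-troposphere temperature of 288 K:

```python
import math

h = 6.62607e-34   # Planck constant, J s
c = 2.99792e10    # speed of light in cm/s, so wavenumbers in cm^-1 work
k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(wavenumber_cm, T):
    """Relative occupancy exp(-E/kT) of a level at energy E = h*c*nu."""
    E = h * c * wavenumber_cm
    return math.exp(-E / (k * T))

f = boltzmann_factor(667.66, 288.0)
print(f"occupancy per bending mode: {f:.4f}")   # ~0.0356, i.e. ~3.5%
print(f"both bending modes: {2 * f:.4f}")       # ~7%
```

Evaluating the same expression at a colder tropopause temperature gives a smaller factor, consistent with the earlier remark in the thread.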

      • Vibrational energy, which is quantized and has only one relevant level of excitation, that corresponding 15 um IR radiation.

        But Pekka, that level corresponds to the line at 667.661347 /cm or 14.97766 um, an excitation energy of h*c*66766/ev = 0.083 electron volts (taking the wavenumber in m⁻¹), where ev = joules per electron volt = 1.602 * 10^-19 joules. Its strength is 2.981 * 10^-19 in units of /cm per (molecule per sq cm). That line has an air-broadened halfwidth of 0.074 /cm (which I forgot to divide by in my previous message to you today, and so understated optical thickness by a factor of 14). There are about 10^20 molecules per sq cm, 8.21 * 10^19 to be more precise. Hence the optical thickness is 2.981/0.074*8.21 = 331 >> 1, hence completely opaque, hence completely irrelevant to further global warming (Angstrom’s argument).

        In order to find levels of excitation that are relevant to continuing global warming we must identify those with optical thickness less than unity. Optical thickness > 1 remains true down to line strength 9 * 10^-22. There are 898 lines stronger than that in the band 126 to 1990 /cm, where 98% of black body radiation at 288 K resides. (My earlier figure of 313 lines resulted from forgetting to divide by line halfwidth.) The 899th line is at 2351.004897 /cm (4.254 um), corresponding to an excitation energy of h*c*235100/ev = 0.2915 electron volts. Its strength is a weak 8.995*10^-22, which you see as making it irrelevant but which I see as making it relevant.

        Its excitation level is at 3.5 times the energy of the one you claim is the only relevant one.
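The “98% of black body radiation at 288 K” figure for the 126–1990 /cm band can be verified numerically. A sketch using the Planck distribution in wavenumber form (substituting x = h*c*nu/kT), normalized by the known integral pi^4/15:

```python
import math

h, c, k = 6.62607e-34, 2.99792e10, 1.380649e-23  # SI units, c in cm/s

def planck_fraction(nu1, nu2, T, n=20000):
    """Fraction of total blackbody emission between wavenumbers nu1 and
    nu2 (in cm^-1), by midpoint integration of x^3/(e^x - 1)."""
    a = h * c / (k * T)
    x1, x2 = a * nu1, a * nu2
    dx = (x2 - x1) / n
    s = sum((x1 + (i + 0.5) * dx) ** 3 / math.expm1(x1 + (i + 0.5) * dx)
            for i in range(n)) * dx
    return s / (math.pi ** 4 / 15.0)  # normalized by the integral over (0, inf)

print(f"{planck_fraction(126.0, 1990.0, 288.0):.3f}")  # ~0.98
```

The same function can be pointed at any other band and temperature to see what fraction of the thermal emission it carries.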

      • Vaughan Pratt

        “Hence the optical thickness is 2.981/0.074*8.21 = 331 >> 1, hence completely opaque, hence completely irrelevant to further global warming (Angstrom’s argument).”

        Now you have me confused, I think.

        Optical thickness is a two dimensional projection from any altitude through any other altitude.

        Using TOA to surface is a simple range useful for some calculations, if some conditions are met.

        We also know atmospheres are not really opaque, so we must impose the condition that optical thickness is less than unity.

        Our chore then is to divide the projections into a stack of shells, each with optical thickness less than unity, and perform our calculations by parts, no?

        Global warming would be limited by some complex function of shell depth, absorptivity saturation, and altitude-dependent mixing?

      • Vaughan – Allow me to break in here, because I think you may have misunderstood a fundamental principle underlying greenhouse effect warming. For a particular wavelength where CO2 absorbs more than trivially to be irrelevant, the optical thickness must significantly exceed unity at the tropopause. High opacity at lower altitudes is irrelevant, because total absorption is simply followed by isotropic emission. At the tropopause, all the important wavenumbers are relatively close to 667, and those that are far away are inconsequential for radiative transfer effects in our atmosphere. It’s true that exactly at the absorption maximum near 667, more CO2 will have no discernible effect, because it raises the escape altitude to a higher level which, being in the stratosphere, is not cooler, and therefore requires no further warming for adequate emission to space. On the other hand, opacity is sufficiently low in the neighborhood of 667, both below and above that number, for increased CO2, and therefore increased opacity, to move the escape altitude to higher, cooler regions, with a consequent warming of the atmosphere until a flux balance is restored.

        To summarize, it is increases in opacity that raise average emission altitudes to colder levels that determine the greenhouse effect. Total absorption at lower altitudes does not eliminate the effectiveness of a particular wavenumber in mediating those effects.
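The emission-altitude argument above can be caricatured in a few lines. This is a deliberately crude sketch, assuming a single effective emission altitude on a fixed lapse-rate profile; the altitudes and the 500 m rise are illustrative choices, not fitted values:

```python
sigma = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
lapse = 6.5e-3     # assumed mean tropospheric lapse rate, K/m
T_surf = 288.0     # surface temperature, K

def olr(emission_altitude_m, T_surface=T_surf):
    """Outgoing longwave radiation if emission to space effectively
    occurs from one altitude on a fixed lapse-rate profile."""
    T_e = T_surface - lapse * emission_altitude_m
    return sigma * T_e ** 4

base = olr(5000.0)    # emission from ~5 km (T_e about 255.5 K)
raised = olr(5500.0)  # increased opacity raises the emission level
print(round(base, 1), round(raised, 1))
# OLR falls when the emission level rises into colder air; restoring
# the original OLR requires the surface to warm by about lapse * 500 m.
```

The point of the sketch is only the sign of the effect: a higher, colder emission level reduces outgoing flux until surface warming restores balance.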

      • Fred, With H2O around, that’s right, above the tropopause?

      • Vaughan,

        There are weaker lines in the sidebands of the 15 um line. They all involve the same vibrational transition, but in combination with different rotational transitions. The most important range is on the higher-energy side of the peak, i.e. at significantly larger wavenumbers than 667.66 1/cm. Those transitions correspond to the case where the upper level has a high rotational energy in addition to the vibrational energy. Such levels are excited by asymmetric collisions, and they also radiate asymmetrically to allow for the significant reduction in angular momentum.

        There’s also absorption at those wavenumbers, when a photon hits a CO2 molecule asymmetrically leading at the same time to the vibrational excitation and a major increase in its angular momentum.

        The wavenumbers most relevant to the additional GHE are over 700 1/cm and they correspond to a change of some 20 units of angular momentum or even more.

      • Fred, how much more opaque can opaque be? :) 15 microns gets all the press along with its close buddies, but a slight increase in the trivial lines may not be trivial. The trivial lines may be an order of magnitude smaller, but there are an order of magnitude more of them.

      • Dallas – see Figure 3 of Pierrehumbert’s Physics Today article. Only wavenumbers flanking 667 have a discernible effect on atmospheric CO2 effects. Quantitatively, IR absorption by CO2 is almost entirely in this region.

      • http://www.atmos-chem-phys.net/11/1167/2011/acp-11-1167-2011.pdf

        There is a figure in this article that shows the clear-sky spectrum, cloud spectrum and deep convective cloud inverted spectrum. I wanted to overlay the CO2 spectrum with the inverted spectrum, no luck, but it looks to me like more CO2 lines come into play.

      • I didn’t see anything in that paper, Dallas, to indicate a significant CO2 effect outside the region I mentioned. Note that water operates over a larger part of the spectrum.

      • CO2 has other strong absorption peaks around 2.6 um and 4.3 um, but those wavelengths fall between the ranges that are important for thermal IR (over 5 um) and solar SW (less than 2 um). Those lines can certainly be observed, but they are not important for the Earth energy balance.

      • Just to be clear, CO2 has known absorption bands far distant from the 667 wavenumber and flanking wavenumbers, but these play little quantitative role in determining atmospheric greenhouse effects due to climatically relevant changes in CO2 concentration.

      • In the link I gave, the cloud-top emission is between 190 and 210 K, which shifts the BB envelope toward lower wavenumbers. That decreases the 15 micron lines’ impact somewhat and increases the 4.5 ? micron bands. My laptop really is stubborn about letting me do the integrations to see how much impact that may have, but it looks to be in the 5% to 10% range.
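The integration described above is straightforward to do numerically. The band edges below (600–750 /cm for the 15 um band, 2300–2400 /cm for the 4.3 um band) are illustrative choices, not exact band limits:

```python
import math

h, c, k = 6.62607e-34, 2.99792e10, 1.380649e-23  # SI units, c in cm/s

def band_fraction(nu1, nu2, T, n=10000):
    """Fraction of total blackbody emission between wavenumbers nu1 and
    nu2 (cm^-1) at temperature T, by midpoint integration of the
    dimensionless Planck integrand x^3/(e^x - 1), x = h*c*nu/(k*T)."""
    a = h * c / (k * T)
    x1, x2 = a * nu1, a * nu2
    dx = (x2 - x1) / n
    s = sum((x1 + (i + 0.5) * dx) ** 3 / math.expm1(x1 + (i + 0.5) * dx)
            for i in range(n)) * dx
    return s / (math.pi ** 4 / 15.0)

for T in (288.0, 200.0):
    print(T, round(band_fraction(600, 750, T), 3),
          round(band_fraction(2300, 2400, T), 6))
# Note: a colder emitter shifts the Planck peak to lower wavenumbers
# (longer wavelengths), so the fraction near 4.3 um shrinks as T drops.
```

Running the numbers lets one check the size of the effect directly rather than eyeballing the shifted envelope.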

    • You come again to the totally different situation of the asymmetric stretching mode that’s the basis of CO2 laser. That’s a totally different situation, and you cannot draw any conclusions concerning the first bending mode from that.

      Yes you can: if the vibrational exchange takes priority as you assert, then there would be no need to use N2 for its high-efficiency vib-vib exchange; that was the whole point. You don’t understand the molecular dynamics of the collisions which are important in both the atmosphere and lasers, and that is why you are getting it wrong.

      Lasers set specific requirement for the excitations to work. There must be an efficient way of pumping a excited state of sufficient lifetime. There must also be a weakly populated lower state, where the laser transition ends, which is in this case the symmetric stretching mode.

      Which is weakly populated because of the deactivating collisions with the He atoms which are specifically added for that purpose. Note that the He atoms don’t collisionally populate those lower states.

      • Phil.

        You describe only part of the idea.

        Direct collisional transitions alone always lead to the situation that the lower state has a higher population than the upper one. That leads further to strong absorption, which prevents the laser from working. To get it running, an indirect mechanism for populating the upper level is needed. That’s the vib-vib transition through N2. Helium contributes to the direct thermal transitions and helps keep the population of the symmetric mode low, reducing the absorption further. In that you are right.

        But again: nothing in that is relevant for the atmosphere. There the direct collisional transitions dominate, and they occur as I have explained. It’s you who stubbornly keeps to wrong ideas.

      • Alexander Harvey

        Pekka,

        Others who, like me, have only limited understanding of all this might be missing an important part of the puzzle and hence find little to guide them as to the relative merits of the argument. The part I failed to get is that the origin of the excitation of the N2 vibrational mode is an electric discharge, i.e. not a state of affairs that commonly occurs in the atmosphere. Also they, as I, might not realise that the N2 vibrational mode is metastable and that interaction with CO2 offers the efficient de-excitation path. The excitation by discharge of the N2 part of the process is, I believe, efficient, and due to the reluctance of a symmetric diatomic molecule to de-excite either radiatively or by collisions not involving the vib-vib pathway, the N2-CO2 interaction is the pathway of least resistance.

        All in all this adds up to an effect that only occurs at a significant rate in the presence of a stimulus such as the electric discharge.

        I know that you comprehend how this works much better than I. What I am suggesting is that, for the benefit of those of us who don’t, it might be a good thing to spell out why this effect is uncharacteristic of the atmosphere. When matters like these are discussed in public, the public can get the wrong end of the stick.

        Hopefully I have not made too much of a hash of the physics.

        Alex

      • Alex,

        Your comments are correct. To have an effective laser, we need a transition with a highly populated upper level and a much less populated lower level. You described how we get the high population of the upper level (the asymmetric stretching mode at 2349 1/cm). Phil. made a correct statement on the lower level (the symmetric stretching mode at 1388 1/cm) in noting that collisions with helium atoms have a major role in de-exciting CO2 molecules from the lower level. The energy of the lower level is high enough to make the factor exp(-E/kT) small. Without that fact the occupation level could not be made small by any number of colliding atoms.

        The strong population inversion is essential, as all emission of the laser comes from molecules in the upper level, while every molecule in the lower level absorbs that same wavelength, which is detrimental to the operation.

        Here we have again the situation where the coupling constant is the same for transitions in both directions, and the rates of emission and absorption are determined by both the intensity of the radiation and the occupation levels of the states (but with the difference that the radiation is coherent and the strength of both emission and absorption is influenced by that, i.e. stimulated).

  87. Pekka Pirilä | September 9, 2011 at 10:47 am |
    When a CO2 molecule in ground state collides with another molecule so that the collisional energy exceeds the energy of the vibrational excitation, there is a relatively high probability that the vibrational state is excited. What the probability is numerically is not relevant for the present argument, but it’s determined by the quantum mechanical coupling between the initial state of CO2 in ground state as well as the initial velocities and rotational states of both molecules. The final state has similar parameters (velocities and rotational states), but in addition one of the two vibrational modes is now excited.

    You keep making this assertion without any justification, your whole position depends on it and you refuse to justify it. Unless you do so there’s no point in continuing the discussion.

    • Phil.

      I have justified very many of my points in detail; you haven’t justified a single one of yours.

      On this point the essential issue is the equality of the coupling constant for the related excitation and de-excitation. In addition we agree that the de-excitation by collisions is much stronger than through emission of IR. Nothing more is needed to justify my conclusions.

      Stop making empty accusations, when you cannot substantiate any of them. It’s your turn to present justification for your claims or give up.

      • Alexander Harvey

        Pekka,

        I am puzzled as to why such an effect as debated above, long lifetimes arising from low collision cross-sections, should be important. I have detailed my understanding below; perhaps you can tell me which bits I have right and which wrong.

        Alex

      • Alex,

        This argumentation is not based on the importance of the issue; I simply have the habit of trying to get erroneous physics arguments corrected. I may continue as long as the discussion is not totally deadlocked.

  88. Alexander Harvey

    If I understand correctly, there is an argument that certain vib-rot states are only weakly thermalized, i.e. have very small collision cross-sections.

    If that is the case, I think the associated spectral lines should be sharp, as they would not be broadened by collisions, given that IR emission/absorption would be the dominant pathway.

    If that is the case, the argument could be decidable by the presence of such lines in the spectra. This could be checked.

    Yet even if such lines exist, I cannot see how they would come much into play. They would be narrow, and their occupancy would be little affected by temperature but rather by their absorption, spontaneous emission, and stimulated emission rates. The populations would, I think, be determined by radiative rather than thermal equilibrium considerations, and the proposed long lifetimes might suggest low spontaneous emission rates and perhaps a not unimportant role for stimulated emission. But does this add up to a significant effect?

    What is the perceived significance of such an effect? I have looked at the CO2 spectra and it does seem to be dominated by pressure broadened lines for I have seen no others.

    Alex

    • Alex,

      I don’t really know what Phil. has in mind. His claims cannot be interpreted in terms of real physics. I have tried to explain what happens in the atmosphere, but he repeatedly brings in something that is relevant in lasers, with their population inversions, but not elsewhere.

      • Alexander Harvey

        Pekka,

        Thanks, but have I got the details broadly correct?

        Alex

      • Alex,

        If there’s any difference in the lifetimes and lineshapes of transitions involving highly excited rotational states, I would expect that they might have slightly shorter lifetimes and broader lines, because the high angular momentum might add a little to the collisional cross-section. Whether this is really the case would require proper quantum mechanical calculations, but based on classical analogy it appears plausible.

        The lifetime of the excited state is determined by the time between collisions and the linewidth is given essentially by the lifetime and the Heisenberg uncertainty principle. There may be a small coefficient in the correct calculation as the uncertainty principle gives formally only the lower limit, but the real relationship between the time between collisions and the linewidth is close to this limit.

        When the density of the atmosphere is reduced the time between collisions grows as the inverse of the density. Lower temperature has a weaker influence in the same direction as it leads to lower average speed of molecules.

    • Alexander, “If that is the case, the argument could be decidable by the presence of such lines in the spectra. This could be checked.” That’s the thing: I haven’t seen a mid-atmosphere spectrum looking up to tell. I would expect the mid-atmosphere looking up to have a stronger 15 micron peak, with a broadened spectrum looking down.

      • Alexander Harvey

        Dallas,

        The effect would, I think, show up in the standard spectra as prepared by the labs. If you do look, make sure not to use HITRAN, as they do not produce true spectral lines but measurements of spectral intensity. The other lab (PNNL?) does give true spectral lines, and I have previously seen only broadened lines.

        Alex

  89. Alexander Harvey

    Correction:

    The other lab PNNL? does give true spectral lines and I have previously seen only broadened lines.

    • Alexander, I can’t get the PNNL to do what I want. I would need to change the gas composition, density and temperature to get a spectrum for each layer. CO2 is so washed out at the surface you can’t see any change, at least I can’t. At the tropopause there are some AIRS spectra for deep convective clouds that show the inverted spectrum, but nothing I can use to say, “here is the window at this altitude and temperature, here is what it would be with double CO2.” Looking at the inverted spectra for deep convective clouds at 200K there is a pretty major change.

  90. I wonder whether we can estimate the efficiency of collisional excitation based on the following assumptions:

    1. Excitation and de-excitation rates must be equal in a layer where only radiative transfer is important in energy transfer, and which is in a steady state.

    2. The fraction of molecules with adequate translational energy can be calculated from the Boltzmann distribution and a knowledge of atmospheric temperatures (which tend to be low).

    3. The mean lifetime for spontaneous photon emission following absorption is known.

    4. The mean lifetime between absorption and collisional de-excitation can be estimated from the much greater rate of this form of de-excitation compared with de-excitation via photon emission. This gives us the de-excitation collision frequency. The excitation collision frequency will be the same, via 1 above.

    5. The total collision frequency involving molecules with adequate translational energy (see 2 above) can be calculated based on temperature and molecular concentrations. This will include both collisions that excite and those that fail to excite. The efficiency can be calculated as the ratio of successful to total collisions.

    Some of the above calculations may already have been done, which would simplify the process.
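    The steps above can be sketched numerically. This is only an illustration of the method: the Boltzmann fraction (step 2) and the mean collision time feeding step 4 use assumed round-number inputs, not measured values, and the actual excitation rate that step 4 would supply is left as a parameter.

```python
import math

# Illustrative inputs (assumed, not measured):
K_B = 8.617e-5      # Boltzmann constant, eV/K
E_VIB = 0.083       # eV, roughly the 667/cm bending-mode quantum
T = 255.0           # K, a mid-troposphere temperature
T_COLL = 1.4e-10    # s, assumed mean time between collisions

# Step 2: fraction of molecules with adequate translational energy
f_energetic = math.exp(-E_VIB / (K_B * T))

# Step 5: total rate of sufficiently energetic collisions per molecule
rate_energetic = f_energetic / T_COLL

# Step 4 would supply the actual excitation rate per molecule;
# the efficiency is then the ratio of successful to total energetic collisions
def efficiency(excitation_rate):
    return excitation_rate / rate_energetic
```

    With these assumed numbers the energetic-collision rate comes out on the order of 10^8 per second per molecule; the efficiency follows once step 4 pins down the excitation rate.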

    • Fred,

      What you describe is similar (but not identical) to what I have presented in one of the earlier messages. As usual, many things are well known, and the question is only whether people believe what physics tells them or not. Combining basic theoretical knowledge and observations, the whole picture is pretty clear.

      Doing the full quantum mechanical calculation would be a major effort; the combination of theory and empirical knowledge is a more practical approach in this case.

      • Pekka,

        These guys do a pretty good job of getting at the real, first principles quantum mechanics to figure out the types of processes you’re discussing above.

        I personally haven’t tried the code, but it does not get more basic than the quantum master equations for interactions between molecules and radiation.

      • Maxwell,

        That software seems to be developed for dynamic calculations, when the required parameters are first determined by other means. That wouldn’t provide much help for the calculation of the stationary state, where the problem is in the determination of the parameters that are needed also by that software. When these parameters are known, determining the stationary state is a trivial task of solving a small set of equations.

  91. Actually you have never made any attempt to justify your basic assumption that the vibrational state is excited preferentially when there is sufficient energy. Your whole argument rests on this. Just consider a collision with exactly the right amount of energy to excite the vibrational state: the incoming molecule would only be able to excite the transition if it made contact from one direction; otherwise it would excite rotational and translational states only.

    • Phil.

      I have justified all my important claims. I’m not saying that vibrational excitation is preferential, only that it’s rather common and, most importantly, that the excitation has the relative strength exp(-E/kT) in comparison with de-excitation. I have justified this central point as well as can reasonably be done in blog comments. This is not the place for a full course in Quantum Mechanics (and my own lecture text on Quantum Mechanics is in Finnish, and also so old that it’s not available in file form).

      You haven’t justified any of your claims.

  92. My explanation of the “missing carbon”

    Certain laws of physics can’t be denied, and the model of the carbon cycle is really set in stone. The fundamental law is one of mass balance, and every physicist worth his salt is familiar with the master equation, also known as the Fokker-Planck formulation in its continuous form.
    What we are really interested in is the detailed mass balance of carbon between the atmosphere and the earth’s surface. The surface can be either land or water, it doesn’t matter for argument’s sake.

    We know that the carbon cycle between the atmosphere and the biota is relatively fast, and the majority of the exchange has a turnover of just a few years. Yet what we are really interested in is the deep exchange of carbon with slow-releasing stores. This process is described by diffusion, and that is where we can use the Fokker-Planck equation to represent the flow of CO2.

    This part can’t be debated because this is the way that the flow of all particles works; they call it the master equation because it invokes the laws of probability and in particular the basic random walk that just about every physical phenomenon displays.

    The origin of the master model is best described by considering a flow graph and drawing edges between compartments of the system. This is often referred to as a compartment or box model. The flows go both ways and are random, and thus model the diffusional exchange illustrated here:
    http://img534.imageshack.us/img534/9016/co250stages.gif

    That shows how you would solve the system numerically. The basic analytical solution to the Fokker-Planck equation, assuming a planar source and one-dimensional diffusion, is the following:
    \frac{1}{\sqrt{2 \pi t}} \exp(-x^2/(2t))

    Consider that x=0 near the surface, or at the atmosphere/earth interface. Because of that, this expression can be approximated by
    n(t)=\frac{q}{\sqrt{t}}
    where n(t) is the concentration evolution over time and q is a scaling factor for that concentration.
    First thing one notices about this expression is that n(t) has a fat tail and after a rapid initial fall-off only slowly decreases over time. The physical meaning is that, due to diffusion, the concentration randomly walks between the interface and deeper locations in the earth. The square root of time dependence is a classic trait of all random walks and you can’t escape seeing this if you have ever watched nature in action. That is just the way particles move around.

    For CO2 concentration this in fact describes the evolution of the adjustment time, and it accurately reflects the infamous IPCC curve for the atmospheric CO2 impulse response. It is called an impulse response because that is the response that one would expect based on an initial impulse of CO2 concentration.

    But that is just the first part of the story. As an impulse response, n(t) describes what is called a single point source of initial concentration and its slow evolution. In practice, fossil-fuel emissions generate a continuous stream of CO2 impulses. These have to be incorporated somehow. The way this is done is by the mathematical technique called convolution.

    So consider that the incoming stream of new CO2 from fossil fuel emissions is called F(t). This becomes the forcing function.
    Then the system evolution is described by the equation
    c(t)=n(t)*F(t)
    where the operator * is not a multiplication but signifies convolution.

    Again, there is no debate over the fundamental correctness of what has been said so far. This is exactly the way a system will respond.

    If we now put this into practice and see how well it describes the actual evolution of CO2, we can understand every nagging issue that has haunted skeptical observers. It really all becomes very clear.

    For the forcing function F(t) we use a growing power law.
    F(t) = k t^N
    where N is the power and k is a scaling constant.

    This roughly represents the atmospheric emissions through the industrial era if we use a power law of N=4. See the following curve:
    http://2.bp.blogspot.com/_csV48ElUsZQ/S-DLIeFygSI/AAAAAAAAARs/zsfTRgmzy9Y/s1600/emissions.gif

    So all we really want to solve is the convolution of n(t) with F(t). By using Laplace transforms on the convolution expression, the answer comes out surprisingly clean and concise. Ignoring the scaling factor:
    c(t) \sim t^{N+1/2}

    With that solved, we can now answer the issue of where the “missing” CO2 went. This is an elementary problem of integrating the forcing function, F(t), over time and then comparing the concentration, c(t), to this value. The ratio of c(t) to the integral of F(t) is the fraction of CO2 that remains in the atmosphere.
    Working out the details, this ratio is:
    q \sqrt{\frac{\pi}{t}}\frac{(N+1)!}{(N+0.5)!}
    Plugging in numbers for this expression, q=1, and N=4, then the ratio is about 0.28 after 200 years of growth. This means that 0.72 of the CO2 is going back into the deep-stores of the carbon cycle, and 0.28 is remaining in the atmosphere.
    If we choose a value of q=2, then 0.56 remains in the atmosphere and 0.44 goes into the deep store. This ratio is essentially related to the effective diffusion coefficient of the carbon going into the deep store.

    Come up with a good number for the diffusion coefficient, which is related to q, and we have an explanation of the evolution of the “missing carbon”.

    BTW, this Wolfram Alpha page is useful for modifying the numbers.
    http://www.wolframalpha.com/input/?i=1*sqrt%28pi%2F200%29*%284%2B1%29!%2F%284%2B0.5%29!
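    The closed-form ratio above is easy to check numerically. A minimal sketch, assuming the q and N values given in the text (the gamma function stands in for the factorial of non-integer argument, x! = gamma(x+1)):

```python
import math

def airborne_fraction(q, N, t):
    # ratio of atmospheric concentration c(t) to cumulative emissions:
    # q * sqrt(pi/t) * (N+1)! / (N+0.5)!
    return q * math.sqrt(math.pi / t) * math.gamma(N + 2) / math.gamma(N + 1.5)

r1 = airborne_fraction(q=1.0, N=4, t=200.0)  # ~0.28, as stated above
r2 = airborne_fraction(q=2.0, N=4, t=200.0)  # the ratio scales linearly with q
```

    This reproduces the quoted 0.28 airborne fraction for q=1, N=4 after 200 years.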

  93. Pekka, Vaughan, Phil.

    I am bringing this down in hopes that the discussion will continue here where I can follow it better.

    Vaughan Pratt | September 11, 2011 at 11:53 am |

    Pekka, let me frame my question more clearly to reduce the chances of our speaking at cross purposes. It seems to me that the remarkably rapid internal (i.e. non-emitting) transitions of an excited CO2 bond to the lowest excited state could occur either spontaneously or as the result of a collision.

    For the latter, a 500 m/s molecule at STP has a mean free path of 70 nm and hence a mean time between collisions of 70/500 = 0.140 ns or 140 picoseconds. At that rate I could well believe that the majority of, or even all, internal transitions could be explained as being the result of collisions. But I am still curious about the possibility of the opposite: could most internal transitions happen internally and even faster?

    But now that I think more about it, I think I can answer my own question in the negative: it must be, as you say, almost all induced by collision.

    My reasoning has to do with where the energy would go. This is easily answered for the case of a collision: it is redistributed among the degrees of freedom of the colliding molecules. But these include translational (and hence effectively nonquantized) energy states, whence the bin-packing-type question of matching resonances need not arise. With spontaneous transition the only available degrees of freedom where the energy could go would have to be vibrational (not translational or rotational since momentum must be conserved) and since these are quantized and few in number (since they’re in the same molecule) such a transition will be unlikely and therefore slow if at all.

    But I also think the question is purely academic as far as the atmosphere is concerned, because I don’t see how the answer could make any difference to any observable property of the atmosphere.

    For the atmosphere, the important difference is between internal and emitting transitions. Here my understanding is that you and I part company on the relevance to the atmosphere of Kasha’s rule, that emission is primarily from the lowest excited state and that higher states decay to that state via internal transitions.

    I understand the basis for your position on this is that the phenomenon is only observed at higher temperatures than found in the atmosphere.

    I would agree with this if we were talking about a volume of gas typical of a laboratory experiment involving say a cubic meter of gas. However we’re not. A column of atmosphere with cross section 1 square centimeter (the preferred area unit in spectroscopy) weighs 1 kg. The CO2 in it weighs 0.6 g, which is 0.6/44 = 0.014 moles. The number of CO2 molecules in that column is .014 * 6.022 * 10^23 = .084 * 10^23, or about 10^22.

    But ¹²C¹⁶O₂ (the main CO2 species) contains 313 lines of strength 10^-22 or more in the spectrum from 126 to 1990 /cm (chosen to cover 98% of radiation at 288 K) and another 330 lines going down to 10^-23. Ignoring the nuances of Lorentz line shape, multiplying line strength by number of molecules in a square cm column gives an estimate of optical thickness of that column. Optical thickness one, what you get with the product of 10^22 and 10^-22, is where blocking sets in.

    This radically changes the probabilities of absorption leading to a high excited state at weakly absorbing lines. The top 200 or so most strongly absorbing lines of the main CO2 species are irrelevant to global warming because they’re well above unit optical thickness and hence essentially completely blocked, as Angstrom would have happily pointed out. It is the next couple of hundred lines that impact global warming, the theory of which crucially depends on their being so weak.

    But the weakly absorbing lines aren’t at the lowest excited state.

    In a lab setting, these particular weakly absorbing lines would not be noticed (the point you’ve been making) because the strong lines would do all the absorbing and you would therefore dismiss the weak ones as inconsequential. In the atmosphere however they represent the difference between global warming being a problem or not. Angstrom’s argument is refuted by the role played by these weaker lines, which become relevant when the CO2 reaches the corresponding level.

    This, I claim, is where Kasha’s rules have to enter.

    The relevance of Kasha’s rules to global warming is that emission and absorption are not at all symmetric over the region of the spectrum sensitive to changes in CO2 level.

    The reason you think Kasha’s rules are irrelevant is (if I understand you) because that’s a weakly absorbing region and therefore not observable in a lab setting. But we’re not in a lab, we’re in the atmosphere, and it is those weakly absorbing lines that are responsible for continued global warming.”

    And in a second post,
    “Vibrational energy, which is quantized and has only one relevant level of excitation, that corresponding 15 um IR radiation.

    But Pekka, that level corresponds to the line at 667.661347 /cm or 14.97766 um, an excitation energy of h*c*66766/ev = 0.083 electron volts, where ev = joules per electron volt = 1.602 * 10^-19 joules. Its strength is 2.981 * 10^-19 cm (= per /cm) per (molecule per sq cm). That line has an air-broadened halfwidth of 0.074 /cm (which I forgot to divide by in my previous message to you today, and so understated optical thickness by a factor of 14). There are about 10^20 molecules per sq cm, 8.21 * 10^19 to be more precise. Hence the optical thickness is 2.981/0.074*8.21 = 331 >> 1, hence completely opaque, hence completely irrelevant to further global warming (Angstrom’s argument).

    In order to find levels of excitation that are relevant to continuing global warming we must identify those with optical thickness less than unity. Optical thickness > 1 remains true down to line strength 9 * 10^-22. There are 898 lines stronger than that in the band 126 to 1990 /cm, where 98% of black body radiation at 288 K resides. (My earlier figure of 313 lines resulted from forgetting to divide by line halfwidth.) The 899th line is at 2351.004897 /cm (4.254 um), corresponding to an excitation energy of h*c*235100/ev = 0.2915 electron volts. Its strength is a weak 8.995*10^-22, which you see as making it irrelevant but which I see as making it relevant.

    Its excitation level is at 3.5 times the energy of the one you claim is the only relevant one.”
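    The arithmetic in the quoted posts is straightforward to verify. A small sketch, using the collision parameters and line parameters exactly as quoted above (the constants are standard physical constants):

```python
# Standard physical constants
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron volt

# Mean time between collisions: 70 nm mean free path at ~500 m/s
t_coll = 70e-9 / 500.0                      # ~1.4e-10 s, i.e. 140 ps

# Excitation energy of the 667.661347 /cm line (66766 per metre)
e_bend = H * C * 66766.13 / EV              # ~0.083 eV

# Optical thickness: line strength / halfwidth * column density, as quoted
tau = 2.981e-19 / 0.074 * 8.21e19           # ~331, i.e. completely opaque
```

    All three numbers come out as stated: 140 ps between collisions, 0.083 eV for the bending-mode quantum, and an optical thickness of about 331 for the strong 15 um line.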

    • Dallas,

      I add some comments here in addition to what I answered already further up in the thread.

      There are no internal transitions. It’s that simple.

      There cannot be any internal transitions, because they would not conserve energy except for degenerate states of equal energy, but a transition between degenerate states would not make any difference, and such transitions are also forbidden by other rules.

      We have only the very frequent collisional transitions and the much less common radiative transitions. The high frequency of collisional transitions doesn’t significantly influence the number of radiative transitions, as long as the collisional excitations and de-excitations balance each other, maintaining the occupation level of the excited state; and they do indeed maintain the occupation determined by temperature, in accordance with the relative occupation exp(-E/kT). The high frequency of collisions leads, however, to line broadening.

      • There cannot be any internal transitions, because they would not conserve energy except for degenerate states of equal energy, but a transition between degenerate states would not make any difference, and such transitions are also forbidden by other rules.

        I just realized that Kasha in his July 3 1950 paper titled “Characterization of Electronic Transitions in Complex Molecules” uses the terms “radiationless transition” for what I was calling “internal transition.”

        (He further classifies radiationless transitions in complex molecules into two kinds, “internal conversions” which he defines as the “rapid radiationless combination of excited electronic states of like multiplicity (combination in the spectroscopic sense of undergoing transition between)” and “intersystem crossing” which he defines as “the spin-orbit-coupling-dependent internal conversion. In most cases this is the radiationless transition from the lowest excited single level to the lowest triplet level of the molecule.” However I’m not sure how this distinction applies here.)

        Anyway let me stick to Kasha’s terminology. We can then recycle my “internal transition” to mean a spontaneous transition involving neither collisions nor emissions (which sounds like what you thought I meant all along). And as I wrote above, “But now that I think more about it, I think I can answer my own question in the negative: it must be, as you say, almost all induced by collision.” So it looks like you’ve convinced me that there are no internal transitions in this sense of collisionless radiationless transitions, and for the reason you give: energy conservation.

        Although Kasha does not say (at least in his 1950 paper cited above) what causes internal conversions, we seem to be in agreement that they must be caused by collisions. Puzzling that Kasha doesn’t say so though.

        The high frequency of collisions leads, however to line broadening.

        Yes, presumably in combination with Doppler broadening to yield the Voigt profile.

      • Vaughan,

        There is also some Doppler broadening, but it’s important only at the lower densities of the upper atmosphere. Collisional (or pressure) broadening dominates in the troposphere. As Doppler broadening also has a thinner tail than collisional broadening, its effect is not noticeable at higher pressures.

        The Science of Doom seems to have a nice article with pictures on line broadening.

    • Dallas – I too responded above, but it’s worth reiterating here. The opacity that counts is in the neighborhood of the tropopause. A line that is completely “saturated” at low altitudes is capable of mediating significant greenhouse effects in response to changing CO2 if its optical thickness, tau, at the tropopause does not greatly exceed unity. Almost all relevant wavelengths for CO2 are in the neighborhood of 15 um (wavenumber 667) and surrounding regions, where the vibrational bending modes (plus different rotational transitions) are important. The asymmetric stretch region at about 4 um plays only a minor role in our atmosphere.

      • Fred I agree, that’s why I linked to the deep convective cloud paper. http://www.atmos-chem-phys.net/11/1167/2011/acp-11-1167-2011.pdf

        The cloud tops range from 190K to 210K, which shifts the black body temperature envelope a little. It looks to me like the 4.5 micron CO2 band gets a boost from the shift. While the surface is radiating at around 288K, the water and ice in the clouds would blend the emission bands, except for the 10 micron main window (and a few other gaps), into the bands where CO2 was overwhelmed at the surface by water vapor. It looks like an interesting interaction from both below and above.

  94. Vaughan wrote further up in the thread:

    Optical thickness 1 is the level of greatest sensitivity to changing density D. Thicknesses outside the interval [1/5, 5] are effectively either completely transparent or completely opaque.

    That must be put together with the above comment of Fred. Doing that we notice that there must be a layer that satisfies two requirements to have a significant effect on GH warming:

    1) For that layer the optical thickness must be in the interval [1/5, 5]

    2) The temperature difference between the top and the bottom of the layer must be significant.

    These two conditions may hold for the whole troposphere, but they may also hold for a thinner layer, say the topmost 3 km of the troposphere. We have a temperature difference of 15-30 K over such a layer, but the optical thickness of that layer is a small fraction (less than 1/10) of that of the whole troposphere, due to the lower density of the upper troposphere. Thus even radiation with a much higher (up to several tens or even 100) total optical thickness for the whole troposphere may contribute significantly to GH warming.
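    The density argument behind point 2 can be illustrated with a rough sketch, assuming an exponential density profile with a scale height of about 8 km and a 12 km troposphere (both assumed round numbers; pressure broadening would thin the upper layer’s optical contribution further than pure density does):

```python
import math

H_SCALE = 8.0   # assumed density scale height, km
Z_TOP = 12.0    # assumed tropopause height, km

def column_fraction(z_lo, z_hi):
    # fraction of the tropospheric column density lying between z_lo and z_hi,
    # for a density profile proportional to exp(-z / H_SCALE)
    total = 1.0 - math.exp(-Z_TOP / H_SCALE)
    return (math.exp(-z_lo / H_SCALE) - math.exp(-z_hi / H_SCALE)) / total

frac = column_fraction(9.0, 12.0)  # topmost 3 km: on the order of a tenth
```

    The topmost 3 km then holds roughly a tenth of the tropospheric column, consistent with the small-fraction argument above.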

    • (I’ll follow Dallas’s example and try to stay near the left margin.)

      Pekka, you wrote

      The total set of possible transitions is too simple for any such effect to be present.

      What do you mean by “simple” as applied to a set of transitions? A small set?

      I’ve been relying on the HITRAN08 tables, which lists 128170 lines for the ¹²C¹⁶O₂ species alone. 32318 of these are in the range 126 to 1990 /cm (5.03 μ to 79.42 μ) which I’ve been taking as the relevant portion of the IR spectrum as it covers 98% of radiation from a body at 288 K (knocking off 1% from each end), though the extremes of that range may not be so relevant. Are you saying that most of these are irrelevant to the greenhouse effect? And if so why? Because some of them are too close to the extreme ends of the above spectral range? Because some of them are in regions that other GHGs have already rendered essentially completely opaque? Something else?

      HITRAN was originally developed for the terrestrial atmosphere, and all the line strengths have been standardized to a nominal 296 K, so it can’t be that some of the lines are only relevant to high temperatures (although high temperatures may be used to locate the weaker lines in the first place).

      I would be happy to rework what I wrote for just 50% of the relevant IR spectrum if it seems appropriate, as obtained by knocking 25% off each end so as to be sure there are no irrelevant lines (but then some relevant lines may go missing).
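      The 98% figure for the 126 to 1990 /cm band at 288 K can be checked by integrating the Planck function in wavenumber. A sketch using simple trapezoidal integration (the normalization pi^4/15 is the full-spectrum integral of x^3/(e^x - 1)):

```python
import math

C2 = 1.4388  # second radiation constant, cm K

def planck_band_fraction(nu1, nu2, T, n=20000):
    # fraction of blackbody power between wavenumbers nu1 and nu2 (1/cm) at T (K),
    # via the dimensionless variable x = C2 * nu / T
    x1, x2 = C2 * nu1 / T, C2 * nu2 / T
    h = (x2 - x1) / n
    f = lambda x: x**3 / math.expm1(x)
    s = 0.5 * (f(x1) + f(x2)) + sum(f(x1 + i * h) for i in range(1, n))
    return s * h / (math.pi**4 / 15)

frac = planck_band_fraction(126.0, 1990.0, 288.0)  # ~0.98
```

    The band does indeed capture about 98% of the 288 K blackbody emission, with roughly 1% lost off each end.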

      • Vaughan,

        By simple I mean essentially that none of the states that can be involved is metastable due to a lack of allowed transitions to the ground state. It means also that every state that is significant through some series of transitions is significant through direct transitions between the ground state and that state. All states of significance can be listed – and have been listed in this thread. They all behave in a straightforward way that guarantees that their relative occupation levels are proportional to exp(-E/kT), or should I write now e^{-\frac{E}{kT}}.

  95. By simple I mean essentially that none of the states that can be involved is metastable due to lack of allowed transitions to ground state.

    Where can one read about metastable states of CO2 bonds? Google turned up lots of stuff about metastability, states, CO2, and bonds, but nothing at all about metastable states of CO2 bonds.

    In any event how is metastability relevant to global warming? If a CO2 molecule captures a photon at one of the 128170 wavenumbers for the first species, with a probability as prescribed by the HITRAN table, why does that not count as a failure of that photon to help cool the Earth?

    Does the capturing CO2 molecule then enter a state where it can’t capture any more for a relatively long time? If so why wouldn’t that be reflected in the HITRAN table as a lower strength for that line?

    All states of significance can be listed – and have been listed in this thread.

    Couldn’t find it; a hint would be helpful. How long is the list?

    proportional to exp(-E/kT) or should I write now e^{-\frac{E}{kT}}.

    Prettier but less readable. One of e^{-\frac{E}{kT}} or e^{-\frac{E}{kT}} (&s=1 or &s=2 before the final $) might be ok though.

    • On second thought, even metastable states (i.e. states that cannot decay through the same mechanisms that make the other states short-lived) wouldn’t make any difference by themselves. The occupation levels deviate from the rule exp(-E/kT) only when we have some mechanism of excitation that’s not restricted by the local temperature. That means in practice that we have some higher-energy radiation from an external source. In lasers that’s provided by pumping. Even the vib-vib transition would have no influence unless the N2 molecules were excited by an external source of energy, which is an electron discharge in that case.

      All the other examples discussed in this thread are also related to energy states far beyond thermal excitations that are excited by some other type of radiation. Such states may lose part of their energy through soft radiation or through interaction with neighboring material in solids or in large complex molecules. There are two essential features in all these cases:

      1) The initial excitation is at an energy level not reached by thermal excitation at all or much less than in the case considered.

      2) There are some allowed and fast mechanisms that modify the initial excited state. That may be the vib-vib transition between the N2 and CO2 molecules in the CO2 laser, or soft radiation in the Stokes shift, or phonon emission in a solid, or some internal effect in a complex and very large molecule.

      Both points fail in the atmosphere. For the first point, solar radiation could offer a source of energy, as the sun is certainly not in thermal equilibrium with the atmosphere. Something of that nature may happen in the outer stratosphere with UV, but frequent collisions make that impossible in the troposphere, even for hard UV. When frequent collisions are the dominant cause of transitions, all molecular effects get thermalized very rapidly.

      The possible states are combinations of
      1) translational states (a continuum following the Maxwell-Boltzmann distribution);
      2) rotational states (discrete, with level spacing of order \hbar^2/I) with tens of levels present for molecules of two or more atoms;
      3) vibrational states (one relevant energy level for CO2 and a few for H2O, O3 and CH4).

      The three components can be combined without limitations. Molecular collisions provide by far the most efficient mechanism for all transitions between these states throughout the troposphere and also in lower stratosphere.

      The list is this short. Everything else requires more energy than available from thermal excitation by molecular collisions or LWIR.
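
      The Boltzmann factor exp(-E/kT) above can be made concrete with a short sketch (my round numbers, not part of the comment): at 288 K the CO2 bending mode at 667 cm⁻¹ has a thermal population of a few percent, the asymmetric stretch is already down to ~10⁻⁵, and electronic excitations are utterly out of reach.

```python
import math

# Boltzmann factor exp(-E/kT) for a state of energy E (given in cm^-1) at temperature T (K).
# hc/k = 1.4388 cm*K is the second radiation constant.
def boltzmann_factor(energy_cm1, temp_k):
    return math.exp(-1.4388 * energy_cm1 / temp_k)

T = 288.0  # typical surface temperature, K
states = {
    "CO2 bending mode (667 cm^-1)": 667.0,
    "CO2 asymmetric stretch (2349 cm^-1)": 2349.0,
    "typical electronic excitation (~5 eV ~ 40000 cm^-1)": 40000.0,
}
for name, e in states.items():
    print(f"{name}: thermal occupation factor {boltzmann_factor(e, T):.2e}")
```

      Only the lowest vibrational state carries a non-negligible thermal population; everything above it is essentially unreachable by molecular collisions, which is the point being made.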

      • Pekka, That agrees with what my few surviving brain cells are telling me. Though I do still have a small issue with the radiation window, where H2O has less impact. The shape of the CO2 spectrum is not going to change significantly. The lines blocked by H2O become more important, though the distribution of energy across the spectrum remains virtually the same. This is the point where CO2 change has the most impact on surface temperature.

        For the sake of argument, let’s neglect convection for a moment. The difference between the overall outgoing spectrum and the overall downward spectrum now includes the temperature shift, which changes with cloud-top height. How significant is that impact?

        Clouds are the source of most of the uncertainty. To me it seems that clouds would tend to have a slightly negative overall feedback, with only very thin high clouds having a positive feedback, which would decrease with more water vapor.

      • Dallas – You’re right that an overall increase in clouds would tend to increase heat loss to space, because the daytime albedo effect would outweigh the full-day longwave positive forcing, which is greater for high clouds than low clouds. Cloud feedback, though, involves both positive and negative changes in cloud amounts as well as the ratios of high to low clouds. An overall reduction in clouds, or a reduction in low clouds, would constitute a positive feedback, and an increase in high to low cloud ratios would do the same.

        Over recent decades, the HIRS and ISCCP data (also see ISCCP part 1) have tended to go in these directions, although there is some disagreement between them, and also some fluctuations over intervals of a few years or thereabouts, including recently. As a generalization, though, it appears that low (cooling) clouds have tended to remain constant or decrease over much of the assessed intervals, and that the high-cloud/low-cloud ratios have tended to increase.

        It would probably be an overreach to conclude that these data conclusively demonstrate a clear positive feedback on the warming between the late 1970s and today, but they do appear to be inconsistent with a strongly negative long term cloud feedback.

      • Fred,

        I love the way the discussion has so quickly and quietly moved from a strong positive cloud feedback as espoused by the IPCC for, what, over 15 YEARS to suddenly the data does not support a strong NEGATIVE cloud feedback??

        Dude, where are the models without the positive cloud feedback????

        I could care less if there is no overall negative cloud feedback!!

      • The cloud discussion is a red herring.

        It is better to compare ocean heat content with TOA to provide lines of confirmation for the importance of decadal cloud change in recent global energy dynamics.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Wong2006figure7.gif&newest=1

        You can see that global warming parallels the TOA flux change that can in no way be attributed to greenhouse gases.

      • kuhnkat,
        Excellent point.
        The AGW community has practice in moving quickly and quietly from the areas where their predictions or underlying claims fail.
        Think of why the term “Global Warming” is no longer used much by the community. Think of when it was predicted that hurricanes would increase in frequency and strength. Think of when it was predicted that snowfall in Britain was over and that the Himalayan glaciers would be gone by 2035. Think of how Australian floods were a thing of the past.
        All quietly gone into the memory hole.
        Now that strong positive feedbacks from clouds and water vapor are failing, we will see the faithful doing what they are doing right now.
        Pretty soon, we will start hearing ideas that CO2 is the climate controller of the gaps.

      • Fred, I don’t think cloud impact can be interpreted as strong in either direction up to this point in time. That is why I think most are missing the subtle changes which when synchronized make larger changes. Tsonis et al seem to be on the right path, only they have yet to determine the magnitude of the changes, which when done, should show that minor solar variation can have a larger impact when synchronized with internal variations. Then when control theory is used properly, the climate system would be more understandable, I don’t think accurately predictable, but more understandable.

      • kihnkat – The ISCCP and HIRS data are consistent with a strong positive cloud feedback long term, as estimated by all or almost all the models. My point was that “consistent” and “proven” are not the same thing. There is nothing in the data, however, to indicate that the models are wrong. More specifically, the models all estimate the long term longwave (IR) cloud feedback to be positive, but many also estimate the shortwave (albedo-based) feedback to be positive, and the ISCCP and HIRS data are most relevant to this point. The data since the 2007 AR4 report have done nothing to move these conclusions in either direction.

        As to the effect of a cloud feedback that is hypothetically close to neutral, that would leave estimated CO2 climate sensitivity at about the level of 2 C or slightly more, based on the other feedback parameters. A strong negative cloud feedback would be required to render the composite net feedbacks negative.

      • kuhnkat – I apologize for twice misspelling your name. It was a careless, not a Freudian, slip. When I misspell a common word, I’m likely to catch it, but if it’s a name, I’m more likely to overlook the mistake.

      • Fred,

        even if you misspelled my name on purpose to irritate me it isn’t a problem. I thank you for the apology though.

        You still have a healthy belief in unvalidated models I see.

      • The list is this short. Everything else requires more energy than available from thermal excitation by molecular collisions or LWIR.

        Now I’m completely confused. My understanding was that every line in the HITRAN tables was relevant to absorption of terrestrial radiation. If you’re saying this is false then you would be saying that those using the tables are applying it incorrectly. If that’s not what you’re saying, then how is what you’re saying relevant to those using the tables?

      • There are certainly very many lines related to different rotational states around the single important purely vibrational line of CO2. There are also the higher energy vibrational states that affect the result enough to be observable in a full calculation, but not enough to be of real importance.

        There are more lines for the H2O and for the other multiatomic nonlinear molecules as they have both more relevant vibrational states and a much more complex rotational spectrum.

        All these states and transitions are, however, combinations of those few basic types.

      • There are also the higher energy vibrational states that affect the result enough to be observable in a full calculation, but not enough to be of real importance.

        Well, certainly. At its current level of 390 ppmv, CO2 has reached the point where unit optical thickness is achieved for lines of strength 8.53 * 10^-24 cm⁻¹/(molecules/cm²) or cm/molecule to put the units more simply. This is at the 4.5% point of line strengths, that is, well over 90% of what’s listed in HITRAN is of no importance to global warming.

        But even if CO2 shot up to 40%, a thousand-fold increase, unit optical thickness or more would still only be achieved at 8.53 * 10^-27 cm/molecule, represented by the top 24% of all CO2 lines.

        For the record, here’s how I figured this. The atmosphere weighs 5140 teratonnes and a mole of it weighs 28.97 g, whence it consists of 5140/28.97 = 177.4 examoles. CO2 is .39 parts per thousand of that, hence .39 * 177.4 = 69.2 petamoles. The Earth’s surface is 510 square megameters or 5.1 * 10^18 cm², whence above each sq cm is 69.2/5.1 = 13.6 millimoles of CO2. Avogadro’s number is 0.6022 * 10^24, whence there are 13.6 * 0.6022 = 8.17 * 10^21 CO2 molecules above each sq.cm. Until I’m informed otherwise, optical thickness = SD/W where S = line strength, D is molecular density in molecules/sq.cm., and W is line width in cm⁻¹. Unit optical thickness is when S = W/D = .07/8.17 = .00856 * 10^-21 or 8.56 * 10^-24 cm/molecule.
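
        As a check, the arithmetic above can be reproduced in a few lines. The inputs are the commenter’s own round figures, including the assumed line width W = 0.07 cm⁻¹; none of them should be read as authoritative values.

```python
# Reproduce the column-density arithmetic from the comment above.
ATM_MASS_G = 5.14e21        # ~5140 teratonnes of air, in grams
MOLAR_MASS_AIR = 28.97      # g/mol
CO2_VMR = 390e-6            # 390 ppmv, i.e. 0.39 parts per thousand
EARTH_AREA_CM2 = 5.1e18     # ~510 million km^2
AVOGADRO = 6.022e23         # molecules per mole

air_moles = ATM_MASS_G / MOLAR_MASS_AIR             # ~1.77e20 mol (~177 examoles)
co2_moles = air_moles * CO2_VMR                     # ~6.9e16 mol (~69 petamoles)
D = co2_moles / EARTH_AREA_CM2 * AVOGADRO           # CO2 molecules above each cm^2

W = 0.07                                            # assumed line width, cm^-1
S_unit = W / D                                      # line strength giving optical thickness SD/W = 1
print(f"column density D = {D:.3g} molecules/cm^2")
print(f"unit-optical-thickness line strength = {S_unit:.3g} cm/molecule")
```

        This recovers D ≈ 8.2 × 10²¹ molecules/cm² and a unit-optical-thickness line strength of ≈ 8.6 × 10⁻²⁴ cm/molecule, matching the comment to rounding.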

      • I should add that the difficulty I was having before was with what sounded like your implication that not all lines in HITRAN tables were relevant, raising the possibility that users of the tables were using lines that weren’t relevant.

        You were in fact implying just that, but only in the sense that you were referring to transitions between energy levels so near each other that a photon at that line would have better than a 99.9% chance of passing through the whole atmosphere without being captured. My “well, certainly” in my previous message just meant that users of the tables automatically took all that into account.

        But this has not got us any closer to resolving my original question. Let me state it again, modified so we don’t get sidetracked again by lines that (if I’ve understood you) we now both agree are irrelevant.

        For definiteness let’s focus on line strengths near 10^-23 cm/mol. as being those whose transmissivity e^{-\frac{SD}{W}} with respect to the total column (of mass 1 kg/cm²) is changing the fastest with increasing CO2 today, the optical thickness SD/W being about 1. Absent other lines nearby, and ignoring aerosols, photons at such lines will have a mean free path in the atmosphere on the order of several kilometers, specifically the scale height of the atmosphere, around 8 km.

        What I want to know is the extent to which we can trust Kirchhoff’s law of radiation at such a weakly absorbing line. I take this law to be that the CO2 in a parcel of air emits at that line at the same intensity as it absorbs.

        The possibility of a collision-induced radiationless transition to a lower energy state creates a loophole for this law. Molecules might absorb at that line but not emit there, preferring instead to drop to a lower state (say the lowest excited state) following a collision.

        1. I see no reason why Kirchhoff’s law (in this sense) should hold.

        2. Kasha’s rules show that it doesn’t hold in such situations. You said these situations only happen at high temperatures, but why can’t they also happen at low probabilities of absorption as obtain for lines of strength 10^-23?

        Actually it should be possible to resolve all this by direct appeal to the Einstein coefficients for CO2 for each line, in particular the coefficient A_{21} for spontaneous emission. This should give a quantitative answer to my question, unless you see something I’ve neglected.

      • Concerning the lines of CO2 only those are of significance that are both strong enough and in the right wavelength range, i.e. either in the range of significant thermal LWIR (5.8-70 um or 150-1700 1/cm) or in the range of strong incoming solar SW (0.3-3 um or 3300-33000 1/cm). The ranges are chosen to include 95% of the corresponding black body energy spectrum. The second relevant vibrational state for CO2 is the asymmetric stretching mode at 4.3 um and falls just between the relevant ranges (the symmetric stretching mode doesn’t absorb or radiate due to its symmetry).

        This is the reason for the fact that only the lowest vibrational state is relevant.
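
        The quoted 95% coverage of the thermal range can be verified numerically (a sketch of my own, not part of the original comment): integrate the Planck spectrum of a 288 K black body over 150-1700 cm⁻¹ and compare with the total. In the dimensionless variable x = hcν/kT the spectral energy density is proportional to x³/(eˣ − 1) and the full integral is π⁴/15.

```python
import numpy as np

C2 = 1.4388          # second radiation constant hc/k, in cm*K
T = 288.0            # K, a typical surface temperature

def band_fraction(nu_lo, nu_hi, temp):
    """Fraction of black-body energy between wavenumbers nu_lo and nu_hi (cm^-1)."""
    x = np.linspace(C2 * nu_lo / temp, C2 * nu_hi / temp, 20001)
    y = x**3 / np.expm1(x)
    # trapezoidal integration, written out for portability across NumPy versions
    integral = float(((y[:-1] + y[1:]) / 2 * np.diff(x)).sum())
    return integral / (np.pi**4 / 15)

frac = band_fraction(150.0, 1700.0, T)
print(f"fraction of 288 K black-body energy in 150-1700 1/cm: {frac:.3f}")
```

        The result comes out close to 0.95, consistent with the stated choice of range.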

        Concerning Kasha’s rule I can only repeat that it’s not relevant at all, because of the type of the excitations and because it’s canceled when excitations are in local thermal equilibrium. It may become relevant when the excitation is not in local thermal equilibrium, and in practice that means it’s relevant specifically for electronic states of far higher energy. When we have local thermal equilibrium, transitions in both directions proceed at the same rate, while Kasha’s rule applies to the situation where the inverse transition practically doesn’t occur at all.

        The description of Kasha’s rule in Wikipedia tells what it’s about. In that case there are fast internal transitions, because the electronic states and the vibrational states couple strongly. The electronic excitation energy is transferred to vibrations, but the inverse process doesn’t occur to any significant extent, because the vibrational states remain weakly populated, as even their energies are above the range where populations are high.

        The higher-energy states in the case of CO2 are combinations of vibrational and rotational excitation. There are no possible internal transitions where the rotational energy could be lowered. Furthermore, we have local thermal equilibrium, where collisions maintain the same rate of transitions in both directions when they are involved.
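
        The local-thermal-equilibrium point can be put in numbers (order-of-magnitude values of my own; the Einstein coefficient and collision time below are assumptions, not figures from the comment): a tropospheric CO2 molecule undergoes billions of collisions per radiative lifetime, so its level populations track the local temperature.

```python
# Order-of-magnitude comparison of collisional vs radiative time scales.
A21 = 1.0                 # assumed Einstein A coefficient for the 15 um band, ~1 s^-1
radiative_lifetime = 1.0 / A21   # seconds

mean_free_time = 1.5e-10  # assumed mean time between molecular collisions at sea level, s
collisions_per_lifetime = radiative_lifetime / mean_free_time
print(f"collisions per radiative lifetime: {collisions_per_lifetime:.1e}")
# ~1e10 collisions occur before a typical excited molecule would radiate spontaneously:
# the excitation is almost always thermalized, so emission follows the local temperature.
```

        With this many collisions per radiative lifetime, any excess or deficit of excitation relative to exp(-E/kT) is wiped out long before spontaneous emission matters, which is why transitions in both directions proceed at the same rate.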

  96. ‘The concept of chaos is misused all the time, and Lorenz may be one of those to blame for that.’ Pekka

    http://judithcurry.com/2011/08/24/co2-discussion-thread/#comment-111711

    We are speaking about real-world phenomena that show characteristics of chaos: abrupt change, increases in autocorrelation or ‘slowing down’, noisy bifurcation or ‘dragon-kings’. In practice these are sudden shifts in standing waves in ocean and atmosphere, in spatial and temporal dimensions. They involve macro-physical phenomena – ice, cloud, ocean currents, SST, winds, SLP – in multiple feedbacks with little-understood thresholds. They are in principle deterministic but practically incalculable. So it is a dynamic spatio-temporal chaotic system.

    ‘Researchers first became intrigued by abrupt climate change when they discovered striking evidence of large, abrupt, and widespread changes preserved in paleoclimatic archives. Interpretation of such proxy records of climate—for example, using tree rings to judge occurrence of droughts or gas bubbles in ice cores to study the atmosphere at the time the bubbles were trapped—is a well-established science that has grown much in recent years. This chapter summarizes techniques for studying paleoclimate and highlights research results. The chapter concludes with examples of modern climate change and techniques for observing it. Modern climate records include abrupt changes that are smaller and briefer than in paleoclimate records but show that abrupt climate change is not restricted to the distant past.’

    The non-linear partial differential equations are a chaotic system. Earth systems are likewise chaotic – but a different system entirely. As are other chaotic systems – populations, ecologies, economies, earthquakes etc. They have a reality independent of math.

    Cheers

    • Rob,

      I agree that there’s evidence for phenomena of the type you list. They are also theoretically plausible. With all the complexities and stochasticity, however, the phenomena are likely to be less specific than corresponding phenomena might be in some deterministic models, which also have far fewer variables than a maximally true description of the real Earth system.

      Thus we may expect something with features of bifurcation without real bifurcation, and similarly with all the other phenomena. “Something like bifurcation” might behave as a real bifurcation for a while, but stay within an ensemble of states that cannot really be divided into two parts in the spirit of bifurcation. What I’m saying does not add to our ability to predict what will come out; it only says that we cannot use results from other, genuinely chaotic systems to tell what we should expect in the case of climate.

      Concerning climate models, a deterministic model that produces realistic short-term variability based on deterministic equations might have very different longer-term dynamics than a comparable model with a realistic amount of stochastic inputs. The stochasticity may dampen some variability of a deterministic model whose equations lead to chaotic or nearly chaotic behavior. Thus the stochastic model might need basic dynamics that would give more variability without the stochastic damping; but it might be that this happens most strongly at certain time scales, so the stochastic model might then have stronger long-term variability and thus better consistency with some observed abrupt changes. All the above is just one possibility, which I consider plausible.

      What I really want to say is that we should not consider any model to be based on real fundamentals unless it is also based on a realistic level of stochastic inputs. Compensating one weakness in model fundamentals by tuning other features is likely to lead to failure at some point. The research done in medium-term weather forecasting using stochastic models is very interesting from this point of view.

  97. Fred Moolten:

    “there is no single decay curve, but the trajectory of decline toward equilibrium concentrations can be expressed as a rough average in the range of about 100 years, with a long tail lasting hundreds of millennia. In other words, the CO2 we emit tomorrow, or refrain from emitting, is not something we can take back if we later decide we shouldn’t have put it up there. It will warm us for centuries.”

    Just as the monotheistic religions require an eternal hell, just so the “monosatanic” CAGW religion requires its evil champion CO2 to be an eternal foe. This is in defiance of C12/13 ratio data and other normal kinetic considerations.

    • Who has the outlier number on co2 residence?;

      http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7896032970b-pi

      100 years? It’s a joke and it’s unknown, no proof at all.

      • It is a lie that the IPCC says the residence time is that long. The evidence is right in the IPCC document. Read it yourself: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch7s7-3.html

        Carbon dioxide is continuously exchanged between the atmosphere and the ocean. Carbon dioxide entering the surface ocean immediately reacts with water to form bicarbonate (HCO3–) and carbonate (CO32–) ions. Carbon dioxide, HCO3– and CO32– are collectively known as dissolved inorganic carbon (DIC). The residence time of CO2 (as DIC) in the surface ocean, relative to exchange with the atmosphere and physical exchange with the intermediate layers of the ocean below, is less than a decade. In winter, cold waters at high latitudes, heavy and enriched with CO2 (as DIC) because of their high solubility, sink from the surface layer to the depths of the ocean. This localised sinking, associated with the Meridional Overturning Circulation (MOC; Box 5.1) is termed the ‘solubility pump’. Over time, it is roughly balanced by a distributed diffuse upward transport of DIC primarily into warm surface waters.

        Now what does have a long tail is the “adjustment time”, which characterizes the time it takes for the CO2 to go into deep sequestering sites. I described this behavior on my blog:
        http://theoilconundrum.blogspot.com/2011/09/missing-carbon.html

      • Webby,

        The residence time of dissolved inorganic carbon they say is a decade in the surface ocean.

        This is far from significant in any way.

        Cheers

      • So the turnover of carbon into the atmosphere during the carbon cycle is around 10 years. The turnover of carbon out of the atmosphere is also around 10 years. No wonder they call it a carbon cycle.

        The sequestering to deeper sites takes much longer. That becomes the main driver which will return the elevated CO2 to the historical steady state. Deep sequestering is a pure diffusion problem, all stochastics, and the fat tails derive from that. That’s the ridiculously obvious model — one that only becomes obvious when you work it out.

      • There are both terrestrial and marine aspects to the carbon cycle. There is both a solubility and a biological ‘pump’ in marine systems.

        The inorganic carbon refers to the solubility pump – and matters very little at all. What it means is that the volume of DIC taken into the deep ocean is about 10% of the DIC in the surface. But about the same upwells.

        God you are an idiot.

      • God you are an idiot.

        I am cool with that.

        It is an eye-opener when one considers that scientists can estimate that X moles of carbon get exchanged between the earth and the atmosphere per year and that Y moles of carbon exist in the atmosphere at any one time. Assuming effective mixing of carbon in the atmosphere, the characteristic turnover time is easily determined just by dimensional analysis: Time = Y / X [(moles)/(moles/year)]. If I trust the numbers (and I have to trust data like this) then the time scale is like 10 years. That is the relatively short residence time.

        Yet because there is another exchange dynamic between the permeable layer and the deeply sequestered layer, and the deeper layers are not effectively mixing, then one must use a different analysis technique to get at that rate. That’s where we derive the Fickian diffusion dynamics and a long square root of time build-up in the deep carbon stores. When we do a detailed mass balance on this we can get a feel for the adjustment time. This adjustment time is much longer than the residence time.

        I have a passage in my latest book where I talk about this as a general mechanism in natural sciences. I inserted an aside that one of the first people to really take technical advantage of the dynamics of diffusion was the grad student Andy Grove. For his thesis, Grove worked out the diffusion behavior for oxygen into silicon, and figured out how to accurately estimate oxide growth. This is necessary for semiconductor fab process control. With that set of equations, Grove established his credentials and co-founded a startup company called Intel, which he then turned into a commercial success by applying his formulations to process control. The moral of the story is that a little bit of knowledge is power, and if you can predict what will happen you have the edge.
        YMMV.
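
        The turnover estimate and the fat-tailed diffusive adjustment can be sketched as follows. The stock and flux figures are illustrative round numbers of my own, and the 1/(1 + sqrt(t/tau_d)) form is only a toy stand-in for a Fickian impulse response, not the detailed mass-balance model described above.

```python
import math

# Residence (turnover) time by dimensional analysis: stock / gross exchange flux.
Y = 750.0   # assumed atmospheric carbon stock, GtC
X = 80.0    # assumed gross annual exchange with surface reservoirs, GtC/yr
residence_time = Y / X
print(f"turnover (residence) time ~ {residence_time:.0f} years")

# A diffusive (Fickian) impulse response decays with a fat sqrt-of-time tail.
# Toy form: fraction remaining f(t) = 1 / (1 + sqrt(t / tau_d)), tau_d assumed.
tau_d = 50.0  # assumed diffusion time scale, years
for t in (10, 100, 1000, 10000):
    print(f"fraction of pulse remaining after {t:>5} yr: {1 / (1 + math.sqrt(t / tau_d)):.3f}")
```

        The sketch shows the two distinct numbers at issue in this thread: a turnover time of order 10 years, and an adjustment tail that still retains a few percent of a pulse after thousands of years.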

      • Chief,
        Not to beat a dead horse, but the limnology paper I sent you information on speaks very directly to previously unmeasured freshwater roles in the carbon cycle that may be quite surprisingly large.
        You may find my e-mail on this in your inbox, if you have not already noticed it.

      • Y’all still at it? I thought this was resolved as being a trick question?

      • http://www.co2science.org/articles/V12/N31/EDIT.php

        Residence time is obviously important, and nothing is settled. The graph was extracted from the book “The Deniers” by Lawrence Solomon, page 80. I’m looking for links beyond this.

        It seems clear that residence time is another “given” in AR4 and that it’s overstated for a reason. It’s a needed fiction, or unexplained assumption if you like, since it supports the mythic idea of “compounding” human CO2 when the reality might be very different. The sink flux is also poorly understood and, I and many would argue, misrepresented.

        Aside from whatever we think, where is the proof mankind’s added contribution of 5% or less of CO2 is why the CO2 count is fluxing upward, which it has done many times before? It’s a simpleminded talking point without evidence. With a ten-year residence time you could never get to the G. Schmidt fantasy that 40% of CO2 is manmade.

        Who do you think is really “lying,” webboy? Residence time is glossed over in the report (but it’s assumed higher than others have concluded) and the science is soft on claims. Tell me how you get to a huge human CO2 contribution on a residence lifetime of 10 years, which in itself may be grossly overstated? How do you know the sink can’t just be absorbing human contributions, and that excess CO2 detected wasn’t part of other natural processes yet to be understood?

        How do we get to the IPCC’s 100-year residence time claims? From the report:

        “Because CO2 does not limit photosynthesis significantly in the ocean, the biological pump does not take up and store anthropogenic carbon directly. Rather, marine biological cycling of carbon may undergo changes due to high CO2 concentrations, via feedbacks in response to a changing climate. The speed with which anthropogenic CO2 is taken up effectively by the ocean, however, depends on how quickly surface waters are transported and mixed into the intermediate and deep layers of the ocean. A considerable amount of anthropogenic CO2 can be buffered or neutralized by dissolution of CaCO3 from surface sediments in the deep sea, but this process requires many thousands of years.”

        It’s typical sleight of hand among warming advocates. Instead of just confining residence time to the atmosphere, another fantasy is introduced. Absorbed CO2 is being counted twice or more, and human CO2 is blamed for increasing natural production of CO2 as well. Unless it turns to rock, the human CO2 is counted even if it’s absorbed by plants or sea life?!

        If you are going to lose an argument over a single fact, it’s important to distract from the topic with other factors that can be neither proved nor concluded. CO2 residence time and the IPCC is a perfect example. They need a high human input factor, so they invent one. There is no reason to think total human CO2 input is anything greater than the total added contribution of less than 5% of natural, but that is a dead end for AGW advocacy. Hence they start counting non-atmospheric CO2 as well in the residency number; talk about lying, Webboy. It’s all right there in the report and the links above.

        Residence time is just one of the many basics that should have buried the AGW movement but advocacy and partisan wishful thinking against carbon interests drive right through science logic. Dr. Curry avoids this topic as well, why?

  98. I just wasted an hour reading all the comments.

    This posting has to be the biggest pissing contest I have ever read.

    Global warming per se is dying a natural death, due to the unforeseen circumstance of no global warming or catastrophic ocean rise. Thus CO2 is not a devil, and many, perhaps a majority, believe the world’s flora and fauna have been deprived of its beneficence by its scarcity for too long.

    Half an hour after sunrise, the CO2 level in a corn field drops below the level at which the corn can grow; it is starving. The question of how long before our man-made CO2 disappears should be asked of the corn, and all other elements of our food chain.

  99. What is CAGW? Citizens Against Public Waste?

  100. Gavin Cawley

    The paper I mentioned earlier in the thread, which refutes the claim made in Essenhigh (2009) that the short residence time of CO2 means that anthropogenic emissions are not the cause of the observed post-industrial increases, is now in print. The URL for the publisher’s website is

    http://pubs.acs.org/doi/abs/10.1021/ef200914u

    Gavin C. Cawley, On the atmospheric residence time of anthropogenically sourced carbon dioxide, Energy & Fuels, volume 25, number 11, pages 5503–5513, September 2011.

    If Salby does indeed publish an article showing that the observed rise is a natural phenomenon, then my paper refutes his as well. A natural cause for the observed rise is inconsistent with the annual rise in atmospheric concentration being less than annual anthropogenic emissions. That we are responsible for the observed rise is something we know with a very high degree of certainty. I hope my paper will be useful in addressing this particular error in discussions in the blogosphere.
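
    The mass-balance argument can be stated in one line of arithmetic (illustrative round figures of my own, not numbers from the paper): writing dC/dt = E_anthro + E_natural − U_natural, an observed rise smaller than anthropogenic emissions alone forces E_natural − U_natural to be negative, i.e. nature is a net sink and cannot be causing the rise.

```python
# One-line mass balance on atmospheric carbon.
# dC/dt = E_anthro + E_natural - U_natural  =>  E_natural - U_natural = dC/dt - E_anthro
E_anthro = 9.0        # assumed anthropogenic emissions, GtC/yr
observed_rise = 4.0   # assumed observed atmospheric increase, GtC/yr
net_natural = observed_rise - E_anthro  # = E_natural - U_natural
print(f"net natural flux = {net_natural:+.1f} GtC/yr (negative => nature is a net sink)")
```

    Whatever the residence time of an individual CO2 molecule, this bookkeeping holds as long as the observed rise is smaller than the anthropogenic input.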

    I would be happy to answer questions about my work over at SkS, the URL is

    http://skepticalscience.com/essenhigh_rebuttal.html